Preview: multi-process apps (get your workers here!)

I would also like to learn how to run multiple workers in one vm (for the shared volume use case specifically).

Oh shoot, sorry for misleading you @arttii. I completely missed that this was without a supervisor.

Works fine for me.

I hope dashboard metrics can be displayed separately by process.

@jbergstroem you can do this with a process supervisor within your Docker Image. We use this one for some things: GitHub - DarthSim/overmind: Process manager for Procfile-based applications and tmux

Building those images can be tricky. We have a custom process supervisor in our Postgres app, and a multistage Dockerfile for building our Go based package that manages it: postgres-ha/Dockerfile at main · fly-apps/postgres-ha · GitHub
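For anyone looking for a starting point, here's a minimal sketch of the overmind approach. The Procfile entries, base image, and overmind version are assumptions for illustration, not taken from the postgres-ha repo:

```dockerfile
# Assumed Procfile in the build context:
#   web: bundle exec puma -C config/puma.rb
#   worker: bundle exec sidekiq

FROM ruby:3.2-slim

# overmind needs tmux; its release binaries ship gzipped.
RUN apt-get update && \
    apt-get install -y --no-install-recommends tmux curl ca-certificates && \
    curl -L https://github.com/DarthSim/overmind/releases/download/v2.4.0/overmind-v2.4.0-linux-amd64.gz \
      | gunzip > /usr/local/bin/overmind && \
    chmod +x /usr/local/bin/overmind

WORKDIR /app
COPY . .

# overmind runs as PID 1 and supervises every entry in the Procfile.
CMD ["overmind", "start"]
```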

Ah; gotcha. I thought there was some kind of abstraction in fly.toml I was missing out on. Thanks for that.

Oh not yet! We have an idea for sidecars, which would solve that. No work has started but I think we know how to do it.


Is there a way to call a specific process when hitting app-name.internal from another app?

It seems that if one process has a volume and the other not, there is no obvious way to make the processes only run in certain regions. Am I missing something?

For me, it works just by calling the different ports
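To make the port approach concrete, here's a sketch of what it could look like in fly.toml (process names, commands, and ports are illustrative): each process listens on its own internal port, and other apps reach it at my-app.internal:&lt;port&gt;. One caveat, discussed further down in this thread: the .internal name resolves to the IPs of all processes, so a connection can still land on a VM that isn't listening on that port.

```toml
[processes]
  web = "bin/rails server -p 8080"
  worker = "bin/worker --http-port 9090"

# From another app on the private network:
#   web:    my-app.internal:8080
#   worker: my-app.internal:9090
```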

I see this example mentions sidekiqswarm. How are you setting the bundle config enterprise.contribsys.com username:password so that Sidekiq Enterprise can be installed and used?
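(Not answered in this thread, but a common pattern is to pass the credentials as a BuildKit secret so they never end up in an image layer. The secret id below is an arbitrary name, and BUNDLE_ENTERPRISE__CONTRIBSYS__COM is Bundler's environment-variable spelling of bundle config enterprise.contribsys.com:)

```dockerfile
# Sketch: install Sidekiq Enterprise without baking credentials into the image.
# Build with something like:
#   docker build --secret id=sidekiq_creds,env=SIDEKIQ_CREDS .
RUN --mount=type=secret,id=sidekiq_creds \
    BUNDLE_ENTERPRISE__CONTRIBSYS__COM="$(cat /run/secrets/sidekiq_creds)" \
    bundle install
```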

Loving the processes feature! Please could you add a label with the process name to the Prometheus metrics, for easy filtering by scaling group in Grafana? Many thanks.


Hi team! This is probably a very dumb question, but I have set up multiple processes inspired by the docs:

[processes]
  web = "bin/rails fly:server"
  worker = "bundle exec good_job start"
...
[[services]]
  ...
  processes = ["web"]

Suppose my VM has 1GB memory set, information I can find in the Overview. When I go to the Scale section in the GUI, I have my 2 processes, each with 1GB.
Does it mean that each one of them is allowed to use all the VM’s memory?
How can I scale my VM up/down globally? When I want to scale my deployment, I now need to specify a --group. Should I run the scale command twice, once with each group?

Each process is its own isolated VM, so in your example web would have 1GB to work with and worker would have 1GB to work with.

Yes, I would specify a group with your scale commands for memory and VM size when you have multiple processes. For fly scale count you can set all groups at once: fly scale count web=2 worker=1

Thanks for your answer. Unfortunately, my scaling commands do not seem to be taken into account (when run from a GitHub Action).
I have tried a dozen combinations without success:

---
        - run: flyctl scale count web=1 worker=1 --app=my-app
        - run: flyctl scale memory 512 --app=my-app --group=web
        - run: flyctl scale memory 256 --app=my-app --group=worker
---
        - run: flyctl scale vm shared-cpu-1x --memory=512 --app=my-app --group=web
        - run: flyctl scale vm shared-cpu-1x --memory=256 --app=my-app --group=worker
---
        - run: flyctl scale vm shared-cpu-1x --memory=512 --app=my-app --group=web
---
        - run: flyctl scale vm shared-cpu-1x --memory=512 --app=my-app --group=web
        - run: flyctl scale vm shared-cpu-1x --memory=512 --app=my-app --group=worker
---
        - run: flyctl scale memory 512 --app=my-app --group=web
        - run: flyctl scale memory 512 --app=my-app --group=worker
---

I also tried these before and after the deploy command.

No matter what, the app stays at 256MB globally, making it crash. And this is an Action that destroys and rebuilds the app from scratch.

Edit: scaling the app globally right after apps create seems to work…

May I ask why you destroy and recreate the app from scratch?

This is related to Simulating PR Review Apps via GitHub Actions where alternative approaches I tested were very unstable and even now this GitHub Action breaks often.

As mentioned in that topic, I'm entirely open to changing that approach to save resources, but I already spent a lot of time getting to this result. So if you have a solution that's as stable as the current one and consumes fewer resources, I'll happily go for it.

I think this might be a UI issue at the moment. If the UI is at fault, we have two other ways to verify: our API and your VM.

For the API case, if you hop into GraphQL Playground and enter the query:

query {
  app(name: "your-app-name") {
    processGroups {
      name
      vmSize {
        memoryMb
      }
    }
  }
}

I think (/hope!) you’ll see the memoryMb set to what you’re expecting.

More importantly, we can check your VM by running flyctl ssh console -a your-app and running free -m. If free isn’t a command on your VM, you can also tinker with cat /proc/meminfo. There are other ways to check from your VM if those don’t work for you.


Processes don’t work quite like we want with DNS. It’s part of why we haven’t “officially” launched this feature yet! Right now, all processes in the same app show on <app>.internal.

This just caused problems for us. We have an app that has worker and web processes and another app that needs to connect to only the web processes. When we deployed the multi-process configuration, we started getting ECONNREFUSED because the DNS query top3.nearest.of.my-app.internal was returning a mix of web and worker IPs.

It would be great to be able to specify the process type in the DNS query: top3.nearest.of.web.process.my-app.internal.
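For anyone debugging the same thing, you can see what the name resolves to from inside a VM with dig (the app name is a placeholder; I believe the vms TXT record exists, but treat it as an assumption):

```shell
# Current behaviour: one AAAA record per instance, across all process groups
dig +short aaaa my-app.internal

# Instance list with regions, via the TXT record
dig +short txt vms.my-app.internal
```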

@jamesarosen wow this is impeccable timing. We were literally just talking about this issue. I figured it would crop up, but almost everyone using [processes] is not using internal networking. So it’s good you posted because now we have a “yes, this is an actual problem” fact. :slight_smile:


+1 on this. I was just trying to build a dashboard when I realized that this wasn’t an option. :frowning:
