Feedback/requests from a Rails perspective: apps grouped with a shared deployment story, one-off apps/machines, and pipelines

Coming to Fly as a Rails developer used to Heroku, there are a few things I feel like I’m currently missing from Fly that would be really nice to have:

  1. Procfile deployments
  2. One-off dynos
  3. Pipelines

Procfile deployments

The first is basically Heroku’s deployment strategy with Procfiles. With a Procfile, you define your web process, which gets instrumented with HTTP health checks, plus background workers like sidekiq and a job drain (which don’t get those health checks). They all automatically share the same container image and the same environment variables; they essentially vary only in their ENTRYPOINT/CMD and health checks.

But, importantly, every time I make a source code change (which builds a new image), or I change a single environment variable, everything in my Procfile gets automatically deployed in tandem.
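For reference, a Procfile for the three-process setup described above might look like this (the commands are illustrative, not from my actual app):

```
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq -c 16
jobdrain: bundle exec rails runner lib/job_drain.rb
```

Heroku builds one image from the repo and runs each line as its own dyno type, so each process type scales independently while sharing the image and config.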

On Fly, by comparison, I either have to put all the separate processes inside the same app, or make completely separate, independent apps (per the guide).

For a serious app, I wouldn’t consider the former solution of a single shared image/app using a supervisor process. It bloats the image, requiring more resources to run every instance of the app, makes horizontal scaling coarser-grained (you can’t just scale “web” processes independently of the other processes), and it seems likelier to risk latency spikes on the web process if the other processes get bogged down with their job processing.

Making completely independent apps is preferable to me, but it’s a pain and not an atomic deployment process. This is an example of what I’m doing right now with a Rails app named “foo” with 3 different processes - the web worker, a sidekiq worker and a job drain:

  • I create 3 separate apps: foo-staging-web, foo-staging-sidekiq and foo-staging-jobdrain.

  • foo-staging-web’s TOML file has a [[services]] section with exposed ports, health checks, etc., and I deploy it builder-style with fly deploy --remote-only

  • The TOML files for the other two apps are much simpler; they basically just set the ENTRYPOINT/CMD via the SERVER_COMMAND env var:

    app = "foo-staging-sidekiq"
    kill_signal = "SIGINT"
    kill_timeout = 5
    processes = []

    [env]
      SERVER_COMMAND = "bundle exec sidekiq -c 16"
  • To create and deploy the sidekiq and job drain apps after the web app has deployed, I do this for each:

    $ fly apps create foo-staging-sidekiq
    $ fly secrets import -a foo-staging-sidekiq < .env.staging
    $ DOCKER_IMAGE=$(fly image show -a foo-staging-web --json | jq -r '(.Registry) + "/" + (.Repository) + ":" + (.Tag)')
    $ fly deploy --image $DOCKER_IMAGE -c deploy/staging-sidekiq.toml

    This deploys using the image already built for the web service.

  • Note I have to keep track of environment variables externally to Fly (in my .env.staging file), because if I make a change to one app, I have to apply that change separately to each app with fly secrets. There is no lateral sharing of secrets between apps, nor can you retrieve them again through the CLI (only by SSHing into a running app and dumping the runtime env).

So, as you can see, the deploy process is a bit more involved than the Heroku equivalent, since Fly apps are totally self-contained units. It would be nice if Fly had an abstraction that links these apps together the way Heroku’s Procfile does.
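For what it’s worth, the per-app steps can be scripted. This is a sketch of my own scripting (not a Fly feature), following my foo-staging-* naming scheme; the hypothetical sibling_deploy_cmds helper just prints the commands so they can be reviewed before running:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Print the commands needed to deploy a sibling app (sidekiq, jobdrain, ...)
# from the image already built for the web app.
sibling_deploy_cmds() {
  local base="$1" proc="$2"   # e.g. "foo-staging" "sidekiq"
  local app="${base}-${proc}"
  local env="${base#*-}"      # "staging" from "foo-staging"
  cat <<EOF
fly apps create ${app}
fly secrets import -a ${app} < .env.${env}
DOCKER_IMAGE=\$(fly image show -a ${base}-web --json | jq -r '(.Registry) + "/" + (.Repository) + ":" + (.Tag)')
fly deploy --image \$DOCKER_IMAGE -c deploy/${env}-${proc}.toml
EOF
}

# Emit the four commands for the sidekiq sibling; they can be eyeballed
# or piped to `sh` instead of typed out each time.
sibling_deploy_cmds foo-staging sidekiq
```

The same call with jobdrain covers the third app. This only scripts the steps above; it doesn’t make the deploy atomic.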

One-off apps/machines/etc

I’d really like to mimic the Heroku experience of getting a live Rails console for an existing Rails app via something akin to heroku run rails console or heroku run bash.

On Heroku, this spins up a brand new dyno (micro-VM), attaches your console to it, and deletes the whole thing when you disconnect (i.e. it’s ephemeral). This is also useful for long-running one-off jobs, say if you’re backfilling data into a new table; you can simply run a rails runner script, and once it completes, the micro-VM evaporates completely.

I will say flyctl ssh console works pretty well here to attach to a running microVM, and I can run rails console from there, but it feels a little unclean to be doing that on a microVM that is serving web traffic (what if I fat finger something and start impacting the web server resources?), and I really wouldn’t want to run a resource-intensive task like a table backfill job on there due to potential impact.

I’ve seen the new Machines API mentioned on this forum as well as on HN as a potential solution to this, but after reading the blog post announcement, I’m not connecting the dots on how it does so (perhaps it will reveal itself to me in good time).

If I’m not mistaken, I’d still have to do something like this to start up a one-off microVM with the same environment as my Rails app, and clean it up myself afterward:

$ fly apps create --machines --generate-name
New app created: delicate-snow-7276
$ fly secrets import -a delicate-snow-7276 < .env.staging
$ DOCKER_IMAGE=$(fly image show -a foo-staging-web --json | jq -r '(.Registry) + "/" + (.Repository) + ":" + (.Tag)')
$ fly deploy --image $DOCKER_IMAGE -a delicate-snow-7276 --env SERVER_COMMAND=bash
==> Verifying app config
--> Verified app config
==> Building image
Searching for image '' remotely...
image found: img_3mno4w6g0wmpk18q
Deploying with rolling strategy
Taking lease out on VM 3287359b6d3685
Updating VM 3287359b6d3685
Waiting for update to finish on 3287359b6d3685
$ fly ssh console -a delicate-snow-7276
# (do stuff)
$ fly apps destroy delicate-snow-7276 -y

A lot of that work is repeated from the Procfile section above, but it would be nice to also have the “create an ephemeral machine” parts made easier.
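For now, the manual steps above can at least be wrapped in a couple of shell functions. Another sketch of my own scripting (not a Fly feature), with .env.staging and the SERVER_COMMAND=bash trick hard-coded from the transcript:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Pull the generated app name out of `fly apps create --generate-name` output,
# e.g. "New app created: delicate-snow-7276" -> "delicate-snow-7276".
parse_app_name() {
  awk '/New app created:/ { print $NF }'
}

# Create a throwaway machines app reusing the source app's image, attach a
# shell, then destroy the app on exit (even after Ctrl-C or a failed step).
ephemeral_console() {
  local src_app="$1"   # e.g. foo-staging-web
  local app image
  app=$(fly apps create --machines --generate-name | parse_app_name)
  # Double quotes: expand $app now, so the trap still works once the
  # function's local variable has gone out of scope.
  trap "fly apps destroy $app -y" EXIT
  fly secrets import -a "$app" < .env.staging
  image=$(fly image show -a "$src_app" --json |
    jq -r '(.Registry) + "/" + (.Repository) + ":" + (.Tag)')
  fly deploy --image "$image" -a "$app" --env SERVER_COMMAND=bash
  fly ssh console -a "$app"
}

# Usage: ephemeral_console foo-staging-web
```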


Pipelines

The last nice-to-have would be something like Heroku Pipelines. To me, that’s basically a way to have separate environments, with different env vars, that share the same container image.

So I can have a “staging” environment, a “qa” environment, and a “prod” environment, and promote a built container image laterally between the different environments (i.e. stages of the pipeline).

This can mostly be done right now by namespacing app names:

    foo-staging-web
    foo-qa-web
    foo-prod-web

and keeping namespaced master secrets files:

    .env.staging
    .env.qa
    .env.prod

Then I use the fly deploy --image $DOCKER_IMAGE ... and fly secrets import ... strategies from above, but as you can tell, the effort is multiplied this way compared to a single heroku pipelines:promote -r staging.
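With that naming in place, a rough heroku pipelines:promote analogue is scriptable too. Yet another sketch of my own (stage_app and promote are hypothetical helpers, and this only handles the web app; the sidekiq/jobdrain siblings would need the same treatment):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Map (app, env) onto my namespaced app names, e.g. (foo, qa) -> foo-qa-web.
stage_app() { printf '%s-%s-web\n' "$1" "$2"; }

# Promote whatever image is running in env $2 to env $3, re-importing the
# target environment's secrets file first.
promote() {
  local base="$1" from="$2" to="$3"   # e.g. promote foo staging qa
  local image
  image=$(fly image show -a "$(stage_app "$base" "$from")" --json |
    jq -r '(.Registry) + "/" + (.Repository) + ":" + (.Tag)')
  fly secrets import -a "$(stage_app "$base" "$to")" < ".env.${to}"
  fly deploy --image "$image" -c "deploy/${to}-web.toml"
}

# Usage: promote foo staging qa
```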


This is where I’m at now after having put a few solid days into using Fly. Please do let me know if I’ve made errors in my conclusions, and forgive me if some of this stuff is already mentioned elsewhere, but I wanted to provide a coherent narrative from my own POV as well.

I expect some of this pain to go away when I implement GitHub Actions to handle deployments and promotions, but hopefully Fly can continue to improve its ergonomics and become a happy home for Rails developers.


I started working at Fly a few weeks ago with the initial goal of making Fly as easy as (hopefully easier than) Heroku :slight_smile:

Yeah, I love this about Heroku. Fly does it a bit differently with the processes directive in the fly.toml file, and it’s kind of buried in the documentation. I recently documented it for Sidekiq at Sidekiq background workers · Fly Docs and App Configuration (fly.toml) · Fly Docs. There are a few bugs that were just fixed with respect to multi-region scaling, so I’m going to update those docs.
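For anyone skimming this thread, the rough shape in fly.toml is below; the process names and commands are illustrative, so check the linked docs for the authoritative syntax:

```toml
[processes]
  web = "bundle exec puma -C config/puma.rb"
  worker = "bundle exec sidekiq -c 16"

# Attach the HTTP service (ports, health checks) only to the web process;
# worker VMs run without it.
[[services]]
  processes = ["web"]
  internal_port = 8080
  protocol = "tcp"
```

Each named process gets its own set of microVMs, all sharing the same image and environment.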

Me too! I wrote some one-liners at Running tasks & consoles · Fly Docs, but I agree, it doesn’t feel as clean as running a console from a machine that’s not serving up traffic. I’m pushing for something like fly exec rails console internally, but I’m not sure yet where that will lead. I should probably open an issue for it on GitHub and get more community responses to the proposal.

Machines is new, and there are a LOT more use cases it’s going to open up for everybody, including streamlining the ephemeral machine workflow.

I haven’t gotten this far yet, but I’m aware of Heroku’s Pipelines and want to write a guide on multiple environments on Fly. If this workflow is working pretty well for you, I may use it as the basis for those docs.

Thank you for doing this! Your feedback is really helpful and will make Fly much better for future Rails developers. I never blame users, so if there’s anybody that needs to be forgiven, it’s me, for not having coherent docs for you (and others) to follow.

It will! Next week we’ll have two people working full-time on making Rails work great on Fly with an awesome dev UX. You can expect a “more Heroku-like” experience over the next few months :slight_smile:


Yes, but processes also have an issue where they do not work with autoscaling.


Ah, so I did miss something then. That’s funny: I actually remember scanning the “get your processes here” post, but IIRC, based on the “processes” name and the old post date, I figured it was some kind of sugar for a supervisor process, which I didn’t want. Now, actually reading more closely, I see it really does spin up different microVMs. Thanks!

One question about this: can I write something like this in my fly.toml, and will it work?

  processes = ["jobdrain"]
  strategy = "rolling"

I need to use a rolling deploy strategy for my job drain because it uses a Postgres advisory lock and can’t handle n>1 instances running (so canary deploys fail). Canary for the other stuff is fine, though.

Ooh good to know

+100 to improvements for multi environment deployments @abe

We’re deploying our applications across 3 environments in exactly the same way with namespace prefixes companyname-environment-application-name.

I feel like a possible quick win here would be the ability to deploy a docker image from another organization so that we only have to build the image once across all environments.

@brad would it be possible to use subdomains for namespace prefixing? e.g.

    application-name.environment.companyname.dev

Not sure on sub-sub-domains (is that even a word?). Would a naming scheme like #{app}-#{company}-#{env} be sufficient? I know that would work with Fly. FWIW we had to do the same thing at my last company because SSL certs for sub-sub-domains become a pain.