Coming to Fly as a Rails developer used to Heroku, there are a few things I feel like I’m currently missing from Fly that would be really nice to have:
- Procfile deployments
- One-off dynos
- Pipelines
## Procfile deployments
The first is basically Heroku’s deployment strategy with Procfiles. In a Procfile you define your web process, which gets instrumented with HTTP health checks, alongside background workers like sidekiq and a job drain (which don’t get health checks). They all automatically share the same container image and the same environment variables; they essentially vary only in their ENTRYPOINT/CMD and health checks.
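For the unfamiliar, a Procfile for the kind of app I’m describing looks something like this (the puma and job-drain commands are just illustrative; the sidekiq one matches what I use below):

```
web: bundle exec puma -C config/puma.rb
sidekiq: bundle exec sidekiq -c 16
jobdrain: bundle exec rails runner lib/job_drain.rb
```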
But, importantly, every time I make a source code change (which builds a new image), or I change a single environment variable, everything in my Procfile gets automatically deployed in tandem.
On Fly, by comparison, I either have to put all the separate processes inside the same app, or make completely separate, independent apps (per the guide).
For a serious app, I wouldn’t consider the former solution of a single shared image/app using a supervisor process. It bloats the image, requiring more resources to run every instance of the app; it makes horizontal scaling coarser-grained (you can’t scale “web” processes independently of the others); and it risks latency spikes on the web process if the other processes get bogged down with their job processing.
Making completely independent apps is preferable to me, but it’s a pain and not an atomic deployment process. Here’s an example of what I’m doing right now with a Rails app named “foo” with three different processes: the web worker, a sidekiq worker, and a job drain.
- I create 3 separate apps: `foo-staging-web`, `foo-staging-sidekiq`, `foo-staging-jobdrain`
- `foo-staging-web`’s TOML file has a `[[services]]` section with exposed ports, health checks, etc., and I deploy it builder-style with `fly deploy --remote-only`
- The TOML files for the other two apps are much simpler; they basically just set the ENTRYPOINT/CMD via the `SERVER_COMMAND` env var:

  ```toml
  app = "foo-staging-sidekiq"
  kill_signal = "SIGINT"
  kill_timeout = 5
  processes = []

  [env]
    SERVER_COMMAND = "bundle exec sidekiq -c 16"
  ```
- To create and deploy the sidekiq and job drain apps after the web app has deployed, I do this for each (a scripted version is sketched right after this list):

  ```
  $ fly apps create foo-staging-sidekiq
  $ fly secrets import -a foo-staging-sidekiq < .env.staging
  $ DOCKER_IMAGE=$(fly image show -a foo-staging-web --json | jq -r '(.Registry) + "/" + (.Repository) + ":" + (.Tag)')
  $ fly deploy --image $DOCKER_IMAGE -c deploy/staging-sidekiq.toml
  ```

  This deploys using the image already built for the web service.
- Note I have to keep track of environment variables externally to Fly (in my `.env.staging` file), because if I make a change to one app, I have to apply that change separately to each app with `fly secrets`. There is no lateral sharing of secrets between apps, nor can you retrieve them again through the CLI (only by SSHing into a running app and dumping the runtime env).
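Here’s the scripted version mentioned above: a minimal sketch that redeploys both worker apps off the web app’s current image, assuming the `deploy/staging-<name>.toml` files and `.env.staging` layout from the list.

```bash
#!/usr/bin/env bash
# Sketch: redeploy foo-staging-sidekiq and foo-staging-jobdrain using the
# image already built for foo-staging-web. Assumes the apps already exist.
set -euo pipefail

DOCKER_IMAGE=$(fly image show -a foo-staging-web --json |
  jq -r '(.Registry) + "/" + (.Repository) + ":" + (.Tag)')

for name in sidekiq jobdrain; do
  app="foo-staging-$name"
  fly secrets import -a "$app" < .env.staging   # keep env vars in sync
  fly deploy --image "$DOCKER_IMAGE" -c "deploy/staging-$name.toml"
done
```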
So as you can see, the deploy process is a bit more involved than the Heroku equivalent, because Fly apps are totally self-contained units. It would be nice if Fly had an abstraction for linking them together, similar to Heroku’s Procfile.
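To make that wish concrete, here’s the kind of thing I’d love to be able to write in a single fly.toml (purely hypothetical syntax I’m imagining, not something Fly supports today):

```toml
# Hypothetical: one app, one image, several process groups
app = "foo-staging"

[processes]
  web = "bundle exec puma -C config/puma.rb"
  sidekiq = "bundle exec sidekiq -c 16"
  jobdrain = "bundle exec rails runner lib/job_drain.rb"

# Only the "web" process group gets the HTTP service and health checks
[[services]]
  processes = ["web"]
  internal_port = 8080
```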
## One-off apps/machines/etc
I’d really like to mimic the Heroku experience of getting a live Rails console for an existing Rails app via something akin to `heroku run rails console` or `heroku run bash`.
On Heroku, this spins up a brand-new dyno (micro-VM), attaches your console to it, and deletes the whole thing when you disconnect (i.e. it’s ephemeral). This is also useful for long-running one-off jobs, say if you’re backfilling data into a new table: you can simply run a `rails runner` script, and once it completes, the micro-VM evaporates completely.
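For reference, the Heroku flavor of both flows (`run:detached` keeps the one-off dyno running after you disconnect; the backfill script path is just an example):

```
$ heroku run rails console                          # ephemeral dyno, gone on exit
$ heroku run:detached rails runner lib/backfill.rb  # long-running one-off job
```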
I will say `flyctl ssh console` works pretty well here for attaching to a running micro-VM, and I can run `rails console` from there, but it feels a little unclean to be doing that on a micro-VM that is serving web traffic (what if I fat-finger something and start impacting the web server’s resources?), and I really wouldn’t want to run a resource-intensive task like a table backfill there due to the potential impact.
I’ve seen the new Machines API mentioned on this forum as well as on HN as a potential solution to this, but after reading the blog post announcement, I’m not connecting the dots on how it does so (perhaps it will reveal itself to me in good time).
If I’m not mistaken, I’d still have to do something like this to start up a one-off micro-VM with the same environment as my Rails app, and clean it up myself afterward:
```
$ fly apps create --machines --generate-name
New app created: delicate-snow-7276
$ fly secrets import -a delicate-snow-7276 < .env.staging
$ DOCKER_IMAGE=$(fly image show -a foo-staging-web --json | jq -r '(.Registry) + "/" + (.Repository) + ":" + (.Tag)')
$ fly deploy --image $DOCKER_IMAGE -a delicate-snow-7276 --env SERVER_COMMAND=bash
==> Verifying app config
--> Verified app config
==> Building image
Searching for image 'registry.fly.io/foo-staging-web:deployment-1659746829' remotely...
  image found: img_3mno4w6g0wmpk18q
Deploying with rolling strategy
  Taking lease out on VM 3287359b6d3685
  Updating VM 3287359b6d3685
  Waiting for update to finish on 3287359b6d3685
$ fly ssh console -a delicate-snow-7276
# (do stuff)
$ fly apps destroy delicate-snow-7276 -y
```
A lot of that work is repeated from the Procfile section above, but it would be nice to also have the “create an ephemeral machine” part made easier.
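In the meantime, the whole dance can at least be bundled into a script. A rough sketch, built entirely from the commands above (scraping the app name out of flyctl’s “New app created” output is admittedly brittle):

```bash
#!/usr/bin/env bash
# Sketch: run a one-off command on an ephemeral app that reuses
# foo-staging-web's image and secrets, then tear everything down.
set -euo pipefail

CMD="${1:-bash}"

# Brittle: scrape the generated name from "New app created: <name>"
APP=$(fly apps create --machines --generate-name | awk '/New app created/ {print $NF}')

fly secrets import -a "$APP" < .env.staging
DOCKER_IMAGE=$(fly image show -a foo-staging-web --json |
  jq -r '(.Registry) + "/" + (.Repository) + ":" + (.Tag)')
fly deploy --image "$DOCKER_IMAGE" -a "$APP" --env SERVER_COMMAND="$CMD"

fly ssh console -a "$APP"

# Once the console session ends, destroy the whole app
fly apps destroy "$APP" -y
```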
## Pipelines
The last nice-to-have would be something like Heroku Pipelines, which to me is basically a way to have separate environments, with different env vars, that share the same container image.
So I can have a “staging” environment, a “qa” environment, and a “prod” environment, and promote a built container image laterally between the different environments (i.e. stages of the pipeline).
This can mostly be done right now by namespacing app names:

```
foo-staging-web
foo-staging-sidekiq
foo-staging-jobdrain
foo-qa-web
foo-qa-sidekiq
foo-qa-jobdrain
foo-prod-web
foo-prod-sidekiq
foo-prod-jobdrain
```
and keeping namespaced master secrets files:

```
.env.staging
.env.qa
.env.prod
```
Then I use the `fly deploy --image $DOCKER_IMAGE ...` and `fly secrets import ...` strategies from above, but as you can tell, the effort is multiplied this way compared to `heroku pipelines:promote -r staging`.
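That said, the promotion itself can be scripted. A sketch of a poor man’s `pipelines:promote`, assuming the app naming above plus per-environment config files like `deploy/qa-web.toml`:

```bash
#!/usr/bin/env bash
# Sketch: "promote" the image currently running in staging to qa.
set -euo pipefail

FROM=staging
TO=qa

DOCKER_IMAGE=$(fly image show -a "foo-$FROM-web" --json |
  jq -r '(.Registry) + "/" + (.Repository) + ":" + (.Tag)')

for name in web sidekiq jobdrain; do
  fly secrets import -a "foo-$TO-$name" < ".env.$TO"
  fly deploy --image "$DOCKER_IMAGE" -c "deploy/$TO-$name.toml"
done
```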
## Conclusion
This is where I’m at now after having put a few solid days into using Fly. Please do let me know if I’ve made errors in my conclusions, and forgive me if some of this stuff is already mentioned elsewhere, but I wanted to provide a coherent narrative from my own POV as well.
I expect some of this pain to go away when I implement GitHub Actions to handle deployments and promotions, but hopefully Fly can continue to improve its ergonomics and become a happy home for Rails developers.