I have an app that’s attached to a postgres cluster as a primary database. This connection works fine and I’ve had no issues with it. However, I’ve recently tried to add a second database that I want to serve database-backed jobs (SolidQueue, in this case).
I am getting an error when attempting production deployment:
✖ Failed: machine 7819696f470118 exited with non-zero status of 1
2026-01-10T23:34:59Z 2026-01-10T23:34:59.786521825 [01KEN46NN10Q77XNQDZKHBHR1P:main] Running Firecracker v1.12.1
2026-01-10T23:34:59Z 2026-01-10T23:34:59.786729711 [01KEN46NN10Q77XNQDZKHBHR1P:main] Listening on API socket ("/fc.sock").
2026-01-10T23:35:00Z INFO Starting init (commit: 6f59af0a)...
2026-01-10T23:35:00Z INFO Preparing to run: `/rails/bin/docker-entrypoint bin/rails db:migrate db:create:queue db:migrate:queue` as 1000
2026-01-10T23:35:00Z INFO [fly api proxy] listening at /.fly/api
2026-01-10T23:35:01Z Machine started in 1.282s
2026-01-10T23:35:01Z 2026/01/10 23:35:01 INFO SSH listening listen_address=[fdaa:1a:108b:a7b:4a5:bb3c:ff1d:2]:22
2026-01-10T23:35:06Z no implicit conversion of nil into String
2026-01-10T23:35:06Z Couldn't create '' database. Please check your configuration.
As you can see, it’s trying to create a database with a nil name. This is a rails 8 app, and I’m using database.yml to define a multi-db configuration:
production:
  primary:
    <<: *default
    url: <%= ENV["DATABASE_URL"] %>
    migrations_paths: db/migrate
    role: primary
  queue:
    <<: *default
    url: <%= ENV["SOLIDQUEUE_DATABASE_URL"] %>
    migrations_paths: db/queue_migrate
    role: queue
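As I understand it, with a url-style entry Rails derives the database name from the path component of the URL, so an unset env var leaves the name blank. A hedged sketch with hypothetical values:

```ruby
require "uri"

# Assumed URL shape: the database name is the URI path, minus the slash.
url = "postgres://app:secret@pg.internal:5432/myapp_queue"
db_name = URI.parse(url).path.delete_prefix("/")
# db_name => "myapp_queue"

# If the env var is unset, ERB renders an empty string and no name survives:
empty = URI.parse(ENV.fetch("SOLIDQUEUE_DATABASE_URL", "")).path.delete_prefix("/")
# empty => ""
```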
My review apps were working and connecting fine with another instance that I have. The only difference I can see is that the review apps are deployed with the fly-pr-review-apps action (GitHub - superfly/fly-pr-review-apps: GitHub Action for PR Review Apps on Fly.io):
- name: Deploy PR app to Fly.io
  id: deploy
  uses: superfly/fly-pr-review-apps@1.5.0
  with:
    config: 'fly.review.toml'
    name: ${{ env.FLY_APP }}
    path: '.'
    postgres: ${{ env.REVIEW_DB_NAME }}
    launch_options: '--dockerfile ./Dockerfile -y'
    secrets: |
      RAILS_ENV=staging DATABASE_URL=${{ secrets.DATABASE_URL_REVIEW }} SOLIDQUEUE_DATABASE_URL=${{ secrets.SOLIDQUEUE_DATABASE_URL_REVIEW }} RAILS_MASTER_KEY=${{ secrets.RAILS_STAGING_KEY }}
The postgres attachment here is only relevant for the primary db; there was no attachment for the review SolidQueue db, but it still seemed to work with the connection string anyway (as I would expect).
Production, however, is not the same and is failing with the aforementioned error. The production deployment is being carried out by:
- uses: actions/checkout@v5
- uses: superfly/flyctl-actions/setup-flyctl@master
- run: flyctl deploy --remote-only
I modified this to provide a build-arg, in the hope that the primary database was being rescued by the default DATABASE_URL setup that happens under the hood with postgres attach and with the review deployment action.
I connected to a machine within the application with fly console and verified that the required secrets were present and defined as they should be, but I’m guessing that these are runtime secrets?
The release command I’m running is
release_command = "bin/rails db:migrate db:create:queue db:migrate:queue"
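In the meantime, a fail-fast guard ahead of the release command would at least surface the missing variable by name rather than the cryptic nil error. A minimal sketch, assuming the same env var names as in my database.yml:

```ruby
# Hypothetical pre-flight check for the release command: report which
# env vars are absent instead of failing with "Couldn't create '' database".
REQUIRED_VARS = %w[DATABASE_URL SOLIDQUEUE_DATABASE_URL].freeze

def missing_vars(env = ENV)
  REQUIRED_VARS.select { |key| env[key].to_s.empty? }
end

# Example with a hash standing in for ENV:
missing = missing_vars({ "DATABASE_URL" => "postgres://app@pg.internal/myapp" })
# missing => ["SOLIDQUEUE_DATABASE_URL"]
```

This could run via `bin/rails runner` (or a plain ruby script) at the start of the release command, aborting unless `missing_vars.empty?`.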
There’s a small change to the release command for the review app, as I’ll want to be seeding data there:
Running ***-pr-127 release_command: bin/rails db:migrate:primary db:seed db:migrate:queue
> Created release_command machine d8d31e3c550998
> Waiting for d8d31e3c550998 to have state: stopped
Starting machine
> Machine d8d31e3c550998 has state: stopped
> Waiting for d8d31e3c550998 to have state: started
> Waiting for d8d31e3c550998 to have state: started
> Machine d8d31e3c550998 has state: started
> Waiting for d8d31e3c550998 to have state: destroyed
> Machine d8d31e3c550998 has state: destroyed
> Waiting for d8d31e3c550998 to get exit event
✔ release_command d8d31e3c550998 completed successfully
Ultimately I’d remove the create from the production release command after a successful deployment, and it would then just be running migrations on the two databases.
That being said, I’m at a loss as to what I’ve missed here. I would have imagined I’m probably missing something in the land of Docker that is being covered by the fly review deployment action, but I can’t spot anything from the entrypoint.
I’m fully convinced that I’ve just been looking at this for so long that I’ve missed something massively obvious, but I would appreciate some help if anybody can spot what I am doing wrong!