Application deploys successfully but seeing this in deploy messages: WARNING The app is not listening on the expected address

My application deploys successfully. Tonight I'm trying to add service and machine checks to my config files, but I'm having no luck - they always fail. I noticed the warning below in the deploy output, which I guess explains the check failures:

WARNING The app is not listening on the expected address and will not be reachable by fly-proxy.
You can fix this by configuring your app to listen on the following addresses:
  - 0.0.0.0:3000
Found these processes inside the machine with open listening sockets:
  PROCESS        | ADDRESSES                             
-----------------*---------------------------------------
  /.fly/hallpass | [fdaa:9:61c2:a7b:348:224e:7b79:2]:22

But I am deploying a rails app with the following:

[processes]
  app = "bundle exec rails s -b 0.0.0.0 -p 3000"
  worker = "bundle exec sidekiq"

And my logs show that the application is listening on 0.0.0.0:3000

2025-07-11 00:18:12.684  * Listening on http://0.0.0.0:3000
2025-07-11 00:17:53.258  * Listening on http://0.0.0.0:3000
2025-07-11 00:17:53.258  * Listening on http://0.0.0.0:3000
2025-07-11 00:17:39.739  * Listening on http://0.0.0.0:3000
2025-07-11 00:11:57.654  * Listening on http://0.0.0.0:3000
2025-07-11 00:11:46.058  * Listening on http://0.0.0.0:3000
2025-07-11 00:11:24.712  * Listening on http://0.0.0.0:3000
2025-07-11 00:07:26.487  * Listening on http://0.0.0.0:3000
2025-07-11 00:04:23.141  * Listening on http://0.0.0.0:3000
2025-07-10 23:58:03.665  * Listening on http://0.0.0.0:3000
2025-07-10 23:52:35.197  * Listening on http://0.0.0.0:3000
2025-07-10 23:48:54.792  * Listening on http://0.0.0.0:3000
2025-07-10 23:41:01.601  * Listening on http://0.0.0.0:3000
2025-07-10 23:39:55.128  * Listening on http://0.0.0.0:3000
2025-07-10 23:36:16.322  * Listening on http://0.0.0.0:3000

Here’s my TOML file:

app = "planapp"
primary_region = "mia"
console_command = "bin/rails console"

[build]
  dockerfile = "Dockerfile.web"
  build-target = "deploy"

[build.args]
  APP_URL = "https://cims2.floridacims.org"
  RAILS_ENV = "production"
  RACK_ENV = "production"
  APPUID = "1000"
  APPGID = "1000"

[deploy]
  release_command = "./bin/rails db:prepare"

[http_service]
  internal_port = 3000
  force_https = true
  processes = ["app"]
  auto_stop_machines = "stop"
  auto_start_machines = true
  min_machines_running = 2

[[statics]]
  guest_path = "/rails/public"
  url_prefix = "/"

[[vm]]
  size = "shared-cpu-2x"
  memory = '2gb'
  processes = ['app']

[[vm]]
  size = "shared-cpu-2x"
  memory = '2gb'
  processes = ['worker']

[processes]
  app = "bundle exec rails s -b 0.0.0.0 -p 3000"
  worker = "bundle exec sidekiq"

And here are the checks I was trying to add:

[[http_service.checks]]
  grace_period = "10s"
  interval = "30s"
  method = "GET"
  timeout = "5s"
  path = "/upcheck"

[[http_service.machine_checks]]
  image = "curlimages/curl"
  entrypoint = ["/bin/sh", "-c"]
  command = ["curl http://[$FLY_TEST_MACHINE_IP]:3000/upcheck | grep 'production application is up and running'"]
  kill_signal = "SIGKILL"
  kill_timeout = "5s"

Note I get the warning whether I have the checks in the config or not.

Appreciate any help.


I have the same issue for an Elixir Phoenix app.


Hey there, I think this might be because you have two process groups app and worker, and your fly.toml doesn’t specify that the http_service section applies only to the app Machines.

So the worker Machines are being started and our proxy is expecting them to be listening on port 3000 but they’re not because they’re worker nodes!

If you have a look at these docs (second code block), you’ll see what I think is needed, which is:

[http_service]
  ...
  processes = ["app"]
  ...
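
For context, here's a minimal sketch of how that scoping fits together - it reuses the same process names and port as the config above, so nothing new is being introduced, just the relationship between the two sections:

[processes]
  app = "bundle exec rails s -b 0.0.0.0 -p 3000"
  worker = "bundle exec sidekiq"

[http_service]
  internal_port = 3000
  processes = ["app"]   # only app Machines are routed to; worker Machines never need to listen on 3000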

I thought about that, and I think I made the change you suggested last night to check, with no luck, but I'll give it another try today just to make sure. Thanks for the response!

So, I managed to get the checks running successfully, with your advice and by also making some changes in the Rails config for the health check URL. Everything appears to be running well with the TOML file below.

However, that warning is still in the deployment logs. I don't know - I guess I'll just ignore it.

Thanks for your help!

[build]
  dockerfile = "Dockerfile.web"
  build-target = "deploy"

[build.args]
  RAILS_ENV = "staging"
  RACK_ENV = "staging"
  APPUID = "1000"
  APPGID = "1000"

[deploy]
  processes = ["app"]
  release_command = "./bin/rails db:prepare"

[http_service]
  processes = ["app"]
  internal_port = 3000
  auto_stop_machines = "stop"
  auto_start_machines = true
  min_machines_running = 2

[[http_service.checks]]
  processes = ['app']
  grace_period = "10s"
  interval = "30s"
  protocol = "http"
  method = "GET"
  timeout = "5s"
  path = "/up"

[[http_service.machine_checks]]
  processes = ['app']
  grace_period = "30s"
  image = "curlimages/curl"
  entrypoint = ["/bin/sh", "-c"]
  command = ["curl http://[$FLY_TEST_MACHINE_IP]/up | grep 'background-color: green'"]
  kill_signal = "SIGKILL"
  kill_timeout = "5s"

[[vm]]
  processes = ["app"]
  size = "shared-cpu-2x"
  memory = '2gb'

[[vm]]
  processes = ["worker"]
  size = "shared-cpu-2x"
  memory = '2gb'

[[statics]]
  guest_path = "/rails/public"
  url_prefix = "/"

[processes]
  app = "bundle exec rails s -b 0.0.0.0 -p 3000"
  worker = "bundle exec sidekiq"

Ok, I'm looking at your Machines in our db, and 4 of the app ones have the following CMD (as above in your fly.toml):

["bundle", "exec", "rails", "s", "-e", "production", "-b", "0.0.0.0", "-p", "3000"]

There are also 6 app Machines that have:

["bundle", "exec", "rails", "s"]

So that seems like the problem. Did you leave some old Machines hanging around after a weird deploy possibly?

If you're using bluegreen and it fails halfway through, you can end up with up to twice as many Machines as you had before the deploy began, because deleting the old Machines is the final step of a bluegreen deploy.
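
(In case it helps anyone reading later: bluegreen is opted into with a single setting in the [deploy] section of fly.toml - roughly the sketch below, where everything except the strategy line just mirrors the [deploy] block earlier in this thread.)

[deploy]
  strategy = "bluegreen"                       # opt into bluegreen deploys
  release_command = "./bin/rails db:prepare"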


Well, that's interesting, and it sounds likely. I'm not using bluegreen (yet), but I have been bringing machines up and down like crazy in the staging env for the past few days. Still learning a bunch here. How do I update the machines to have the latest - just drop them and rebuild them?

Yeah, I'd identify the Machines with the wrong command: use fly m ls and look at the IMAGE column - the old Machines are probably running an old image (you can determine the most recent image using fly status; near the top you'll see Image = ...).

Once you know the 6 old Machines, use fly m destroy --force [MACHINE_IDS].

Then if you want, you can fly scale count ... back to the right number of Machines.
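
Putting those steps together, the cleanup looks roughly like this - the Machine IDs are placeholders and the counts are only an example, so adjust both to your app:

fly status                                        # note the current "Image = ..." near the top
fly m ls                                          # find Machines whose IMAGE column doesn't match it
fly m destroy --force <machine-id> <machine-id>   # destroy the stale Machines
fly scale count app=4 worker=2                    # then scale back to the counts you actually want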

Great, done. I've dropped and recreated all the machines and added the bluegreen deployment config setting. Seems to have done the trick. Thanks so much for the help.
