I’m running into an issue where the Fly.io dashboard does not reflect changes made to my fly.toml, and the platform seems to ignore the updated config.
In the Fly.io dashboard, under the fly.toml tab, I still see the old version of the config, even after redeploying.
But when I run `fly config show`, I see the updated configuration with `auto_stop_machines = "suspend"`.
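For context, here is the relevant section of the config; everything other than `auto_stop_machines` is an illustrative placeholder, not my exact settings:

```toml
# Illustrative [http_service] section; port and other values are placeholders.
[http_service]
  internal_port = 8080
  auto_stop_machines = "suspend"   # the setting that isn't taking effect
  auto_start_machines = true
  min_machines_running = 0
```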
Despite this, Machines continue to stop instead of suspend, so it looks like the updated config isn’t being applied at runtime even though the CLI reports it correctly.
Is this a dashboard caching issue, or is Machines ignoring the new config?
Do I need an additional step to force a config sync?
So the CLI and the Machine details reflect the updated configuration, but the Fly.io dashboard still shows stale values. More importantly, the Machines still fully stop instead of suspending; that is the main issue I'm trying to solve.
Hm… I’m not seeing any such discrepancies in my own apps, so this might be a localized glitch. (The ord region has been short on capacity lately, although that shouldn’t really lead to this kind of anomaly.)
Here are a few things that I would try…
Use `fly m start` and then `fly m suspend` to manually confirm that the Machines really are eligible to suspend. (Not all Machines are compatible with that feature, and you don’t get a noisy warning when they aren’t.)
Use `fly m clone` to create a new† Machine, and see if that one correctly auto-suspends over the next few minutes.
As above, but clone into the ewr region.‡
Post the logs (`fly logs`) from around the time of an incorrect auto-stop event. There should be at least a little chatter from the Fly Proxy about what it’s thinking, etc.
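The steps above, as a command sketch (`<machine-id>` is a placeholder for one of your Machine IDs, and `fly m` is shorthand for `fly machine`):

```shell
# 1. Manually verify the Machine is able to suspend at all
fly m start <machine-id>
fly m suspend <machine-id>

# 2. Clone a Machine, then watch whether the clone auto-suspends on its own
fly m clone <machine-id>

# 3. Same, but into another region (ewr), to rule out regional weirdness
fly m clone <machine-id> --region ewr

# 4. Capture logs from around an incorrect auto-stop event
fly logs
```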
Hope this helps a little!
†This assumes that your app isn’t sensitive to the exact number of Machines. (Most apps that lack volumes are indifferent to that.)
‡Likewise, most apps can be spread out across regions without ill effects, but there are occasional exceptions.
I manually started the previously stopped Machines using fly m start, and after a bit of idle time, two of them did successfully auto-suspend.
However, after running `fly deploy`, two of my six Machines remain in the stopped state every time. They never start at all, so they never get the chance to suspend.
Here are the logs from one of the Machines right after `fly deploy`:
```
2026-02-12T09:36:42Z runner[080736df09ee68] ord [info]Pulling container image registry.fly.io/<hidden>@sha256:<hidden>
2026-02-12T09:36:48Z runner[080736df09ee68] ord [info]Successfully prepared image registry.fly.io/<hidden>@sha256:<hidden> (5.847512765s)
2026-02-12T09:36:49Z runner[080736df09ee68] ord [info]Configuring firecracker
```
And then nothing else happens; the Machine stays in the stopped state.
When I start these same Machines manually afterward, they boot normally and later auto-suspend as expected.
So the issue only occurs during `fly deploy`, not during manual commands.
Let me know what additional logs or details would help.
Aha… This is normal. The `auto_stop_machines` setting governs only the Fly Proxy, which isn’t really the part making the decisions during a deploy…
Stopped Machines stay stopped, and Machines in the suspended state transition to stopped (the closest available approximation of the existing stopped → stopped policy).
| before deploy | after deploy |
| --- | --- |
| started | started |
| stopped | stopped |
| suspended | stopped* |
*Not suspended.
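The table above, expressed as a tiny lookup (illustrative only, not Fly.io’s actual code):

```shell
# Illustrative model of the state a Machine ends up in after `fly deploy`,
# given its state before the deploy. Mirrors the table above.
state_after_deploy() {
  case "$1" in
    started)   echo started ;;
    stopped)   echo stopped ;;
    suspended) echo stopped ;;  # not suspended!
  esac
}

state_after_deploy suspended   # prints "stopped"
```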
Many users are vocally against all of their Machines getting started on every deploy. (They have gigantic fleets of Machines.)
A seemingly equal number of users are vocally of the opinion that obviously every Machine should be started at least once on every deploy.
The first group won out, basically because they’re the ones who experience the worst side effects.
Still, this is undeniably inconvenient for those who want the spare Machines to always be primed in suspend, for fastest possible auto-scaling…
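One workaround sketch for that case, assuming your flyctl’s `fly m list --json` emits `id` and `state` fields (worth verifying on your version) and that `jq` is installed: after each deploy, start any stopped Machines, so the Fly Proxy can suspend them again once they go idle.

```shell
# Hypothetical post-deploy step: start every stopped Machine so the
# Fly Proxy can later suspend it rather than leaving it stopped.
fly m list --json \
  | jq -r '.[] | select(.state == "stopped") | .id' \
  | xargs -r -n1 fly m start
```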