- Your app runs on only a single Fly VM. There are many reasons you might not be using multiple VMs; a common one is that you just moved to Fly from another provider and your app is not (yet) ready to run on multiple VMs side by side.
- `max per region` is not set, and no specific release strategy is set.
- Your app does not use any volumes.
- You run `fly scale memory <some amount>`.
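For context: as far as I understand, a release strategy *can* be pinned explicitly in `fly.toml` via the `[deploy]` section (this snippet is hypothetical; my actual config has no such section, which is the point):

```toml
# fly.toml (excerpt) — hypothetical; my real config has no [deploy] section
[deploy]
  # Pin a specific release strategy instead of relying on the default
  strategy = "bluegreen"   # or "canary", "rolling", "immediate"
```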
What I expect to happen:

- A new instance of the app is started in the background.
- Once its health checks pass, traffic is no longer forwarded to the old instance but to the new instance instead.
- Only now is the old instance stopped.
In other words: I expect a ‘bluegreen’/‘canary’-style deploy (with only a single VM, I believe these strategies are effectively the same).
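The ordering I expect can be sketched roughly like this (all names here are made up for illustration; this is not flyctl's actual implementation):

```python
# Sketch of the start-then-stop ordering I expect from a
# bluegreen/canary deploy. Names are invented for illustration.

class Instance:
    def __init__(self, name, boot_checks_needed=0):
        self.name = name
        self.checks_remaining = boot_checks_needed
        self.running = True

    def health_checks_ok(self):
        return self.checks_remaining == 0

    def wait(self):
        self.checks_remaining -= 1

    def stop(self):
        self.running = False


def bluegreen_deploy(old):
    events = []
    new = Instance("new", boot_checks_needed=3)  # 1. new VM boots in background
    events.append("started new")
    while not new.health_checks_ok():            # 2. wait for health checks
        new.wait()
    events.append("switched traffic to new")     # 3. traffic moves to the new VM
    old.stop()                                   # 4. only now stop the old VM
    events.append("stopped old")
    return events, new


events, new_vm = bluegreen_deploy(Instance("old"))
print(events)
```

The key property is that the old instance keeps serving until the very last step, so there is no window where neither VM can take traffic.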
What actually happens: the current (old) instance is immediately instructed to stop.
In other words, Fly opts for a ‘rolling’/‘immediate’-style deploy instead (with only a single VM, I believe these strategies are effectively the same).
This results in downtime. The downtime is more noticeable if the new VM takes a while to start up.
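To make the difference concrete: with stop-then-start ordering the downtime window is roughly the new VM's startup time, while with start-then-stop ordering it is effectively zero. A toy calculation (numbers invented):

```python
# Toy comparison of downtime under the two orderings.
# All numbers are invented for illustration.

def downtime_seconds(strategy, startup_seconds):
    if strategy == "stop_then_start":   # what I observe with a single VM
        return startup_seconds          # gap while the new VM boots
    if strategy == "start_then_stop":   # bluegreen/canary ordering
        return 0                        # old VM covers the boot window
    raise ValueError(strategy)

print(downtime_seconds("stop_then_start", 45))  # → 45
print(downtime_seconds("start_then_stop", 45))  # → 0
```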
The weird thing here is that this is very different from what happens when you have two or more VMs running. In that case, Fly opts for a nice ‘canary’-style release where the existing VMs are only stopped once the new one is ready.
But when you only have a single machine, Fly will immediately stop the single currently running VM.
Is this intentional behaviour, or is this a bug?
It seems to me that especially people new to Fly can easily get burned by this and cause unintended downtime for their freshly-migrated-to-Fly apps.