I have an app which uses Bolt (bbolt) with a single replica. I’ve noticed that in the last few deploys, the new deployment sometimes receives a volume without the database left behind by the previous deployment. Then, on the next redeploy, the volume with the data comes back.
I’m only using 1 volume for 1 app with 1 replica. Is there anything I’m perhaps doing very wrong?
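For context, the app’s storage layer is just a standard bbolt open against the volume’s mount point. A minimal sketch of that setup — the /data mount path and file name here are illustrative, not my exact config:

```go
package main

import (
	"log"

	bolt "go.etcd.io/bbolt"
)

func main() {
	// Open the Bolt file on the Fly volume. /data is whatever the
	// [mounts] destination in fly.toml points at; if the file lived
	// outside the volume, it wouldn't survive a redeploy at all.
	db, err := bolt.Open("/data/app.db", 0600, nil)
	if err != nil {
		log.Fatalf("open bolt db: %v", err)
	}
	defer db.Close()

	// ... normal bbolt usage via db.Update / db.View transactions ...
}
```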
When you say “single replica”, do you mean you’re running a primary app + volume with a second app + volume?
What you’re describing sounds like what happens if you have more volumes than app processes. Can you make sure fly volumes list only shows the number of volumes you expect?
> When you say “single replica”, do you mean you’re running a primary app + volume with a second app + volume?

I’m only running 1 instance of the app in a single region; there is no second instance — perhaps that’s better wording? I didn’t specify any auto-scaling behavior in the settings.
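For reference, the volume-related part of my fly.toml is minimal — roughly this, with the volume name matching the listing below:

```toml
[mounts]
  source = "sortie_apiserver"
  destination = "/data"
```

There’s no scaling or concurrency configuration beyond the defaults.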
> Can you make sure fly volumes list only shows the number of volumes you expect?
Yup, the listing shows the one volume I expect, and the snapshots are non-empty (I hope?).
❯ flyctl volumes list
ID                    NAME              SIZE  REGION  ZONE  ATTACHED VM  CREATED AT
vol_x915grnydkp4n70q  sortie_apiserver  1GB   sjc     0ad1  ecf02bd4     2 days ago

❯ flyctl volumes snapshots list vol_x915grnydkp4n70q
Snapshots
ID                   SIZE     CREATED AT
vs_apo3eJ3wlz9PRSnv  1171473  17 hours ago
vs_B4P8QoB7pa2AMUvD  1171473  1 day ago
Is there perhaps a way to reload a volume from a snapshot? I’ve only found instructions for Postgres, but this isn’t a Postgres app.
Ok, I looked at your app, and one of the volumes you previously deleted is still hanging out. So what’s happening is that your app is booting on two different volumes.
Does the volume it’s running on have the data you need? Or is it the other one? We can restore that other one pretty easily.
There’s no automated way to restore snapshots yet (I don’t think!). It’s built but not shipped.
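Once it does ship, I’d expect restoring to look something like creating a fresh volume from a snapshot ID — the exact flags are speculative until it’s actually out:

❯ flyctl volumes create sortie_apiserver --region sjc --snapshot-id vs_B4P8QoB7pa2AMUvD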
> So what’s happening is that your app is booting on two different volumes.
Ah interesting, so it’s just flip-flopping between the two? Is it a race condition between the deploy and the volume deletion?
I’ve manually entered the data into the current one, so I’ll keep it, though if it’s possible to append the snapshot history to the current volume, that’d be great; otherwise, no big deal.
Anything I should be aware of to prevent this from happening again?
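In the meantime I’ll keep an eye on flyctl volumes list around deploys and clean up anything unexpected myself — presumably something like this, with <volume-id> as a placeholder for the stale volume’s ID (subcommand name from memory; I’ll double-check flyctl volumes --help):

❯ flyctl volumes destroy <volume-id>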