V2 auto upgrade volume issue

Hello, I’ve had an app running with no issues for many months now. I saw the warnings about the v2 auto migration but since I was using a volume for my sqlite database I thought I would be fine. I thought wrong apparently. My app restarted with a huge amount of data loss. I’m unsure what to do next.

My setup:

  • Single machine running a single volume with a sqlite db mapped. (I’m okay with some data loss during machine restarts).

I’ve tried cloning the most recent snapshot of the volume, but I’m unsure how to attach a specific volume to a machine. I tried following the instructions here: Can't mount volume to a machine - #5 by bbornsztein, but that resulted in a non-managed app that I couldn’t update with fly deploy. Nevertheless, I was able to scp my database file off this cloned snapshot volume, but the file was very small, with no updates for months.

Do I have a misunderstanding of how volumes are snapshotted? I assumed that since I have a single file.db on a volume that it would be snapshotted daily and I could roll back to it.

I know v2 is supposed to be better, but spending the afternoon debugging what is going on has been incredibly frustrating, especially given that the app and hosting had worked perfectly for the past few months.

Thanks for your help.


Hmmm…anything on a volume should be cloned exactly as-is. Is wellworldapi the app you’re looking at?

Yes, that’s the correct one. Thanks for checking.

We also keep the old volume around unchanged, so you shouldn’t need backups.

Thanks. Maybe I have some misunderstanding on how sqlite works on a volume? I assume every transaction is written to the file. I validated this assumption by scping the file.db off of the machine every few days so I could do local queries against it and everything was there. Additionally, I have a web endpoint that shows me the latest top-level info from the db and it was up to date until the v2 migration reset it.

I’m afraid the application is storing the sqlite database at the /app/data/stats.db path, while the volume is mounted at the /data/ directory.

# fly.toml snippet for wellwordapi app
[mounts]
  destination = "/data/"
  source = "wellwordapidata"

# inside wellwordapi machine
root@3d8d926ec1e789:/app# ls /data/
root@3d8d926ec1e789:/app# ls /app/data/

The app’s database was storing its sqlite data in the ephemeral root device of the virtual machine, not the volume.

Thanks dangra for investigating and root-causing. I’m guessing there is no way to recover the data as the vm was destroyed as part of the auto-migration?

FWIW, these docs (Add Volume Storage · Fly Docs) still read as confusing knowing what I now know:

The following configuration exposes data from a volume named myapp_data under the /data directory of the application.


Specifically, "under the /data directory of the application" sounds like the app (in my case node.js) should be able to just read/write to ./data/stats.db.

Additionally, I assumed that since my volume and snapshot file sizes were non-zero, I was actually writing something to them.

For me to properly use volumes, should I change the destination to be /app/data, or change my app code to read/write to ../data/stats.db? The former seems safer, but I’m unsure of the order in which the app folder is created and the volume is mounted, and what happens if there is a conflict.

Thanks again dangra.


Yes, sorry about that, but there is no way to recover data from an ephemeral device after the VM restarts or is removed, as in this case.

Good point, I will look into making it clearer. Thanks!

As a general rule, it is always better to keep your app data in a different location than your app code. Code, once deployed, tends to be immutable, while data (and the app state) is not.

I’d use /data and point your app there.
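That advice would look something like this in fly.toml; the DATABASE_PATH env var is a hypothetical name for whatever the app already reads, and the volume name is taken from the snippet earlier in the thread:

```toml
# fly.toml sketch: keep the volume mounted at /data, outside the /app code dir
[mounts]
  source = "wellwordapidata"
  destination = "/data"

[env]
  # hypothetical variable the node app reads to locate its sqlite file
  DATABASE_PATH = "/data/stats.db"
```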


Yes, sadly there’s no way to recover the data. I’ll look at the doc tomorrow, I agree the guidance could be clearer. In any case, I added some credits to your account to play with, so you can add all the volumes you need.


Thanks nina and dangra. Appreciate the enlightenment and assistance.


I am not sure if this had any impact on the problem you are having, but the tricky thing about the auto-upgrade is that it clones your volume and appends a _machines suffix to the volume name.

I caught that and updated my .toml file, but if you didn’t, the upgraded app would point to the new cloned volume until you redeployed your .toml file, at which point you’d be pointing back at the older volume, with old data that hasn’t changed since the upgrade.

Also, the old volume now reports 0MB used in the fly dashboard, which seems odd; since I did the upgrade recently, it should show about the same usage as the _machines version.

That’s a bit weird - did you migrate yourself or did that happen with the auto-migrate?

I think the 0MB is just because they aren’t currently attached.

The _machines addition happens when I run the migration process myself. It does not appear to happen when the automated process runs.


@joshv what flyctl version are you running?

The addition of the _machines suffix was removed from the migration command in flyctl v0.1.64 (released on July 21st).

I either did it before July 21st or had an older version.

