Hello, I’ve had an app running with no issues for many months now. I saw the warnings about the v2 auto-migration, but since I was using a volume for my SQLite database I thought I would be fine. Apparently I thought wrong: my app restarted with a huge amount of data loss, and I’m unsure what to do next.
My setup:
A single machine with a single volume attached, holding the SQLite db. (I’m okay with some data loss during machine restarts.)
I’ve tried cloning the most recent snapshot of the volume, but I’m unsure how to attach a specific volume to a machine. I followed the instructions here: Can't mount volume to a machine - #5 by bbornsztein, but that resulted in a non-managed app that I couldn’t update with fly deploy. I was nevertheless able to copy my database file off this cloned snapshot volume, but the file was very small, with no updates for months.
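For reference, this is roughly how I copied the file off the machine (my volume is mounted at /data; if I have the flyctl incantation right, fly ssh sftp get is the supported equivalent of scp here):

fly ssh sftp get /data/file.db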
Do I have a misunderstanding of how volumes are snapshotted? I assumed that since I have a single file.db on a volume, it would be snapshotted daily and I could roll back to it.
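For anyone else in this spot, the restore flow I pieced together looks roughly like this (the IDs, region, and size are placeholders for my values; double-check the flags against fly volumes --help):

fly volumes list --app myapp
fly volumes snapshots list vol_xxxxxxxxxxxx
fly volumes create myapp_data --snapshot-id vs_xxxxxxxxxxxx --region ord --size 1 --app myapp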
I know v2 is supposed to be better, but spending the afternoon debugging what is going on has been incredibly frustrating, especially given that the app and hosting worked perfectly for the past few months.
Thanks. Maybe I have some misunderstanding of how SQLite works on a volume? I assume every transaction is written to the file. I validated this assumption by scp-ing file.db off the machine every few days so I could run local queries against it, and everything was there. Additionally, I have a web endpoint that shows me the latest top-level info from the db, and it was up to date until the v2 migration reset it.
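Concretely, the sanity check I ran against each copy was along these lines (stats and updated_at are from my schema; substitute your own table):

sqlite3 file.db "PRAGMA integrity_check;"
sqlite3 file.db "SELECT COUNT(*), MAX(updated_at) FROM stats;"

Both always came back current, right up until the migration.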
Thanks dangra for investigating and root-causing this. I’m guessing there is no way to recover the data, since the VM was destroyed as part of the auto-migration?
The following configuration exposes data from a volume named myapp_data under the /data directory of the application.
[mounts]
source = "myapp_data"
destination = "/data"
Specifically, “under the /data directory of the application” sounds like the app (Node.js, in my case) should be able to just read and write ./data/stats.db.
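Though now I suspect that relative path was the problem: if I understand Node correctly, ./data/stats.db resolves against process.cwd(), not against the mount, so unless the process starts from / it never touches the volume at all. A minimal sketch of what I mean (better-sqlite3 is just the driver I happen to use; any SQLite client behaves the same way):

const path = require("node:path");
const Database = require("better-sqlite3");

// A relative path resolves against the working directory, e.g.
// /app/data/stats.db when the process starts in /app, not the volume.
console.log(path.resolve("./data/stats.db"));

// The volume lives at the absolute destination from fly.toml, so an
// absolute path is the unambiguous way to reach it.
const db = new Database("/data/stats.db");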
Additionally, I had assumed that since my volume and snapshot file sizes were non-zero, I was actually writing something to them.
For me to use volumes properly, should I change the destination to /app/data, or change my app code to read and write ../data/stats.db? The former seems safer, but I’m unsure of the order in which the app folder is created and the volume is mounted, and what happens if the two conflict.
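In the meantime, I’m adding a guard at startup so the app fails fast instead of silently recreating the database on the ephemeral disk (this assumes, as seems to be the case, that the volume is mounted before the process starts):

const fs = require("node:fs");

// A directory can exist without a volume mounted over it, so compare device
// IDs: a real mount lives on a different block device than "/".
const mounted =
  fs.existsSync("/data") &&
  fs.statSync("/data").dev !== fs.statSync("/").dev;

if (!mounted) {
  console.error("no volume mounted at /data, refusing to start");
  process.exit(1);
}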
Yes, sorry about that, but there is no way to recover the data from an ephemeral device after the VM restarts or is removed, as happened in this case.
Good point, I will look into making it clearer. Thanks!
As a general rule, it is always better to keep your app data in a different location than your app code. Code, once deployed, tends to be immutable, while data (and app state) is not.
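With the configuration above, for example, that split falls out naturally: the image owns the code directory and is replaced wholesale on every deploy, while the volume owns /data and persists across deploys and restarts (comments are mine, illustrating the intent):

[mounts]
source = "myapp_data"       # mutable state: survives deploys and restarts
destination = "/data"       # kept outside the code directory on purpose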
Yes, sadly there’s no way to recover the data. I’ll look at the doc tomorrow; I agree the guidance could be clearer. In any case, I added some credits to your account to play with, so you can add all the volumes you need.
I’m not sure whether this had any impact on the problem you’re having, but the tricky thing about the auto-upgrade is that it clones your volume and appends _machines to the volume name.
I caught that and updated my .toml file. If you didn’t, the upgraded app would point at the new cloned volume until you redeployed your .toml file, at which point you’d be pointing at the older volume again, with old data that hasn’t changed since the upgrade.
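Concretely, the mounts section has to be updated to the cloned name, or the next deploy flips you back to the stale pre-upgrade volume (the name below follows the pattern from my app; yours will differ):

[mounts]
source = "myapp_data_machines"
destination = "/data"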
Also, the old volume now reports 0MB used in the Fly dashboard, which seems odd; since I did the upgrade recently, it should show about the same usage as the _machines version.