Volume snapshot visibility and speed

I have a ~450 GB volume, and I generally want to make riskier changes after the daily snapshot has completed.

However, the snapshot ends up taking many hours to build, with no visibility into the progress or state.

Is it possible to get some kind of indication of what stage the snapshot process is in, or whether it is stuck or failing?

Are there any indicative snapshot building durations for volumes of different sizes?

Is it safe to assume that once a snapshot shows the status "running", the data has already been captured and will eventually be available for restoration, even if the volume's data changes afterwards (essentially, similar to a ZFS copy-on-write situation)?


I decided to try to migrate off fly.io, but ran into another issue: per the documentation, only one volume can be attached to a machine.

I was originally planning to use SQLite's `VACUUM INTO` to create a backup that is internally consistent, then copy that off while the rest of the system runs off the primary DB.
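For reference, the backup step I had in mind is roughly this; a minimal sketch using Python's built-in `sqlite3` module (the paths are hypothetical placeholders, and `VACUUM INTO` needs SQLite 3.27 or newer):

```python
import sqlite3

def backup_sqlite(db_path: str, backup_path: str) -> None:
    """Write a transactionally consistent copy of a live SQLite DB.

    VACUUM INTO produces a fresh, defragmented file that reflects a
    single point-in-time snapshot, while the primary database keeps
    serving reads and writes in the meantime.
    """
    conn = sqlite3.connect(db_path)
    try:
        # The target filename is an SQL expression, so it can be bound
        # as a parameter. Requires SQLite >= 3.27.
        conn.execute("VACUUM INTO ?", (backup_path,))
    finally:
        conn.close()
```

The resulting file can then be copied off the machine with any ordinary file-transfer tool, since nothing is writing to it.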

Forking isn't an option - I tried that first, but the resulting DB seems to be in a corrupted state. Maybe if I stopped the machine entirely and then forked (which takes upwards of half an hour), it would produce a valid byte-for-byte copy, but I'm trying to avoid downtime here.

Does anyone have any experience with doing a zero downtime migration off fly.io and successfully copying ~450 GB of data?

This also makes me somewhat doubt the viability of the automated snapshots - if fork isn’t usable, then I suspect the snapshots might be unusable too, but I haven’t validated this yet.
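When I get around to validating a restored snapshot, the plan is a basic integrity check before trusting it; a sketch using Python's `sqlite3` module (the path is a placeholder, and on a ~450 GB database this scan will take a long time):

```python
import sqlite3

def check_sqlite_integrity(db_path: str) -> bool:
    """Return True iff SQLite's full integrity check reports "ok".

    PRAGMA integrity_check scans the entire file (indexes, pages,
    cell structure), so it is slow on large databases but is the
    standard way to detect a corrupted copy.
    """
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute("PRAGMA integrity_check").fetchall()
        return rows == [("ok",)]
    finally:
        conn.close()
```

Running this against a database restored from a snapshot (or fork) should settle whether the copies are actually usable.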
