Hey, I just logged in to my app and to my surprise the database was dead: no machine was alive under my Postgres database. I recreated everything from the latest snapshot and, to my surprise ONCE AGAIN, the data was from 2 years ago. What can I do here? I’ve tried restoring from multiple snapshot versions and all of them have data from years ago.
Is this an unmanaged Postgres cluster? How many nodes do you have in the cluster? What region is this in?
Consider adding here the output from this command:
flyctl volumes list
This should give you one volume, which has an ID. From there, issue this command, and send the information here (swapping the ID value with the volume ID from your first command):
flyctl volumes show <ID>
And also:
flyctl volumes snapshots list <ID>
(I recommend using code formatting in this forum, as I have done, to make things legible for readers).
This information will be helpful in working out how old your snapshots are. Snapshots are automatically configured, so it is possible you are looking in the wrong place.
Yes, this is an unmanaged Postgres cluster; back when I created it there wasn’t such a thing as MPG, and I didn’t do the migration.
How many nodes do you have in the cluster?
I’m currently running a single node.
What region is this in?
This is in the gru region
I’ve tried creating databases from the snapshots available, and all of them had the same data.
➜ inventario-cabezudo-db flyctl volumes list
ID STATE NAME SIZE REGION ZONE ENCRYPTED ATTACHED VM CREATED AT
vol_remj588dwqpl3ml4 created pg_data 1GB gru 2236 true 148e3621c495d8 4 months ago
➜ inventario-cabezudo-db flyctl volumes show vol_remj588dwqpl3ml4
ID: vol_remj588dwqpl3ml4
Name: pg_data
App: inventario-cabezudo-db
Region: gru
Zone: 2236
Size GB: 1
Encrypted: true
Created at: 16 May 25 14:13 UTC
Snapshot retention: 5
Scheduled snapshots: true
➜ inventario-cabezudo-db flyctl volumes snapshots list vol_remj588dwqpl3ml4
Snapshots
ID STATUS SIZE CREATED AT RETENTION DAYS
vs_eYV2JV6Y382vTQyDy6J8wRP4 created 1073741824 31 minutes ago 5
vs_eYV2JV6Y382vTXkeBLOGPK3k created 1073741824 1 day ago 5
vs_eYV2JV6Y382vTXg00xbeMBXb created 1073741824 2 days ago 5
vs_yG1XJ1qGZnX6IGKRnwo1y created 1073741824 3 days ago 5
vs_yG1XJ1qGZnX6IGk9g1b27 created 1073741824 4 days ago 5
OK, much better; readers can work with this information.
I’d start off by saying that unmanaged Postgres, with a single node, and no external backups, is absolutely asking for trouble. It’s like believing that car seat-belts are a safety risk, and thus shouldn’t be worn; indeed, the Fly documentation explicitly warns against this kind of set-up. (Managed Postgres is just one solution here; having several machines in the cluster, and/or setting up external backups, have always been available to unmanaged Postgres users.)
My observation on your volumes is that I am not sure how you got data that is two years old from a volume you created four months ago; that just doesn’t make sense. Moreover, the snapshots are recent.
If I were in your shoes, I would grab at least one of those snapshots and back it up to your computer. From a recent post in this forum I gather that snapshots may not rotate out of existence when the volume host has died, but I would not take any chances.
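I don’t believe flyctl can download a snapshot file directly, so the practical way to get a local copy is to take a logical dump of whichever database you restore from it. A rough sketch only, using your app name from the output above and placeholder credentials you’d swap for your own:

fly proxy 15432:5432 -a inventario-cabezudo-db

and then, in a second terminal:

pg_dump "postgres://postgres:<password>@localhost:15432/<database>" > backup.sql

That leaves you with a plain SQL file under your control, independent of anything on Fly’s side.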
Now, we have had several folks recover PG data from volumes where they were sure it was not possible, so that might apply in your case too. I will post a link if I see that post again, but do have a search. I’d also suggest looking through your dashboard in case you’re looking at the wrong app; given the discrepancy between the age of your data and the age of your volume, I wonder if you have more than one Postgres app and you’re looking at the wrong volume (and thus the wrong snapshots).
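A quick way to check that from the CLI rather than the dashboard (plain flyctl commands; the second app name is a placeholder):

fly apps list
fly volumes list -a <other-postgres-app>

If more than one Postgres app shows up, check which one your application actually attaches to before restoring anything further.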
If you dig into these techniques, e.g. via fly postgres create with a snapshot ID, perhaps you could post here how you get on. This is not my area of expertise, but a detailed account may catch the eye of someone who’s done it before, and they may weigh in.
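For reference, a restore from one of your snapshots would look something like this (a sketch only; the new app name is made up, and it’s worth checking fly postgres create --help on your flyctl version for the exact flag):

fly postgres create --name inventario-restore-test --region gru --snapshot-id vs_eYV2JV6Y382vTQyDy6J8wRP4

That creates a fresh Postgres app from the snapshot without touching your existing one, so you can poke around safely.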
After restoring, try searching for specific info via the console. In a similar situation I was not seeing any data, but searching with find /data/postgresql -type f -print0 | xargs -0 fgrep -i myspecificdata (as suggested in this post) returned results, and from there I could try some alternative snapshot-restore options and got my data visible again.
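In case it’s useful, the way in is the Postgres app’s console (app name taken from the output above; the search string is obviously a placeholder for something you know should be in your data):

fly ssh console -a inventario-cabezudo-db
find /data/postgresql -type f -print0 | xargs -0 fgrep -i myspecificdata

If that returns matches, the data is physically on the volume even if Postgres isn’t showing it to you.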