Volume Restores are now available!

Hey everyone,

As of flyctl version 0.0.363, you can restore an individual volume from a snapshot by specifying --snapshot-id when creating a new volume!

$ fly volumes create
Usage:
  flyctl volumes create <volumename> [flags]

Flags:
  -a, --app string            Application name
  -c, --config string         Path to application configuration file
  -h, --help                  help for create
      --no-encryption         Do not encrypt the volume contents
  -r, --region string         The region to operate on
      --require-unique-zone   Require volume to be placed in separate hardware zone from existing volumes (default true)
  -s, --size int              Size of volume in gigabytes (default 3)
      --snapshot-id string    Create volume from a specified snapshot

Global Flags:
  -t, --access-token string   Fly API Access Token
  -j, --json                  json output
      --verbose               verbose output

Example:

  1. List the volumes associated with your App.
$ fly volumes list --app shaun-testapp
ID                  	STATE  	NAME      	SIZE	REGION	ZONE	ATTACHED VM	CREATED AT
vol_2n0l9vl2l9pr635d	created	myapp_data	2GB 	ord   	31d7	1910de41   	5 months ago
  2. View the snapshots associated with your target volume.
$ fly volumes snapshots list vol_2n0l9vl2l9pr635d
Snapshots
ID                 	SIZE    	CREATED AT
vs_lN9XjgNwLOz5PUV5	72161343	22 hours ago
vs_3AnNvAnKgVX2ocwk	72161343	1 day ago
vs_NGZK4gBVjMqyS0M 	72161343	2 days ago
vs_v59R7g9blbgq3IqP	72161343	3 days ago
vs_3Kal1j7Pky5XBsPK	72161343	4 days ago
vs_ejoa0DXqyelv7s8J	72161343	5 days ago
  3. Restore the volume into your target app.
$ fly volumes create myapp_data --snapshot-id vs_lN9XjgNwLOz5PUV5 --app shauns-target-app

That’s it!

If you have any questions or encounter any problems with this feature, just let us know!

Hi @shaun, thanks for the post and update. We were playing around with this feature today in the hopes of solving a long-running problem of ours: seeding new preview environments from a production database snapshot.

We create preview environments for PRs via GitHub Actions, and we were able to successfully get the corresponding Postgres cluster created from our prod database snapshots via some awk scripting to grab snapshot IDs dynamically. However, we’ve hit a wall when it comes to mapping the original prod database name to the new database name needed when connecting a new app (which has a different name and therefore gets its own database created on connection). Is there currently any way to remap database names, copy to a new database with a different name, or do something similar with this feature that would allow us to do this?

Apologies if this is out of scope for this thread, as I realize this might be a separate concern from a pure snapshot restore. I can move this to its own post if it’s distracting from the main focus here. We’re just excited about this feature and hope it will get us closer to where we’d like to be! The ability to use this for seeding arbitrary preview environments would be awesome, so we just want to see if we’re missing anything.

I’m not @shaun but I think this is super interesting.

If I’m reading you right, you are trying to attach your PR app to the DB from the restore. Like app_name_1111111 in the postgres cluster.

You can try running:

fly pg attach -a <staging-app> --postgres-app <staging-db> --database-name <prod-pg-db>

I think this will create you a new user on the production DB and set the right DATABASE_URL secret on the staging app.
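
For example, with hypothetical names (a PR app myapp-pr-123, its restored cluster myapp-pr-123-db, and a prod database named myapp_prod), it would look something like:

fly pg attach -a myapp-pr-123 --postgres-app myapp-pr-123-db --database-name myapp_prod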

Does that make sense?

Thanks for the help; not sure how I missed that option. I’m optimistic that this is going to work, we just need to overcome one final hurdle, which is piping a y into the prompt in an automated environment. Because it’s attaching to an existing database, flyctl gives the prompt

? Database "database_name" already exists. Continue with the attachment process? (y/N)

I’ve tried piping in echo y, printf 'y\n', and the yes command. The first two gave me an EOF error, and the yes attempt gave me an exit code 3. Any ideas? I think we’re close here. (This one could just be me making a basic oversight; I’ve been at this for a while today :sweat_smile:)
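
For reference, the attempts looked roughly like this (app and database names are hypothetical stand-ins):

# none of these get past the interactive confirmation prompt
echo y | fly pg attach -a myapp-pr-123 --postgres-app myapp-pr-123-db --database-name myapp_prod
printf 'y\n' | fly pg attach -a myapp-pr-123 --postgres-app myapp-pr-123-db --database-name myapp_prod
yes | fly pg attach -a myapp-pr-123 --postgres-app myapp-pr-123-db --database-name myapp_prod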

Ok, we just shipped a pre-release CLI that adds a --force option to fly pg attach. Can you give that a try?

Install it with:

curl -L https://fly.io/install.sh | sh -s pre

Gave this a go; I’m getting an unknown flag: --force error when trying to run fly pg attach. I also verified that my local fly version is v0.0.364-pre-7, which I think matches up with the right tag from the GitHub repo.

Well then … let me see what I’ve screwed up.

Turns out, we have two attach commands. Try installing pre-8 and see how that works?

Ok, that gets us past the unknown flag error! Now I’m hitting an “Oops, something went wrong! Could you try that again?” message, and it’s exiting with code 3 when I try it in our build pipeline. I’m not sure if there’s a better way to get more diagnostic info as to why it’s failing now. (Thanks for the continued troubleshooting help on this, by the way!)

@kurt Great news: as of flyctl v0.0.369 this works!!! We have a full preview environment running through GitHub Actions that is seeded from our latest production snapshot! For anyone interested, here’s the snippet we used to grab the correct snapshot ID dynamically (fair warning, this is a bit of a hack; any suggestions for cleaning it up are welcome):

# grab the first volume ID for your prod DB app (output row 2),
# then the most recent snapshot ID for that volume (output row 3)
flyctl postgres create ... \
  --snapshot-id "$(flyctl volumes snapshots list "$(flyctl volumes list -a your-prod-db-app | awk 'FNR == 2 {print $1}')" | awk 'FNR == 3 {print $1}')"
# (replace ... with your other create options)
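
One idea for cleaning this up (untested; it assumes the global --json flag applies to these subcommands, that their output is an array of objects with an id field, and that snapshots come back newest-first like the table output) would be jq instead of awk:

# untested sketch; JSON field names are assumptions
VOLUME_ID=$(flyctl volumes list -a your-prod-db-app --json | jq -r '.[0].id')
SNAPSHOT_ID=$(flyctl volumes snapshots list "$VOLUME_ID" --json | jq -r '.[0].id')
flyctl postgres create ... --snapshot-id "$SNAPSHOT_ID"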

Followed by:

flyctl postgres attach ... \
  --database-name your-database-name-in-db-snapshot --force
# (replace ... with your other attach options; --force is only needed in an automated setting)

Thanks again for all of the work making this possible!

Is it possible to download the created volume?

Don’t think so.

@michal As a workaround, how about creating a temporary app, restoring the snapshot with that app as the target app, then using scp/rsync?
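
A rough sketch of that, with hypothetical app, volume, and path names (it assumes you deploy something minimal on the temporary app that mounts the restored volume):

fly apps create temp-restore-app
fly volumes create myapp_data --snapshot-id <your-snapshot-id> -a temp-restore-app -r ord
# deploy a minimal machine that mounts the volume at /data, then pull files down, e.g.:
fly ssh sftp get /data/backup.tar -a temp-restore-app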

Nice. Another one: back up files with restic straight from prod, per Is there backups for persistent volumes - #6 by ignoramous