Hello, we want to create a staging environment for our server. Is it possible to make a clone of our production database and its volume in the staging environment? Could this be automated somehow?
This is an often-requested feature; it has come up on the "Early look: PostgreSQL on Fly. We want your opinions." thread, but as of right now there isn't a way to automate it yet.
The usual method of pg_dump and pg_restore will of course work, and the fly-apps/postgres-migrator app on GitHub (a Fly app that streamlines Postgres migrations) makes this super easy.
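If you end up doing it by hand instead, the dump-and-restore is usually a single pipe; here's a rough sketch (the connection strings and flags below are placeholders, not anything the migrator app itself uses):

```
# Placeholders: point these at your own production and staging databases.
PROD_URL="postgres://user:pass@prod-db.internal:5432/app"
STAGING_URL="postgres://user:pass@staging-db.internal:5432/app"

# Dump production in custom format and restore it straight into staging.
pg_dump --format=custom --no-owner "$PROD_URL" \
  | pg_restore --no-owner --clean --if-exists --dbname="$STAGING_URL"
```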
Thank you @sudhir.j, the app you recommend seems very useful and is more or less what I wanted…
What about the volume cloning, then? Is it possible? I presume it won't be so easy, because each volume can only be mounted by one app, so we can't create an app that would mount both volumes and copy from one to the other…
Yeah, volume cloning and multi-volume mounting aren't supported right now, so going via the application (in this case Postgres) is probably the best way to get this done.
I probably didn't make myself clear enough… I don't want to clone the Postgres files. I've mounted a volume in my app that holds various files that users upload. I want to clone this particular volume to my staging environment. Is this possible somehow?
Ah, yes. That makes sense, and it is a little more complex. I can't think of any built-in features on Fly to help with this, but I've done it (and seen it done) in the following ways:
- set up these directories as AWS S3 buckets mounted as filesystem folders using s3fs
- an rsync-based sync between the two machines; you should be able to address and communicate between machines on the internal Fly network.
I would personally suggest the s3fs method, because it allows you to accept uploads on multiple machines (which will happen if you use the default Postgres setup, where two VMs are provisioned for high availability), and it keeps you from having to grow volumes as well.
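For the s3fs route, the mount itself is small; a rough sketch, with the bucket name, region, mount point and credentials all as placeholders:

```
# Placeholders throughout: bucket name, region and mount point are examples.
echo "ACCESS_KEY_ID:SECRET_ACCESS_KEY" > "$HOME/.passwd-s3fs"
chmod 600 "$HOME/.passwd-s3fs"

mkdir -p /data/uploads
s3fs my-uploads-bucket /data/uploads \
  -o passwd_file="$HOME/.passwd-s3fs" \
  -o url=https://s3.eu-west-1.amazonaws.com
```

Production and staging can then mount the same bucket, or staging can mount a separate bucket that you refresh with `aws s3 sync`.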
@sudhir.j thank you very much for the information! You were very helpful!
Kind regards,
Serafeim
Hello @sudhir.j, following this topic I was able to back up and restore my database using pg_dump and friends. However, I had difficulties cloning my volume. I don't have access to S3 right now, so I tried to copy the files from one server to the other using scp or sftp; it didn't work. Actually, I couldn't ssh from one server to the other because ssh access was denied between the servers!
I mean I could ping or try to connect from one server to the other using their internal addresses; however, when I tried SSHing from one to the other I got access denied. How could I configure this access? Should I do it manually (i.e. generate a key pair on one server and add the public key of that pair to the other server's authorized_keys), or is there a better way?
Kind regards,
Serafeim
> How could I configure this access? Should I do it manually (i.e. generate a key pair on one server and add the public key of that pair to the other server's authorized_keys), or is there a better way?
Yeah, this is the only way I can think of right now. If you want an easier process after the setup, maybe rsync can help, but I can't think of a way to avoid the setup right now.
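The manual setup would look roughly like this; it assumes an SSH server and rsync are actually installed and running in both images (they aren't in a minimal image by default), and the `.internal` hostname is a placeholder for whatever your staging VM resolves to:

```
# On the production VM: generate a key pair (no passphrase, for unattended syncs).
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519

# Print the public key, then append it to ~/.ssh/authorized_keys on the
# staging VM (e.g. over a separate fly ssh console session).
cat ~/.ssh/id_ed25519.pub

# Once the key is trusted, sync the volume contents over the private network.
rsync -az --delete /data/uploads/ staging-app.internal:/data/uploads/
```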
You do have a Consul instance available to you (a key-value store accessible to all your VMs), so if you're interested in scripting a system where, each time an instance starts, it generates a key and puts it on Consul along with its private IP, you could then look up the list of other VMs (with their keys) from any VM and use that to set up a link.
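Very roughly, that scripted version could look like this; the Consul address and key paths here are assumptions you'd have to adapt to whatever endpoint Fly exposes to your app:

```
# Assumptions: CONSUL_HTTP_ADDR points at your Consul endpoint, and jq is installed.
CONSUL_HTTP_ADDR="${CONSUL_HTTP_ADDR:-http://localhost:8500}"
NAME="$(hostname)"

# On boot: generate a key pair and publish the public key and private IP.
ssh-keygen -t ed25519 -N "" -f "$HOME/.ssh/id_ed25519" -q
curl -s -X PUT --data-binary "@$HOME/.ssh/id_ed25519.pub" \
  "$CONSUL_HTTP_ADDR/v1/kv/vm-keys/$NAME/pubkey"
curl -s -X PUT --data "$(hostname -i)" \
  "$CONSUL_HTTP_ADDR/v1/kv/vm-keys/$NAME/ip"

# From any VM: collect the other VMs' public keys into authorized_keys.
curl -s "$CONSUL_HTTP_ADDR/v1/kv/vm-keys/?recurse" \
  | jq -r '.[] | select(.Key | endswith("/pubkey")) | .Value | @base64d' \
  >> "$HOME/.ssh/authorized_keys"
```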
Thank you @sudhir.j for the information!
Kind regards,
Serafeim
Hey Sudhir, I’m trying to use this GitHub repo to help me migrate some data, but I keep getting the following error:
    pg_dump: error: Dumping the contents of table "X" failed: PQgetCopyData() failed.
    pg_dump: error: Error message from server: server closed the connection unexpectedly
    This probably means the server terminated abnormally
    before or while processing the request.
Any idea how I can ensure that the server does not close the connection?
This seems like a connectivity issue at first glance. Are other operations on the source DB succeeding? Both read and write?
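A quick way to check, from wherever you're running pg_dump (the connection string below is a placeholder):

```
# Placeholder: point SOURCE_URL at the same database pg_dump is using.
SOURCE_URL="postgres://user:pass@source-db.internal:5432/app"

# Read check: does a simple query complete?
psql "$SOURCE_URL" -c "SELECT count(*) FROM pg_stat_activity;"

# Write check: create and drop a throwaway table.
psql "$SOURCE_URL" -c "CREATE TABLE _probe (x int); DROP TABLE _probe;"
```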