Oh, this is nice. So this failkeeper command will basically re-evaluate the entire cluster and re-elect a primary, and if I’ve changed the PRIMARY_REGION it will respect that and set up the primary in the new region? That sounds great.
And forking is a nice thing to have, but it isn’t strictly necessary; you can always pg_dump and restore, and streamline that with snapshots when possible.
I’m wondering if there has been any discussion about providing ‘official’ images with PostGIS bundled in. All it has taken me so far is replacing the postgres base image with the postgis base image, but now I’m running a ‘custom’ image, so I don’t get upgrades etc.
We have replaced the Stolon proxy with HAProxy. This should provide a nice stability improvement, as it allows us to control routing through local health checks rather than having to hit our remote Consul cluster for state information. Relying on Consul for routing state caused quite a few issues last week when that cluster was under heavy load.
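To give a rough idea of the approach, here is an illustrative sketch of routing on a local health check in HAProxy. This is not the exact configuration we ship; the check endpoint and port numbers below are placeholders.

# Sketch only: forward connections to the local Postgres member, but only while
# a local HTTP health check (hypothetical /flycheck/role endpoint on port 5500)
# reports it as the primary.
listen postgres_primary
  bind *:5432
  mode tcp
  option httpchk GET /flycheck/role
  http-check expect string primary
  server local_pg 127.0.0.1:5433 check port 5500 inter 2s fall 3 rise 2

The point is that the routing decision is made from state available on the VM itself, so a slow or overloaded Consul cluster no longer sits in the connection path.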
We’ve made some nice improvements and bug fixes to our process manager. Now, in the event a particular process crashes, we will attempt to restart/recover that individual process before failing the VM and having it rescheduled entirely.
Image version tracking and updates have been made available to those of you running our official postgres-ha images.
$ flyctl image show --app md-postgres-6
Update available flyio/postgres:12.8 v0.0.4 -> flyio/postgres:12.8 v0.0.5
Run `fly image update` to migrate to the latest image version.
Image details
Registry = registry-1.docker.io
Repository = flyio/postgres
Tag = 12.8
Version = v0.0.4
Digest = sha256:bc5aa30e3b6267fe885d350c6e7e5857a03f09ec05b290f328fd32b69ddd0eb1
Latest image details
Registry = registry-1.docker.io
Repository = flyio/postgres
Tag = 12.8
Version = v0.0.5
Digest = sha256:bd0d6f8c5067ebe38b8d9c28706e320c296c610ab39cee274e102b6599d7bc7c
$ flyctl image update --app md-postgres-6
? Update `md-postgres-6` from flyio/postgres:12.8 v0.0.4 to flyio/postgres:12.8 v0.0.5? Yes
Release v55 created
You can detach the terminal anytime without stopping the update
Monitoring Deployment
3 desired, 3 placed, 3 healthy, 0 unhealthy [health checks: 9 total, 9 passing]
--> v55 deployed successfully
Note: The image update process will perform a rolling restart against your Postgres cluster. Connected services may experience some brief disruptions while the update is being deployed.
If you have any feedback, questions, concerns, etc., let us know!
Just a quick check on this: this is the PRIMARY_REGION on the Postgres app, right? The PRIMARY_REGION on the application the DB is attached to also needs to be updated, but Fly won’t do that automatically, right?
I’m not exactly sure what you mean here. The PRIMARY_REGION defines which keepers are eligible for leadership. The only time this would need to change is if you’re looking to push your leader into a new region.
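As a rough example, assuming a Postgres app named pg1 whose PRIMARY_REGION is read from its environment (so it can be updated as an app secret), pushing leadership eligibility to a new region might look like:

fly secrets set PRIMARY_REGION=fra --app pg1

After that, a failover still has to happen (for example via the failkeeper command discussed above) before the leader actually moves into the new region.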
And with the move to HAProxy, is stolon failkeeper still in charge of electing the primary?
Everything will remain the same on that front. Stolon’s Sentinel is the component that’s in charge of leadership elections and the proxy is just there to control routing.
Yeah, it is a little confusing. The PRIMARY_REGION on the Fly app that houses Postgres (let’s call it pg1) determines the primary. When attaching this to a Fly application (let’s call it app1), a PRIMARY_REGION is also created on app1 (along with DATABASE_URL) to denote which region has the database primary; it’s used for write requests, the replay header, etc. Just confirming that updating the PRIMARY_REGION on the pg1 app does not automatically update it on the app1 app.
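For reference, my mental model is roughly this (names pg1/app1 as above; the exact mechanism on a given setup may differ): on app1, PRIMARY_REGION is just an environment variable, e.g. in fly.toml:

[env]
  PRIMARY_REGION = "dfw"

app1 then uses that value to route writes to the primary region, e.g. by responding with a `fly-replay: region=dfw` header from replicas, while pg1 carries its own PRIMARY_REGION that would have to be changed separately.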
Thanks. Does that mean that any region that is made the PRIMARY_REGION must have 2 or more volumes and instances? What would happen if I change the PRIMARY_REGION to a region that has only one volume/instance and run stolon failkeeper?
Have there been any discussions of using wal-g (GitHub: wal-g/wal-g, “Archival and Restoration for Postgres”) with Postgres to do single-instance PG for those who want it? Given the option of a single instance with continuous backups (and single-command restoration from those backups), I know I’d choose that in a flash over the current Postgres+Stolon+HAProxy+Consul setup. It’s just much simpler, easier to understand and work with, and gives clients full control over what’s going on.
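For context, the workflow I have in mind is roughly the following sketch (the bucket, paths, and credentials are placeholders):

# postgresql.conf: continuously ship WAL to object storage
archive_mode = on
archive_command = 'wal-g wal-push %p'

# environment for wal-g (placeholders)
export WALG_S3_PREFIX=s3://my-bucket/pg-backups
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...

# take a base backup of the data directory
wal-g backup-push "$PGDATA"

# restore: fetch the latest base backup, then replay WAL during recovery
wal-g backup-fetch "$PGDATA" LATEST
# postgresql.conf (PostgreSQL 12+): restore_command = 'wal-g wal-fetch %f %p'

A single instance plus that kind of continuous archiving covers durability; it just doesn’t give you the automatic failover that the Stolon setup does.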
The default is the high-availability setup, which needs at least two instances and two volumes running. That can be cut down manually; if it’s for a dev deployment, I think it should still work fine.
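And if you want a region to be able to host the primary, a sketch of adding capacity there first might look like this (assuming the Postgres app is called pg1 and the target region is fra; on volume-backed apps, instances are generally placed where volumes exist):

# create a volume in the target region, then raise the instance count
fly volumes create pg_data --region fra --size 10 --app pg1
fly scale count 2 --app pg1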
I imagine I should remove the one that is not attached?
ID                    NAME     SIZE  REGION  ATTACHED VM  CREATED AT
vol_x915grnweo0rn70q  pg_data  10GB  dfw     8d0b35b7     2 hours ago
vol_ke628r6gwzjvwmnp  pg_data  10GB  dfw                  2 hours ago
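If so, I’d run something like this (a sketch; the app name is a placeholder, and the exact subcommand name may vary with the flyctl version):

# double-check which volume is unattached, then remove it by ID
fly volumes list --app <your-postgres-app>
fly volumes destroy vol_ke628r6gwzjvwmnp --app <your-postgres-app>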