Improved Postgres Clustering with repmgr - Preview

What’s this about?

Over the last year, we have seen more and more Postgres apps go down due to unstable connections with our multi-tenant Consul service. Stolon, the open-source solution we have been using for HA management, requires an always-stable connection to Consul. When that connection becomes unstable, Postgres instances start becoming inaccessible. We have been pretty disappointed with how this has been impacting our users and decided it was time to try a new approach.

This brings us to our next iteration of Postgres, which runs EDB’s repmgr at its core. Repmgr is super lightweight and offers a suite of tools for managing replication and failovers. What it doesn’t offer, however, is any strong opinions on how a cluster should be managed. This has some appeal, as it allows us to really dial in on how we feel a Postgres cluster should be managed. This is still very much a work in progress, but we feel we’re at a point where we’d like to start getting feedback.

The project can be found here: GitHub - fly-apps/postgres-flex: Postgres HA setup using repmgr

Major Changes

Consul usage has been significantly reduced

Active Postgres clusters will no longer see interruptions in the event of a Consul outage!

That being said, there are a few things that will still be impacted:

  • Horizontal scaling
  • Configuration updates made via fly pg config update

This is still annoying, but it’s a pretty significant improvement from a stability standpoint.

PGBouncer is part of the base setup

PGBouncer has been a pretty common feature request and is now part of the base topology. Configuration options are pretty limited at the moment, but more should be available soon!

Quorum requirements

Unlike our Stolon implementation, quorum must be met in order to achieve HA. Basically, you should plan on running at least 3 members if you want any sort of HA guarantees. We are looking to support 2 + 1 setups in the near future, which will allow you to run 2 standard members plus a lightweight “witness” member that’ll just be there to meet quorum requirements and protect against split-brain.

In the event that quorum cannot be met, the cluster will go read-only.
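To illustrate the quorum arithmetic behind the three-member recommendation, here is a small sketch (a hypothetical illustration only, not Fly.io or repmgr code). Quorum is a strict majority of voting members, so a 2-node cluster cannot tolerate losing either node, while a 3-node cluster can lose one:

```python
def quorum(members: int) -> int:
    """Smallest strict majority of `members` voters."""
    return members // 2 + 1

def survives_one_failure(members: int) -> bool:
    """Can the cluster still reach quorum after losing one member?"""
    return members - 1 >= quorum(members)

for n in (2, 3):
    print(f"{n} members: quorum={quorum(n)}, "
          f"tolerates one failure={survives_one_failure(n)}")
```

This is also why the planned 2 + 1 witness setup works: the witness adds a vote toward the majority without holding a full copy of the data.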

What Features are missing?

  1. fly pg failover is not supported quite yet, but should be available soon.
  2. We have not yet added support for Postgres extensions, e.g. PostGIS, TimescaleDB, etc.

Getting started

Make sure you are running the latest version of flyctl, or at least v0.0.455.

Specify the --flex flag when provisioning your next Postgres app to test out the new implementation:

flyctl pg create --name <app-name> --flex

Warning: This should not be used in production quite yet.


If you have any questions or feedback, please let us know! We’d love to hear from you!


If you encounter any issues, please let us know! You can reply in this thread, or submit an issue here: GitHub - fly-apps/postgres-flex: Postgres HA setup using repmgr


Hey everyone,

We have a few big updates to report:

  • As of flyctl version v0.0.463, our repmgr-based implementation has been enabled as the new default provisioning option.

  • New provisions will run the latest version of Postgres 15.

  • We have decided to remove PGBouncer from the base setup and will look into providing this as an optional add-on in the future.

Noteworthy changes you should be aware of

  • TimescaleDB is no longer included in our base image. If you would like to continue using TimescaleDB, you will need to target a specific image:
fly pg create --image-ref flyio/postgres-flex-timescaledb:15
  • WAL-G is not included in the base image at this time. If this is a deal-breaker for you, you can continue leveraging our Stolon-based implementation until we have something in place for you:
fly pg create --stolon ...

That’s it!

If you have any questions or run into any issues, please let us know!


Thanks @shaun
Will there be an upgrade path other than manually creating a new db app and doing data dump/restore?


Hey @Elder,

Which upgrade path are you referring to?

Are there plans to add WAL-G to the base image?

I’m considering migrating my current Stolon-based pg cluster to the new stack. I’ve been having issues lately with connectivity to consul-fra from the ams region, which results in random db connection errors, and I experienced a short downtime last week due to DNS issues.

So I wonder if any automatic migration is planned, or if a manual dump/restore is the only path.


We are currently evaluating whether to stick with WAL-G or work to implement something like PG Barman. This is high on our priority list, so we should have more information about this soon.


@shaun What was the rationale for removing PGBouncer? Must we build the app ourselves to get it?

As of right now, the only migration path available would be through a manual dump/restore. Automating the dump/restore process is certainly on our radar, not only for this transition but also for major version upgrades. I can’t say for sure when this will happen, but I hope it will be sooner rather than later.


There’s no standard PGBouncer configuration that works well across the board, so it came down to whether we thought we could provide enough guidance and support to justify including it in our base offering. Ultimately, we thought that reducing the number of knobs that a typical user has to care about was the right decision.

Must we build the app ourselves to get it?

We should be able to provide more guidance on this soon.


@shaun Appreciate this. It seems this is an effort to arrive at the lowest common denominator, minimizing the support surface so users can run without guardrails. For a performant production database, I can’t imagine that an offering without connection pooling will be that helpful. Are you considering running a pooling and cache service on top, analogous to Prisma Accelerate, to plug into?


In the short term, we will be offering PGBouncer as an add-on that can be enabled. We haven’t quite worked out how this will work yet, but we will soon.

Is it production ready?

Hey @shaun .

If I manually migrate to this new implementation, will I receive future improvements without needing another migration?



Future improvements within the implementation are exposed through image upgrades, e.g. fly image show and fly image update.


Ok I’ve migrated my production PG to the new implementation.

Selected a 3-node cluster, as @jerome instructed on another thread, to get HA.



hey @shaun, is this still relevant?

What exactly is impacted with horizontal scaling?

Configuration updates - does that mean I shouldn’t run the following on pg flex?
fly postgres config update -a app-db --shared-preload-libraries pg_stat_statements

@Elder What I meant by that was that those features leverage Consul and will not work in the event Consul is down.

I also just realized that updating shared-preload-libraries is pretty sensitive to formatting. We will work to make this less error-prone, but in the meantime it’s important that you don’t blow away the repmgr library.

You will want to run the command like the following:

fly pg config update -a app-db --shared-preload-libraries "'repmgr,pg_stat_statements'"
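The reason the quoting matters is that shared_preload_libraries is a single comma-separated value, so a new entry has to be merged into the existing list rather than replacing it. For anyone scripting this, a hypothetical helper (not part of flyctl) could look like:

```python
def merge_preload_libraries(current: str, new_lib: str) -> str:
    """Append `new_lib` to a comma-separated shared_preload_libraries
    value, preserving existing entries such as repmgr."""
    libs = [lib.strip() for lib in current.split(",") if lib.strip()]
    if new_lib not in libs:
        libs.append(new_lib)
    return ",".join(libs)

# Passing only pg_stat_statements would drop repmgr; merging keeps both.
print(merge_preload_libraries("repmgr", "pg_stat_statements"))
# → repmgr,pg_stat_statements
```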

Hey @shaun!

On the Metrics page, Replication Lag and Database Size are empty. Is that how it’s supposed to be for a postgres-flex cluster?

Nope! I’ll take a look and see what’s going on there. Thanks for reporting the issue!
