Using PgBouncer directly on the Postgres server

Is it possible to run PgBouncer directly on a Postgres server? This would avoid the round-trip latency that limits the transaction rate on a single connection. If the app server has 1 ms of latency to the database, a single connection is capped at roughly 330 transactions per second, assuming 3 round-trips per transaction (begin, query, commit). A 20 ms latency limits it to ~16 transactions per second. Using additional connections would increase the rate, but connecting them directly to the database would increase memory usage and reduce efficiency.
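Here's the rough arithmetic behind those numbers (just a sketch; the 3 round-trips and the latencies are the assumptions above):

```python
# Rough ceiling on single-connection transaction rate, assuming each
# transaction is serialized on one connection and needs 3 round-trips
# (BEGIN, query, COMMIT).
def max_tps(latency_ms: float, round_trips: int = 3) -> float:
    return 1000 / (latency_ms * round_trips)

print(max_tps(1))    # ~333 tx/s at 1 ms
print(max_tps(20))   # ~16 tx/s at 20 ms
```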

Now that I think about it, a transaction with multiple queries would not be helped by PgBouncer. Is there any other solution? Even running the server and database in the same region leaves ~1 ms of latency.

You can fork our Postgres app and set up PgBouncer: GitHub - fly-apps/postgres-ha: Postgres + Stolon for HA clusters as Fly apps.

It might be simpler to run it as an extra app, though. Latency is far lower than 1ms in the same region.
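If you do set it up yourself, a minimal transaction-pooling config would look something like this (a sketch only; the database name, paths, and pool sizes are placeholders, not values from the postgres-ha repo):

```ini
; pgbouncer.ini - minimal sketch, illustrative values only
[databases]
; route "appdb" to the local Postgres instance
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling lets many client connections share a few server connections
pool_mode = transaction
max_client_conn = 500
default_pool_size = 20
```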


Hi @Yellow

If you’re looking to improve the throughput of unrelated queries, more connections are the solution, as you can run the queries in parallel.

Databases are pretty good at handling multiple connections. Is there a particular reason you want to use a single connection?

Also, just to note that if your query is read-only, you don’t need to use begin and commit, which eliminates those round-trip overheads.
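For example, something like this (a sketch using psycopg2; the DSN, table, and filter are made-up placeholders):

```python
# Sketch: run independent read-only queries in parallel over a small
# connection pool, with autocommit so each SELECT avoids an explicit
# BEGIN/COMMIT round-trip pair.
from concurrent.futures import ThreadPoolExecutor
from psycopg2.pool import ThreadedConnectionPool

pool = ThreadedConnectionPool(minconn=1, maxconn=25,
                              dsn="postgresql://app:secret@localhost:5432/appdb")

def fetch(item_id):
    conn = pool.getconn()
    try:
        conn.autocommit = True  # single-statement read: no BEGIN/COMMIT needed
        with conn.cursor() as cur:
            cur.execute("SELECT * FROM items WHERE id = %s", (item_id,))
            return cur.fetchall()
    finally:
        pool.putconn(conn)

with ThreadPoolExecutor(max_workers=25) as ex:
    results = list(ex.map(fetch, range(100)))
```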

The main reason to reduce the number of (actual) connections is to reduce memory usage. Assuming 15 MB per connection, 100 connections would use 1.5 GB of memory.

To add on, I’ve done some measurements on the server. Running a query takes around 2 ms in total if the server and database are in the same region.

On a slightly unrelated note, I’ve done some testing with multiple concurrent HTTP connections. With 500 concurrent HTTP connections, I tested running the server with a limit of 1, 25, 100, and 200 database connections. It seems odd to me that 200 database connections allow only about twice as many queries per second (measured by HTTP requests) as a single database connection.

That seems quite strange. Are you sure you aren’t hitting the hard limit in your HTTP app? If it’s running on Fly, there’s a hard limit in the fly.toml file. Also check that your code is able to utilize all 200 connections and that your database server can handle that high a number of queries per second across 200 connections.
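For reference, that limit lives in the concurrency section of fly.toml, roughly like this (illustrative numbers only, assuming the classic [[services]] config):

```toml
# fly.toml (excerpt) - illustrative values only
[services.concurrency]
  type = "connections"   # or "requests"
  soft_limit = 20
  hard_limit = 25        # the proxy stops sending new work to the instance past this
```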

When testing:

  • I saw 200 active connections using PgHero
  • The server can serve HTTP routes that do not involve database data (such as a Hello World route) without performance issues.
  • The server stalls on database-related routes.
  • I am able to manually connect to the database (using flyctl postgres connect) and run queries normally.

It seems odd to me that 200 database connections allow only about twice as many queries per second (measured by HTTP requests) as a single database connection

Depending on the queries you’re running, you may need to consider any locks that may be getting acquired.
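One way to check while the load test is running is to look for sessions that are waiting on locks, e.g.:

```sql
-- Sessions currently waiting on a lock (Postgres 9.6+)
SELECT pid, wait_event_type, wait_event, state, query
FROM pg_stat_activity
WHERE wait_event_type = 'Lock';
```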

I had locks before, but I removed them. They cost time on the database, which PgHero showed. However, with the locks removed, PgHero shows 0 ms for most queries, including the ones I’m testing. I still experience the limited throughput.

The primary testing query is SELECT * FROM <table> WHERE <...>.