PgDog vs. PgBouncer
May 5th, 2025
Lev Kokotov
I spent the last two weeks benchmarking and optimizing PgDog. My north star is PgBouncer, currently the oldest and most popular Postgres pooler. It’s reliable and delivers great performance. As of this writing, PgDog outperforms it, on average, by 10%.
Results
I used pgbench and ran several benchmarks, changing the number of concurrent transactions and client connections. I compared PgDog, PgBouncer, and PgCat (my last attempt at scaling Postgres). Here are the results:
The left axis shows transactions per second (TPS). The bottom axis is the number of client connections. PgDog is faster than PgCat across the board. It’s also faster than PgBouncer at 50 connections and above. The raw numbers:
| Connections | PgDog (TPS) | PgBouncer (TPS) | PgCat (TPS) |
|---|---|---|---|
| 1 | 15,522 | 16,729 | 15,139 |
| 10 | 100,615 | 101,449 | 97,908 |
| 50 | 112,112 | 89,287 | 110,887 |
| 100 | 110,688 | 89,146 | 108,139 |
| 250 | 108,281 | 88,381 | 108,037 |
| 1,000 | 102,189 | 81,021 | 93,369 |
| 2,000 | 92,783 | 76,178 | 88,230 |
Configuration
Both PgDog and PgCat were configured to use 2 worker threads; PgBouncer is single-threaded. All three were compared apples-to-apples, with features like load balancing and query parsing disabled. I used a single database with a pool size of 10 server connections, in transaction pooling mode:
pgdog.toml
[general]
default_pool_size = 10
workers = 2
pooler_mode = "transaction"
[[databases]]
name = "pgdog"
host = "127.0.0.1"
pgbouncer.ini
[pgbouncer]
default_pool_size = 10
max_client_conn = 10000
pool_mode = transaction
[databases]
pgdog = host=127.0.0.1
pgbench.sh
pgbench -c 50 -t 500000 --protocol simple -S -P 1
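The command above shows a single run at 50 clients. The full table comes from repeating the same command with different client counts. Here is a minimal sketch of such a sweep; the loop itself, and keeping -t 500000 at every size, are assumptions, and connection flags for each pooler are omitted just as in the original command:

```bash
#!/bin/bash
# Hypothetical sweep, not the script from the repository: re-run
# pgbench once per client count from the results table.
for clients in 1 10 50 100 250 1000 2000; do
    echo "=== ${clients} clients ==="
    # -t is transactions per client; consider lowering it for the
    # larger client counts to keep runs short.
    pgbench -c "${clients}" -t 500000 --protocol simple -S -P 1
done
```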
Observations
CPU utilization during most tests hovered around 75%. Even at high load, all three were bottlenecked by Postgres and I/O. Multi-threaded poolers were able to process more requests simultaneously. There is still some CPU-bound work involved, like reading bytes into buffers, calculating statistics and keeping track of client/server connections.
All three poolers are asynchronous and use epoll underneath to manage network sockets. PgDog and PgCat use Tokio, while PgBouncer uses libevent. With just a few connections, libevent is slightly faster. This was noticeable in the first two benchmarks (1 and 10 connections). Tokio still needs some optimizing, but with 2 threads and multiple connections, it performed well.
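For a concrete picture of that model, here is a minimal sketch (not PgDog’s actual code): a Tokio runtime capped at 2 worker threads, accepting clients and shuttling bytes to a backend. The addresses are placeholders, and it assumes tokio = { version = "1", features = ["full"] } as a dependency.

```rust
use tokio::io::copy_bidirectional;
use tokio::net::{TcpListener, TcpStream};

fn main() -> std::io::Result<()> {
    // Mirrors `workers = 2` from pgdog.toml: two worker threads drive
    // every connection through the epoll-backed reactor.
    let runtime = tokio::runtime::Builder::new_multi_thread()
        .worker_threads(2)
        .enable_all()
        .build()?;

    runtime.block_on(async {
        // Placeholder addresses: clients connect to the pooler port,
        // which forwards traffic to a local Postgres on 5432.
        let listener = TcpListener::bind("127.0.0.1:6432").await?;
        loop {
            let (mut client, _) = listener.accept().await?;
            // Each client becomes a cheap task; a worker thread is woken
            // only when one of its sockets is ready to read or write.
            tokio::spawn(async move {
                if let Ok(mut server) = TcpStream::connect("127.0.0.1:5432").await {
                    let _ = copy_bidirectional(&mut client, &mut server).await;
                }
            });
        }
    })
}
```

With epoll underneath, every connection is just a task that gets polled when its sockets are ready, which is how a couple of threads can juggle thousands of clients.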
Increasing the number of threads only improved performance if the pool size was increased as well. This allowed PgDog to execute more queries concurrently without waiting.
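In pgdog.toml terms, that means scaling workers and default_pool_size together. The values below are illustrative only, not one of the configurations benchmarked above:

```toml
[general]
# Hypothetical example: extra worker threads only pay off when the
# pool has enough server connections to keep them busy.
default_pool_size = 20
workers = 4
pooler_mode = "transaction"
```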
Methodology
Postgres, all 3 poolers and pgbench were running on the same machine. I used my desktop with the following configuration:
- Arch Linux (latest kernel)
- AMD Ryzen 7 5800X, 8 cores, 16 threads
- 64 GB RAM
Postgres was able to keep all data in the page cache and shared buffers. This ensured nothing outside of our control affected performance. We are testing the poolers, not Postgres, the disks, or the kernel.
Since all 3 components were running on localhost, network latency wasn’t a factor. pgbench was using the -S (SELECT only) option to avoid performance variability caused by writing data, which could have obscured subtle differences between poolers.
If you’re trying to reproduce this, make sure to run your benchmark on Linux. Tokio is much faster with epoll than with kqueue (macOS, other Unixes) or Windows. The configuration files I used are in the repository.
Next steps
PgDog is an open source project for scaling PostgreSQL. We are just getting started. Even with exciting new features like sharding, performance will always be a top priority.
If you’re interested in adopting PgDog, get in touch. If you have questions or feedback, join our Discord. A GitHub star is always appreciated.