Postgres connection pooling: the problem you do not know you have until you have 50 concurrent users
Our Node.js API opened a new PostgreSQL connection per request. At 50 concurrent users we exhausted the connection limit and the whole system halted. PgBouncer fixed it.
In early March 2025, we onboarded three new clients simultaneously. Each client had between 10 and 20 users accessing the dashboard during business hours. Within two days of going live, our API started returning 500 errors at 10 AM when all three client teams logged in for their morning reconciliation review. The error logs showed a message I had seen before but never in production: “too many connections for role.” Our PostgreSQL instance had hit its maximum connection limit.
By this phase the work was no longer "just build it." It also built on what I learned earlier in "When the sales team demos a feature that does not exist yet and engineering finds out from a prospect." Every architecture choice had a people cost, an audit cost, and a recovery cost when production disagreed with the plan. That is roughly when the line between FinanceOps systems and projects like flowscape or ftryos got interesting to me: the design only counts if the operators can live with it.
How We Got Here
Our Node.js API used the standard pg library with a connection pool configured at the default settings. The default pool size in pg is 10 connections. That sounds fine until you realize that each API server instance gets its own pool, and we were running four instances behind a load balancer. Four instances times 10 connections equals 40 connections to PostgreSQL at maximum concurrency.
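The arithmetic above is worth writing down explicitly. A minimal sketch (the `otherConnections` figure is a rough estimate implied by "about 60 connections at idle" below, not a measured value):

```javascript
// Connection arithmetic for per-instance pooling.
// Each pg Pool defaults to max: 10, and every API instance owns its own pool.
const API_INSTANCES = 4;
const POOL_SIZE_PER_INSTANCE = 10; // pg default
const MAX_CONNECTIONS = 100;       // PostgreSQL default

const apiConnections = API_INSTANCES * POOL_SIZE_PER_INSTANCE;

// Baseline from other consumers (worker, reporting service, PgAdmin,
// health checks) -- approximate, inferred from ~60 connections at idle.
const otherConnections = 20;

const headroom = MAX_CONNECTIONS - apiConnections - otherConnections;
console.log({ apiConnections, headroom });
```

With only 40 connections of headroom shared by every burst of traffic, a single busy morning is enough to exhaust the server.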
PostgreSQL is configured with a maximum of 100 connections by default. We had not changed this. With 40 connections reserved for the API, plus connections from our reconciliation worker, our reporting service, PgAdmin for database administration, and the ArgoCD health checks, we were already at about 60 connections at idle. When 50 users hit the dashboard simultaneously and triggered multiple API calls per page load, the pool exhausted within seconds and requests started queuing, then timing out, then failing.
The fix was not simply increasing the max_connections setting on PostgreSQL. Each connection consumes memory, approximately 5 to 10 MB per connection. At 100 connections, that is half a gigabyte just for connection overhead. Doubling it to 200 would consume a gigabyte, and the performance of PostgreSQL degrades as connection count increases because each connection is a separate process that competes for CPU and memory with every other connection.
Enter PgBouncer
PgBouncer is a lightweight connection pooler that sits between your application and PostgreSQL. Instead of each application instance maintaining its own pool of real PostgreSQL connections, they connect to PgBouncer, which maintains a smaller pool of real connections and multiplexes application requests across them.
Our pgbouncer.ini:

```ini
[databases]
financeops = host=127.0.0.1 port=5432 dbname=financeops

[pgbouncer]
listen_port = 6432
listen_addr = 0.0.0.0
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 400
default_pool_size = 25
min_pool_size = 5
reserve_pool_size = 5
reserve_pool_timeout = 3
```

The key setting is pool_mode = transaction. In transaction mode, a real PostgreSQL connection is assigned to a client only for the duration of a transaction. Between transactions, the connection is returned to the pool and can be used by another client. This means 400 application connections can share 25 real PostgreSQL connections, because most of the time an application connection is idle, waiting for the next HTTP request.
The Gotchas
Transaction pooling is not a drop-in replacement. It changes the behavior of PostgreSQL sessions in ways that can break your application if you are not careful.
- Session-level settings like SET search_path or SET timezone are lost between transactions because the underlying connection changes
- LISTEN/NOTIFY does not work because notifications are tied to a specific connection that may be reassigned
- Prepared statements in the default configuration are not supported because they are session-scoped
- Advisory locks must be acquired and released within a single transaction
- Connection-level authentication like SSL client certificates requires additional PgBouncer configuration
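For the first gotcha, one common workaround is to scope settings to the transaction with SET LOCAL, which lasts only until COMMIT and therefore travels with the transaction rather than the underlying session. A sketch (the `invoices` table is illustrative):

```sql
-- SET LOCAL is reverted at COMMIT, so it is safe under transaction pooling;
-- a plain SET would silently leak to whichever client gets the connection next.
BEGIN;
SET LOCAL search_path = financeops, public;
SELECT count(*) FROM invoices;  -- resolved against the transaction-local search_path
COMMIT;
```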
We hit the prepared statement issue first. Our ORM, Prisma, uses prepared statements by default for query performance. When PgBouncer reassigned the underlying connection between transactions, the prepared statement was gone and queries failed. The fix was adding the connection parameter pgbouncer=true to the connection string, which tells Prisma to use the simple query protocol instead of prepared statements.
```shell
# Before: direct PostgreSQL connection
DATABASE_URL="postgresql://user:pass@localhost:5432/financeops"

# After: PgBouncer with simple protocol
DATABASE_URL="postgresql://user:pass@localhost:6432/financeops?pgbouncer=true"
```

Capacity Planning
After deploying PgBouncer, I spent a day doing the capacity math I should have done before we hit the wall.
- Each dashboard page load triggers approximately 8 API calls
- Average API call holds a database connection for 15 milliseconds
- At 50 concurrent users with staggered page loads, peak concurrent database demand is approximately 12 active connections
- 25 pooled connections gives us headroom for 100 concurrent users with the current query profile
- The reconciliation worker uses a dedicated pool of 5 connections that are not shared with API traffic
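The 12-connection estimate follows from Little's law: average concurrency equals arrival rate times hold time. A sketch using the numbers above (the half-second stagger window is my assumption, not a figure from our logs):

```javascript
// Little's law: concurrent connections = arrival rate (calls/sec) x hold time (sec).
function peakActiveConnections({ users, callsPerPageLoad, holdSeconds, staggerWindowSeconds }) {
  const peakCallsPerSecond = (users * callsPerPageLoad) / staggerWindowSeconds;
  return peakCallsPerSecond * holdSeconds;
}

const demand = peakActiveConnections({
  users: 50,
  callsPerPageLoad: 8,
  holdSeconds: 0.015,           // 15 ms average connection hold per API call
  staggerWindowSeconds: 0.5,    // assumed: page loads spread over half a second
});
console.log(demand); // about 12 active connections at peak
```

Note how sensitive the result is to hold time: a slow query that holds a connection for 150 ms instead of 15 ms multiplies peak demand by ten.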
The dedicated pool for the reconciliation worker was important. Before PgBouncer, our reconciliation batch jobs and API requests competed for the same connections. During peak reconciliation, API latency would spike because long-running batch queries held connections that API requests needed. With PgBouncer, we separated them into different pools with independent connection limits.
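PgBouncer supports this directly: the [databases] section can register the same database under two names with different pool sizes. A sketch (the `financeops_worker` alias name is illustrative):

```ini
[databases]
; API traffic connects to this entry and shares the default pool
financeops = host=127.0.0.1 port=5432 dbname=financeops
; the reconciliation worker connects to this alias and gets its own 5-connection pool
financeops_worker = host=127.0.0.1 port=5432 dbname=financeops pool_size=5
```

A per-database pool_size overrides default_pool_size, so a long-running batch query in the worker pool can never starve API requests of connections.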
Lessons for Growing Applications
Operator mode means you inherit every downstream consequence. The code path is only half the story; the other half is how the decision warps planning, trust, and execution speed. I kept relearning that lesson while building ftryos and pipeline-sdk.
Connection pooling is not something you need to think about until you do, and when you do, you needed it yesterday. If your application talks to PostgreSQL and you do not have a connection pooler in front of it, add one now. The cost is an hour of setup. The cost of not having it is a production outage during your busiest onboarding week.
The default pg pool settings are designed for getting started, not for production. The default PostgreSQL max_connections is designed for a single application, not for a distributed system with multiple services. These defaults are fine until they are not, and the transition happens suddenly when your concurrent user count crosses a threshold. PgBouncer turned a 100-connection PostgreSQL instance into a system that comfortably serves 400 application connections. The total deployment time was 90 minutes including the Prisma configuration change. We should have done it six months earlier.