Every engineering org eventually runs into this conversation. A production incident fires, connection errors start flooding the logs, and someone in the war room asks the uncomfortable question: whose problem is this?
The answer, like most things in distributed systems, is “it depends.” But in modern development, responsibility leans heavily toward the App Team, with the DB Team acting as a critical safety net. To understand why, it helps to think about it like a restaurant.
The Restaurant Analogy
Our App Team manages how many guests the waitstaff tries to seat. Our DB Team manages the size of the dining room itself. We can have the most efficient waitstaff in the world, but if we’re trying to seat 500 people in a 200-seat restaurant, somebody’s eating on the sidewalk.
Connection pooling lives right at this intersection, and both sides need to understand what the other is doing, or the whole system breaks down.
The App Team: Primary Owners
Since connection pooling typically happens on the client side, through libraries like HikariCP, SQLAlchemy, or DBCP, the App Team holds the steering wheel. That means they’re responsible for three core things:
Configuration
Someone on the App Team sets minPoolSize, maxPoolSize, and connectionTimeout. Whether those values were chosen thoughtfully or copy-pasted from a Stack Overflow answer from 2014 is a different story, but it’s their config either way.
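To make concrete what those knobs actually control, here is a minimal sketch using only the Python standard library. `TinyPool`, its sizes, and the timeout are all illustrative, not any real library’s API:

```python
import queue
import sqlite3

class TinyPool:
    """Illustrative pool: a bounded queue of pre-opened connections."""

    def __init__(self, db_path, max_pool_size=5, connection_timeout=0.1):
        self._idle = queue.Queue(maxsize=max_pool_size)
        # Eager fill, so here minPoolSize == maxPoolSize.
        for _ in range(max_pool_size):
            self._idle.put(sqlite3.connect(db_path))
        self._timeout = connection_timeout

    def acquire(self):
        # Blocks for up to connection_timeout, then fails --
        # the "pool exhausted" error your app sees under load.
        return self._idle.get(timeout=self._timeout)

    def release(self, conn):
        self._idle.put(conn)

pool = TinyPool(":memory:", max_pool_size=2, connection_timeout=0.05)
c1, c2 = pool.acquire(), pool.acquire()
try:
    pool.acquire()           # a third caller waits, then times out
except queue.Empty:
    print("pool exhausted")  # what a too-small pool looks like under load
pool.release(c1)
```

The point of the sketch: maxPoolSize is a hard cap on concurrent checkouts, and connectionTimeout is how long a caller queues before the app throws.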
Lifecycle Management
Connections need to be properly closed after use. Connection leaks almost always trace back to application code β a missing finally block, an uncaught exception that skips cleanup, a poorly written ORM interaction. That’s code-level responsibility.
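The fix for the leak pattern can be sketched with stdlib sqlite3, where `contextlib.closing` plays the role of the missing finally block:

```python
import sqlite3
from contextlib import closing

def fetch_one(db_path: str) -> int:
    # closing() guarantees conn.close() runs even if execute() raises --
    # the same job as a finally block, so errors can't leak connections.
    with closing(sqlite3.connect(db_path)) as conn:
        return conn.execute("SELECT 1").fetchone()[0]

print(fetch_one(":memory:"))  # -> 1
```

Note that sqlite3 connections used directly as context managers manage transactions, not closing; `closing()` (or an explicit try/finally) is what actually returns the connection.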
Scaling Awareness
This is the one that catches teams off guard most often. If you have 10 application instances running and each has a maxPoolSize of 20, you can hit your database with up to 200 simultaneous connections. Auto-scaling makes this worse: a traffic spike that spins up 50 pods can silently push you to 1,000 connections before anyone notices, long after the database has started rejecting them.
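The arithmetic is worth making explicit; a few throwaway lines, using the numbers from the example above:

```python
def total_connections(instances: int, max_pool_size: int) -> int:
    # Worst case: every instance fills its pool at the same moment.
    return instances * max_pool_size

print(total_connections(10, 20))  # 200 at steady state
print(total_connections(50, 20))  # 1000 after an auto-scaling spike
```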
The DB Team: The Governors
DBAs aren’t passive observers here. They set the rules of the dining room, and they have tools the App Team doesn’t.
Hard Limits
The max_connections parameter on your database isn’t a suggestion. Exceed it and the database starts refusing connections outright. The DB Team sets this ceiling based on the server’s available RAM and CPU: each connection consumes real memory, even an idle one.
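A back-of-the-envelope version of that sizing, with assumed numbers (the ~10 MB per-connection footprint and the reserved share are illustrative, not a Postgres formula):

```python
def connection_ceiling(ram_gb: float, reserved_gb: float,
                       mb_per_conn: float = 10.0) -> int:
    # RAM left after the OS, shared buffers, and caches, divided by an
    # assumed per-backend footprint. Real DBAs measure; this just shows
    # why the ceiling exists at all.
    usable_mb = (ram_gb - reserved_gb) * 1024
    return int(usable_mb // mb_per_conn)

print(connection_ceiling(16, 8))  # 819 -- why max_connections is finite
```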
Monitoring & Intervention
DBAs can see what the App Team can’t: idle connections that are sitting open and hogging resources, zombie connections left behind by crashed app instances, and session bloat that’s quietly degrading performance for everyone. When things get bad, they have the authority to kill sessions directly.
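That triage logic can be sketched offline. The rows below are made-up stand-ins for what a DBA would read out of Postgres’s pg_stat_activity view before deciding which sessions to terminate with pg_terminate_backend(pid):

```python
# Sample rows shaped like a pg_stat_activity readout (data is invented).
sessions = [
    {"pid": 101, "state": "active", "idle_seconds": 0},
    {"pid": 102, "state": "idle", "idle_seconds": 5400},  # zombie candidate
    {"pid": 103, "state": "idle", "idle_seconds": 30},
]

def stale_sessions(rows, max_idle_seconds=3600):
    # Flag connections that have sat idle past the threshold --
    # the ones hogging memory without doing work.
    return [r["pid"] for r in rows
            if r["state"] == "idle" and r["idle_seconds"] > max_idle_seconds]

print(stale_sessions(sessions))  # [102]
```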
Server-Side Proxies
Serverless architectures make this vivid: individual functions spin up and tear down constantly, each trying to open its own database connection, and the math gets ugly fast. This is where the DB Team steps in, managing tools like PgBouncer or Pgpool-II at the infrastructure level to multiplex thousands of short-lived app connections into a manageable pool on the database side.
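A sketch of what that multiplexing looks like in a PgBouncer config; the database name, host, and sizes here are placeholders:

```ini
; pgbouncer.ini (fragment) -- values are illustrative
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_port = 6432
pool_mode = transaction   ; hand a server connection out per transaction
max_client_conn = 2000    ; thousands of short-lived app connections in...
default_pool_size = 20    ; ...a handful of real connections to the DB
```

The ratio between max_client_conn and default_pool_size is the whole trick: the app side sees plenty of sockets while the database sees a small, steady pool.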
Ownership at a Glance
| Concern | App Team | DB Team |
|---|---|---|
| Pool Sizing | Based on app throughput needs | Based on server RAM/CPU capacity |
| Connection Leaks | Fix the code | Alert, monitor, kill sessions |
| Query Efficiency | Optimize execution time | Optimize indexing, disk I/O |
| Tooling | Hikari, DBCP, SQLAlchemy | PgBouncer, ProxySQL, RDS Proxy |
The Real Answer: Shared Governance
The teams that handle this best don’t fight over ownership; they build a tight feedback loop.
The DB Team publishes a connection budget per service. Think of it as an SLA in reverse: here’s how many connections your service is allowed to hold, and here’s why. The App Team takes that budget seriously and tunes their pool configuration to stay within it, factoring in their actual traffic patterns, pod counts, and deployment topology.
Both teams then watch the same metric: connection wait time. This single number tells a clean story. If applications are waiting too long to acquire a connection, we have two possible culprits:
- The pool is sized too small: an App Team fix
- The database is too slow to return connections: a DB Team fix
Neither team can diagnose this in isolation. That’s the point.
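One way to close that loop is to derive each pod’s pool size from the published budget. The headroom factor and the numbers here are illustrative, not a standard formula:

```python
def pool_size_per_pod(connection_budget: int, max_pods: int,
                      headroom: float = 0.9) -> int:
    # Split the DB Team's budget across the worst-case pod count,
    # keeping a margin so deploy overlap doesn't blow the ceiling.
    return max(1, int(connection_budget * headroom) // max_pods)

print(pool_size_per_pod(200, 50))  # 3 connections per pod when scaled wide
```

Small per-pod pools feel counterintuitive, but they are exactly what keeps a widely scaled service inside a fixed database ceiling.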
The Takeaway
Stop framing connection pooling as a hand-off. It’s not a ticket we throw over the wall to the DBAs, and it’s not a black box that lives entirely in the application config. It’s a shared surface area that requires both teams to understand each other’s constraints.
The App Team needs to think like operators, aware of how their scaling decisions ripple down to the database layer. The DB Team needs to think like partners, providing clear limits and visibility instead of just enforcing hard stops after things break.
Get those two things right, and the “who owns it” debate mostly answers itself.
See this in action at PGConf India 2026: “External Proxies and Poolers – A reality check in today’s tech stack,” presented by Jobin Augustine.
