Posted by Craig Kerstiens
Many of our customers have recently asked about the connection limit settings on our new Heroku Postgres tiers. Previously we allowed 500 connections across all production databases; now, however, the number of connections allowed varies by plan, with only the larger plans offering 500. We've explained the reasoning behind this in individual conversations with customers, and feel it's worth sharing more broadly here.
For some initial background, these connection limit updates are intended as an improvement for anyone running a Heroku Postgres database, both by providing guidelines and by setting expectations around what a database instance is capable of. What we've observed from running the largest fleet of Postgres databases in the world (over 750k) and from heavy engagement with the Postgres community is that there are two real physical constraints in Postgres itself when it comes to the number of connections. Setting a high limit has a performance impact under normal operation, even when most of the available slots are never used. The situation worsens when many connections are actually established, even if they're mostly idle. And with an extremely high limit, when you do encounter a problem it surfaces as a much vaguer error, making troubleshooting more painful.
The first limitation is that PostgreSQL itself doesn't scale to a large number of connections. The Postgres community and large users of Postgres do not encourage running anywhere close to 500 connections or above. To get a bit more technical, the size of various data structures in Postgres, such as the lock table and the procarray, is proportional to the maximum number of connections. Postgres must scan these structures frequently, so a higher limit makes that work more expensive even when most of the slots sit unused.
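To give a feel for this, here is a toy model (not Postgres source code, just an illustration of the scaling argument): if each statement must examine one slot per *possible* backend when taking a snapshot, the bookkeeping cost grows with the configured limit, not with the number of connections actually in use.

```python
# Toy model of snapshot bookkeeping cost. The 10 MB / per-slot figures
# and the function below are illustrative assumptions, not Postgres
# internals: the point is that work is proportional to max_connections.

def snapshot_scan_ops(max_connections: int, statements: int) -> int:
    """Total slots examined: one slot per possible backend, per statement."""
    return max_connections * statements

# The same 10,000-statement workload pays 5x the scanning overhead when
# the limit is 500 instead of 100, even if most slots are never used.
ops_at_100 = snapshot_scan_ops(100, 10_000)  # 1,000,000 slot scans
ops_at_500 = snapshot_scan_ops(500, 10_000)  # 5,000,000 slot scans
```

This is why a generous-looking limit isn't free: you pay for the headroom on every statement, connected or not.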
The second limitation is that each connection is essentially a process fork with a resident memory allocation of roughly 10 MB, plus whatever its queries consume. We build some slack into limits like the 60-connection cap on Yanari, a plan with 400 MB of memory. Previously, as your connection count grew you would start receiving "out of memory" errors from Postgres. These "out of memory" errors can occur for any number of reasons, and too many connections is just one cause, though by far the most common one. Instead of forcing you to evaluate every possible cause of the error, we want to make it clear and simple when you hit a limitation around connections. Now when you hit our connection limit you'll receive an alert with clear guidance on these details, so that you can reduce your connection usage or scale up to a larger instance.
At the same time, we understand that you may want more than 500 total connections to your database for any number of valid reasons. When that many processes need access to the database, a production-grade connection pooler is the correct approach. For that reason, a Heroku Postgres staff member created the pgbouncer buildpack, which places what's widely considered the best Postgres connection pooler right on your dynos. We're continuing to productize this further, but we're happy to work with customers today who have run into this limitation.
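For a sense of what pooling buys you, here is a minimal PgBouncer configuration sketch (the database name, host, and sizes are hypothetical placeholders, not values from the buildpack):

```ini
; pgbouncer.ini sketch: many application clients share a small pool of
; real server connections. Names and sizes below are illustrative only.
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
; transaction pooling: a server connection is held only for the
; duration of a transaction, then returned to the pool
pool_mode = transaction
; up to 500 application clients...
max_client_conn = 500
; ...multiplexed over just 20 actual Postgres connections
default_pool_size = 20
```

With transaction pooling, hundreds of mostly idle application connections map onto a handful of real backends, which keeps you comfortably under the database's connection limit.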
Our goal with these new connection limits is to make it easier for you to do the right thing for your database. As always, we welcome your feedback on this or other product areas at email@example.com.