
More Predictable Shared Dyno Performance

In this post, we’d like to share an example of the kind of behind-the-scenes work that the Heroku team does to continuously improve the platform based on customer feedback.

The Heroku Common Runtime is one of the best parts of Heroku. It’s the modern embodiment of the principle of computing resource time-sharing pioneered by John McCarthy and later by UNIX, which evolved into the underpinnings of much of modern-day cloud computing. Because Common Runtime resources are safely shared between customers, we can offer dynos very efficiently, participate in the GitHub Student Program, and run the Heroku Open Source Credit Program.

We previously allowed individual dynos to burst their CPU use relatively freely as long as capacity was available. This is in the spirit of time-sharing and improves overall resource utilization by allowing some dynos to burst while others are dormant or waiting on I/O.
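Heroku hasn’t published the exact mechanism behind dyno CPU scheduling, but the bursting behavior described above maps closely to proportional-share CPU scheduling on Linux. The sketch below is illustrative only: the cgroup names (“dyno-a”, “dyno-b”) are hypothetical, and it assumes a cgroup v2 hierarchy mounted at /sys/fs/cgroup, where relative weights only matter under contention, so a busy group can use cycles its idle neighbors aren’t claiming.

```python
# Illustrative sketch only -- Heroku has not described its implementation.
# Shows the general cgroup v2 idea behind "liberal bursting": proportional
# CPU weights share the CPU fairly under contention, but any group may use
# spare cycles when its neighbors are dormant or blocked on I/O.
from pathlib import Path

CGROUP_ROOT = Path("/sys/fs/cgroup")  # assumes cgroup v2 mounted here

def set_cpu_weight(group: str, weight: int) -> None:
    """Give a cgroup a relative CPU weight (1-10000, default 100)."""
    group_dir = CGROUP_ROOT / group
    group_dir.mkdir(exist_ok=True)  # creating a directory creates the cgroup
    (group_dir / "cpu.weight").write_text(str(weight))

# Two hypothetical dynos sharing an instance: equal weights mean equal CPU
# under contention, but either one can burst to full CPU while the other idles.
set_cpu_weight("dyno-a", 100)
set_cpu_weight("dyno-b", 100)
```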

Liberal bursting has worked well over the years, and most customers got excellent CPU performance at a fair price. However, some customers using shared dynos occasionally reported degraded performance, typically due to “noisy neighbors”: other dynos on the same instance that, because of misconfiguration or malice, used much more than their fair share of the shared resources. This would manifest as random spikes in request response times or even H12 timeouts.

To help address the problem of noisy neighbors, over the past year Heroku has quietly rolled out improved resource isolation for shared dyno types to ensure more stable and predictable access to CPU resources. Dynos can still burst CPU use, but not as much as before. While less flexible, this means fairer and more predictable access to the shared resources backing Eco, Basic, Standard-1X, and Standard-2X dynos. We’re not changing how many dynos run on each instance; we’re only ensuring more predictable and fair access to resources. Also note that Performance, Private, and Shield dynos are not affected because they run on dedicated instances.
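Heroku hasn’t detailed how the tightened isolation is implemented. On Linux, a common way to bound bursting is a cgroup v2 CPU bandwidth quota (cpu.max), which caps the CPU time a group can consume per period even when the instance has idle capacity. The sketch below is hypothetical and uses made-up values; it only illustrates the “burst, but only up to a ceiling” behavior described above.

```python
# Illustrative sketch only -- not Heroku's actual mechanism.
# A cgroup v2 bandwidth quota (cpu.max) caps CPU time per period, so a noisy
# neighbor cannot crowd out others even when spare cycles are available.
from pathlib import Path

CGROUP_ROOT = Path("/sys/fs/cgroup")  # assumes cgroup v2 mounted here

def cap_cpu(group: str, quota_us: int, period_us: int = 100_000) -> None:
    """Limit a cgroup to quota_us of CPU time per period_us microseconds.

    For example, quota_us=50_000 with the default 100 ms period caps the
    group at half a CPU core, regardless of how idle the instance is.
    """
    group_dir = CGROUP_ROOT / group
    group_dir.mkdir(exist_ok=True)
    (group_dir / "cpu.max").write_text(f"{quota_us} {period_us}")

# Hypothetical cap for a small shared dyno: bursting is still allowed,
# but only up to the quota.
cap_cpu("dyno-a", quota_us=50_000)
```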

Want to see what we’re working on next or suggest improvements for Heroku? Check out our roadmap on GitHub! Curious to learn about all the other recent enhancements we’ve made to Heroku? Check out the ‘22 roundup and Q1 ’23 News blog posts.

Originally published: April 11, 2023
