Running Rails on Heroku Update

On February 16th, we published a blog post outlining five specific and immediate actions we would take to improve our Rails customers' experience with Heroku. We want to provide you with an update on where these things stand. As a reminder, here’s what we committed to do:

  1. Improve our documentation so that it accurately reflects how our service works across both Bamboo and Cedar stacks
  2. Remove incorrect and confusing metrics reported by Heroku or partner services like New Relic
  3. Add metrics that let customers determine queuing impact on application response times
  4. Provide additional tools that developers can use to augment our latency and queuing metrics
  5. Work to better support concurrent-request Rails apps on Cedar

We have resolved the first two items:

  1. Improving Documentation: We’ve updated our Dev Center docs and website to more accurately describe how routing occurs on Heroku (e.g., HTTP Routing on Bamboo and How Routing Works on the Bamboo Stack).

  2. Removing Incorrect Metrics: We’ve worked with New Relic to release an updated version of their monitoring tools, which we documented in Better Queuing Metrics With Updated New Relic Add-on.

We are working on the remaining three items in partnership with our early beta users:

  1. Adding New Metrics: We are improving the data available through our logs in two ways. First, we’re adding a transaction ID to every request log. Second, we are working on application middleware to provide additional metrics in customer logs (a rough sketch of what such middleware can look like follows this list).
  2. Providing Additional Measurement Tools: We have created a data visualization tool that analyzes log data in real time and helps observe an app’s performance across a few key Heroku-specific metrics. This tool is currently being beta tested.
  3. Improving Cedar Support: Unicorn is now the recommended Rails app server. We are currently beta testing larger dynos to support additional Unicorn processes.
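
To make the middleware idea a little more concrete, here is a rough, hypothetical sketch of the kind of per-request data such middleware could emit; this is not Heroku's implementation, and the RequestMetrics class name and log format are invented for illustration:

# Hypothetical sketch only -- not Heroku's middleware.
# Tags each request with the router-assigned request ID (when present) and
# logs how long the application spent servicing it.
class RequestMetrics
  def initialize(app)
    @app = app
  end

  def call(env)
    request_id = env["HTTP_X_REQUEST_ID"] || "unknown"
    started_at = Time.now
    status, headers, body = @app.call(env)
    elapsed_ms = ((Time.now - started_at) * 1000).round
    $stdout.puts "request_id=#{request_id} status=#{status} service=#{elapsed_ms}ms"
    [status, headers, body]
  end
end

In a Rails app, middleware like this can be enabled from an initializer with Rails.application.config.middleware.use RequestMetrics.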

Fulfilling these commitments is Heroku’s number one priority. We have dedicated engineering and product teams focused on improving the performance of Rails apps on Heroku.

We’ll detail the availability of our improvements in future blog posts. If you would like to beta test any of these new features, please drop us a note and let us know which features you are interested in testing.

Jacob Kaplan-Moss, Django Co-Creator, Talks Ecosystems at Heroku's Waza

The value of a network is proportional to the square of the number of users connected to the system, according to Metcalfe’s law. Jacob Kaplan-Moss, co-creator of Django, sees that same value in creating communities, or as he puts it, “ecosystems”. In his talk on building ecosystems at Waza last week, he highlighted three key principles for creating them:

  1. APIs to support extensibility
  2. Conservatism as a value
  3. Empowering the community

Whether building a platform or a framework, these key principles ring true. Check out his talk or read more on creating ecosystems below:

APIs to support extensibility

You don’t choose to become an ecosystem, but you can choose to do the groundwork. By defining APIs you enable others to build on the foundation you lay. Django core member Russell Keith-Magee summed this up quite clearly on the Django mailing list: “I’m not sure if this is appropriate, but it should be possible. If it’s not possible we should build the APIs to support it.” This sentiment is at the core of building ecosystems.

Django succeeded at this with its apps API, and has a full directory in Django Packages to help others discover what has already been built. At Heroku we’re seeing the same phenomenon with our buildpacks API, which has resulted in numerous third-party buildpacks. By building APIs and empowering users, you solve more problems.

Conservatism as a value

There’s great value in the “move fast and break stuff” concept. However, creating explicit, stable APIs and contracts is critical for an ecosystem to thrive over the longer term. For Django as a framework, this means moving slowly and reliably when it comes to modifying any APIs in the core framework. For Heroku as a platform, it means a dedication to erosion resistance. Jacob cleanly highlights this by pointing out that “Interoperability only works when the common parts don’t change.”

Empowering the community

Ecosystems create ways for the community to engage and contribute. With an ecosystem there are even more ways in which users can contribute, and those contributions don’t have to be code. When a project first starts, its creator tests it and consumes it, acting as both developer and QA. As it gains popularity, a project may attract new developers, then more users, who submit their own bug reports and issues. These newcomers contribute documentation patches, triage tickets, and recruit others to help with the project. Where once a single developer committed quietly, now a thriving ecosystem of contributors and consumers exists.

This change from product to ecosystem doesn’t happen overnight; it must be nurtured and nudged. Kaplan-Moss, a self-proclaimed “documentation nut”, has done just that by cultivating a better understanding of Django through documentation and by carefully working with patch and issue submitters. Jacob explains that if you document your process, you increase transparency and understanding of the way things work. Django does this by clearly documenting how users can contribute back to Django; at Heroku we’ve done this by documenting our methodology on how apps should be built.

Conclusion

Building a great product isn’t enough, as Jacob points out: you’ve got to create a vibrant community and build an ecosystem. If it weren’t for Django’s community, there wouldn’t be over 3500 Python packages for Django. If it weren’t for our community, you wouldn’t be able to deploy Go, Common Lisp, or GeoDjango to Heroku with just one command.

Learn more about ecosystems by listening to Jacob’s talk, check out other talks from Waza available now, or engage with the ecosystem of your choice today.

Matz on Ruby 2.0 at Heroku's Waza

Matz, the creator of Ruby, spoke at Waza for the 20th anniversary of the language and the release of Ruby 2.0. If you weren't in the sold-out crowd, not to worry. Information should flow freely and experiences should be shared; in that spirit, you can watch Matz's talk right here, then read about what's new in this version of Ruby and how to run it on Heroku.

The slides from the talk are available on speakerdeck.

Keep reading for more information on Ruby 2.0 or check out our first batch of videos from Waza 2013. To stay up to date as we post new videos, follow us @heroku.

Compatibility

Iterating quickly means happier developers and happier customers. We optimize our platform to allow you to develop, stage, and deploy faster. Not only does this make running software easier, but it makes trying new technologies like Ruby 2.0 as simple as spinning up a new app. During his talk at Waza, Matz mentioned that, while 1.9.3 is popular now, it took years for the 1.9 series to gain traction after 1.8.7. With the release of Ruby 2.0, Matz hopes to reduce upgrade barriers and allow developers to iterate more quickly using newer, faster, and better tools.

Ruby 2.0 was written to be backwards compatible, and it works with Rails 3.2.13 out of the box. If your Ruby apps are running on 1.8.7, you should upgrade. Ruby 1.8.7 reaches End of Life (EOL) in three months, in June 2013. EOL for Ruby 1.8.7 means no security or bug patches will be provided by the maintainers. Not upgrading means you're potentially opening up your application and your users to vulnerabilities. Don't wait until the final hour; upgrade now to be confident and secure.

Speed

Ruby 2.0 has a faster garbage collector and is Copy on Write friendly. Copy on Write (COW) is an optimization that can reduce the memory footprint of a Ruby process when it is copied. Instead of allocating duplicate memory when a process is forked, COW allows multiple processes to share the same memory until one of the processes needs to modify a piece of information. Depending on the program, this optimization can dramatically reduce the amount of memory used to run multiple processes. Most Ruby programs are memory bound, so reducing your memory footprint with Ruby 2.0 may allow you to run more processes in fewer dynos.
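
For a concrete picture of what this looks like, here is a minimal sketch (assuming MRI on a Unix-like system) of a parent process sharing a large, read-only structure with a forked child; Ruby 2.0's COW-friendly garbage collector is what keeps these shared pages from being copied the first time the child runs a GC pass:

big_table = Array.new(1_000_000) { |i| i * 2 }   # allocated once, in the parent

pid = Process.fork do
  # The child can read the parent's data without the operating system
  # physically copying the underlying memory pages; pages are only
  # duplicated if one side writes to them.
  puts "child sees #{big_table.length} entries"
end

Process.wait(pid)
puts "parent finished"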

If you’re not already running a concurrent backend, consider trying the Unicorn web server.

Features

In addition to running faster than 1.9.3 and having a smaller footprint, Ruby 2.0 adds a number of new features to the language, including:

  • Keyword arguments
  • Lazy enumerators
  • Module#prepend
  • Refinements (experimental)
  • %i and %I literals for arrays of symbols
  • UTF-8 as the default source encoding

The full list of new features is more than we can cover here. If you really want to dig in, check the Ruby changelog.
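
As a quick taste, here is what two of those additions, keyword arguments and lazy enumerators, look like in practice:

# Keyword arguments with defaults
def greet(name, greeting: "Hello")
  "#{greeting}, #{name}!"
end

greet("Matz")                          # => "Hello, Matz!"
greet("Matz", greeting: "Konnichiwa")  # => "Konnichiwa, Matz!"

# Lazy enumerators compute only as much of an infinite sequence as you ask for
(1..Float::INFINITY).lazy.map { |n| n * n }.first(5)  # => [1, 4, 9, 16, 25]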

Running 2.0 on Heroku

If you’re interested in taking advantage of these new features give it a try on Heroku today. To run Ruby 2.0 on Heroku you'll need this line in your Gemfile:

ruby "2.0.0"

Then commit to git:

$ git add .
$ git commit -m "Using Ruby 2.0 in production"

We recommend that you test your app using 2.0 locally and deploy to a staging app before pushing to production. Now when you $ git push heroku master, our Ruby buildpack will see that you've declared your Ruby version and make sure you get the right one.

20 years of simplicity, elegance, and programmer happiness

Heroku, since its founding, has been aligned with the key values of Ruby – simplicity, elegance, and programmer happiness. Heroku still believes in the power and flexibility of Ruby, and we've invested in the language by hiring Yukihiro "Matz" Matsumoto, Koichi Sasada and Nobuyoshi Nakada. We would like to thank them and the whole Ruby core team for making this release happen. Join us in celebrating Ruby's successes and in looking forward to the next twenty years by trying Ruby 2.0 on Heroku today.

Idea to Delivery: Application Development in the Modern Age. Adam Wiggins at Waza 2012 [video]

Great coders know their technology intimately. And they know how to choose it. Truly awesome application developers know more. They know the human side of technology. They know technique. They focus on their method—their practice.

In 2000 Heroku co-founder and CTO Adam Wiggins saw this more clearly than ever before. He read The Pragmatic Programmer by Andy Hunt and Dave Thomas. The book, as Adam explains in this thought-provoking (and method-shifting) Waza talk, showed him that his work could not only be about technology. It HAD to be about technique.

Heroku Co-founder, CTO and general badass, Adam Wiggins

Adam discusses a series of techniques:

  • Historical ones such as agile development and the power of rapid and flexible response to change
  • Frameworks, which helped drive speed by allowing developers to focus on the specifics of their application without having to reinvent basic wheels of, say, state management and request handling
  • The cloud, which removed a huge burden of selecting, purchasing, and maintaining hardware
  • Software as a Service (SaaS), which, in addition to providing incredible benefit to end users, has created a development culture of continual, rapid improvement

Technique means a lot. How we think about and describe what we do means a great deal too. Adam makes a strong point when he compares thinking of and describing ourselves as programmers vs. application developers. Application developers, Adam says, think more broadly. They think about the end-to-end process of developing and deploying an application.

By taking ownership of thinking across the full spectrum from idea to delivery, application developers play a far more strategic role in their app—and their company’s—growth and success. Developers who think and work like this are truly “the new kingmakers” and a “powerful force to be reckoned with.”

Adam shares newer, even more powerful techniques that will help a developer who wants to think more broadly, act more strategically, and increase efficiency in her or his organization. Adam also discusses a number of these in The Twelve-Factor App:

Deploying from Day One and Continuous Development/Deployment

Having one version-controlled codebase that is deployed in various states of completion across several instances, from the live production app to development environments on different employees’ machines, means we can all move faster and stay in tune with one another. It means we can deploy from day one and rapidly improve our products via continuous development and deployment.

Development and Production Parity: Keeping development, staging, and production as similar as possible

Historically, there have been substantial gaps between development (a developer making live edits to a local deploy of the app) and production (a running deploy of the app accessed by end users). These gaps, as Adam discussed at Waza and in The Twelve-Factor App, manifest in three areas:

  • The time gap: A developer may work on code that takes days, weeks, or even months to go into production.
  • The personnel gap: Developers write code, ops engineers deploy it.
  • The tools gap: Developers may be using a stack like Nginx, SQLite, and OS X, while the production deploy uses Apache, MySQL, and Linux.

Staying Close to Production

As we've come to appreciate agile development techniques and put such a sharp focus on shipping features, minimizing each of these gaps allows developers to:

  • Make the time gap small: a developer may write code and have it deployed hours or even just minutes later.
  • Make the personnel gap small: developers who wrote code are closely involved in deploying it and watching its behavior in production.
  • Make the tools gap small: keep development and production as similar as possible.

Conclusion

We have come a long way since The Pragmatic Programmer. But it remains highly influential and set much of the tone for the many pragmatic developments in technique and practice that have come to the fore in recent years. You could say that Adam reading The Pragmatic Programmer back in 2000 is one of the reasons we invite developers to come together for Waza, which happens tomorrow. And one of the reasons we share the talks freely online for those who cannot make it. Waza is all about technique, about personal improvement for developers, about, as the subtitle of The Pragmatic Programmer says, the journey from journeyman to master.

Adding Concurrency to Rails Apps with Unicorn

With support for Node.js, Java, Scala, and other languages and frameworks that handle requests concurrently, Heroku allows you to take full advantage of concurrent request processing and get more performance out of each dyno. Ruby should be no exception.

If you are running Ruby on Rails with Thin, or another single-threaded server, you may be seeing bottlenecks in your application. These servers only process one request at a time and can cause unnecessary queuing. Instead, you can improve performance by choosing a concurrent server such as Unicorn, which will make your app faster and make better use of your system resources. In this article we will explore how Unicorn works, how it gives you more processing power, and how to run it on Heroku.

Concurrency and Forking

At the core of Heroku is the Unix Philosophy, and we see this philosophy at work in Unicorn. Unicorn uses the Unix concept of forking to give you more concurrency.

Process forking is a critical component of Unix's design. When a process forks it creates a copy of itself. Unicorn forks multiple OS processes within each dyno to allow a Rails app to support multiple concurrent requests without requiring them to be thread-safe. This means that even if your app is only designed to handle one request at a time, with Unicorn you can handle concurrent connections.

Unicorn leverages the operating system to do most of the heavy lifting when creating and maintaining these forks. Unix-based systems are extremely efficient at forking, and even take advantage of Copy on Write optimizations that are similar to those in the recently released Ruby 2.0.
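
As a rough illustration of the pattern (this is a simplified sketch of a generic preforking server, not Unicorn's actual code), a master process can open a single listening socket and then fork workers that all accept connections from it:

require "socket"

WORKER_COUNT = 3
listener = TCPServer.new(9292)   # one listening socket, opened by the master

WORKER_COUNT.times do
  Process.fork do
    loop do
      client = listener.accept   # each idle worker waits here; the OS hands a connection to one of them
      client.write "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"
      client.close
    end
  end
end

Process.waitall                  # the master stays alive, supervising its workers

Unicorn adds much more on top of this, such as worker health checks and graceful restarts, but the fork-then-accept model is the core idea.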

Unicorn on Rails

By running Unicorn in production, you can significantly increase throughput per dyno and avoid or reduce queuing when your app is under load. Unicorn can be difficult to set up and configure, so we’ve provided configuration documentation to make it easier to get started.

Let's set up a Rails app to use Unicorn.

Setting up Unicorn

First, add Unicorn to your application Gemfile:

gem 'unicorn'

Run $ bundle install. Now you are ready to configure your app to use Unicorn.

Create a configuration file for Unicorn at config/unicorn.rb:

$ touch config/unicorn.rb

Now we're going to add Unicorn-specific configuration options, which we explain in detail in Heroku's Unicorn documentation:

# config/unicorn.rb

# Run three worker processes per dyno, terminate requests that take longer
# than 30 seconds, and load the app before forking so workers can share
# memory via Copy on Write.
worker_processes 3
timeout 30
preload_app true

before_fork do |server, worker|

  # Heroku sends TERM on shutdown; the master translates it into the QUIT
  # signal that tells Unicorn to shut down gracefully.
  Signal.trap 'TERM' do
    puts 'Unicorn master intercepting TERM and sending myself QUIT instead'
    Process.kill 'QUIT', Process.pid
  end

  # Disconnect from the database before forking; each worker reconnects after the fork.
  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|

  # Workers ignore TERM and wait for the QUIT the master passes along.
  Signal.trap 'TERM' do
    puts 'Unicorn worker intercepting TERM and doing nothing. Wait for master to send QUIT'
  end

  # Re-establish the database connection in each forked worker.
  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.establish_connection
end

This default configuration assumes a standard Rails app with Active Record; see Heroku's Unicorn documentation for more information. You should also get acquainted with the different options in the official Unicorn documentation.

Now that your app is set up to use Unicorn, you'll need to tell Heroku how to run it in production.

Unicorn in your Procfile

Change the web command in your Procfile to:

web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb

Now try running your server locally with $ foreman start. Once you're happy with your changes, commit to git, deploy to staging, and, when you're ready, deploy to production.

A World of Concurrency

With the recent release of the Rails 4 beta, which is thread-safe by default, it's becoming increasingly clear that Rubyists care about concurrency.

Unicorn gives us the ability to take multiple requests at a time, but it is by no means the only option when it comes to concurrent Rack servers. Another popular alternative is Puma, which uses threads instead of forking processes. Puma does, however, require that your code is thread-safe.
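
If you'd like to try Puma, the setup is similar in shape to the Unicorn steps above. As a rough sketch (the thread counts below are arbitrary and should be tuned for your app, and remember that your code must be thread-safe), add the gem to your Gemfile:

gem 'puma'

and point the web process in your Procfile at it:

web: bundle exec puma -t 5:5 -p $PORT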

If you've never run a concurrent server in production, we encourage you to spend some time exploring the ecosystem. After all, no one knows your app's requirements better than you.

Whatever you do, don't settle for one request at a time. Demand performance, demand concurrency, and try Unicorn today.
