Application deployment is changing. In relatively short order I’ve gone from buying hardware, to monthly hosting, to metered CPU time, and from building my open-source software manually, to package managers, to fancy config tools and recipes to pre-build whole machine images. What’s next?
The Old Way
I can deploy Rails apps in a traditional hosting environment pretty quickly. For a small app, I might make a new unix user and database on a personal Slicehost slice and do a quick code checkout. After setting up a few permissions and twiddling my Nginx config, my app is online in fifteen minutes or so. Not bad at all.
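For illustration, the Nginx twiddling usually amounts to a vhost along these lines (hostname, paths, and port are hypothetical, and the app server behind it could be Mongrel, Thin, or similar):

```nginx
# Hypothetical vhost for a small Rails app running on localhost:3000
upstream myapp {
  server 127.0.0.1:3000;
}

server {
  listen 80;
  server_name myapp.example.com;
  root /home/myapp/current/public;   # serve static assets directly

  location / {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # hand everything else to the Rails process
    proxy_pass http://myapp;
  }
}
```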
For a bigger app, it takes more time. In days of yore I’d build a server from parts or buy one of the excellent Pogo Linux servers and put it in a colo. OS install, Xen setup, guest OS install, OS package setup, security lockdown, then on to the task of all the stack setup (database, Rails, source control) specific to the application to be run.
Once you get into multiple servers, the complexity multiplies out quickly. There are dozens of small decisions to make about how resources are allocated. More RAM or more CPU for the database machine? One slave database, or two? Hardware load balancer vs. multiple IPs vs. something else? All of these require detailed knowledge of hardware and software deployment, combined with a huge amount of predictive guesswork to try to foresee the quantity and type of load that the app being deployed is likely to face in the next 3, 6, or 12 months.
There’s an enterprisey word for this process: provisioning.
The New Way
Amazon’s EC2 is the vanguard of the new generation of cloud computing. Provisioning a server was formerly a phone call and days or weeks of waiting. Now it’s a REST call and 30 seconds of waiting. Awesome.
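As a sketch of what that REST call looks like: EC2's Query API takes a signed GET request. The snippet below builds (but doesn't send) a `RunInstances` request using Signature Version 2 signing; the AMI id, credentials, and API version string are placeholders, not real values.

```python
# Sketch: constructing a signed EC2 "RunInstances" Query API request
# (Signature Version 2). All ids and keys below are placeholders.
import base64
import hashlib
import hmac
import time
from urllib.parse import quote

def sign_request(params, secret_key, host="ec2.amazonaws.com"):
    """Return a signed query string for an EC2 Query API call."""
    params = dict(params)
    params["SignatureMethod"] = "HmacSHA256"
    params["SignatureVersion"] = "2"
    params["Timestamp"] = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
    # Canonical string: sorted, percent-encoded key=value pairs
    canonical = "&".join(
        f"{quote(k, safe='')}={quote(v, safe='')}"
        for k, v in sorted(params.items())
    )
    string_to_sign = f"GET\n{host}\n/\n{canonical}"
    signature = base64.b64encode(
        hmac.new(secret_key.encode(), string_to_sign.encode(),
                 hashlib.sha256).digest()
    ).decode()
    return canonical + "&Signature=" + quote(signature, safe="")

query = sign_request(
    {
        "Action": "RunInstances",
        "ImageId": "ami-12345678",        # placeholder AMI
        "InstanceType": "m1.small",
        "MinCount": "1",
        "MaxCount": "1",
        "AWSAccessKeyId": "AKIAEXAMPLE",  # placeholder credentials
        "Version": "2009-04-04",          # placeholder API version
    },
    secret_key="secretEXAMPLE",
)
# A single HTTP GET to https://ec2.amazonaws.com/?<query> boots the instance.
```

That one GET is the entire provisioning step; everything that used to be a purchase order is now a query string.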
But this is a very raw resource: there are still many provisioning decisions to be made, software to set up, and then on to deployment of the app itself. Excellent services like RightScale and Engine Yard’s new offering Solo can help automate a lot of this process and minimize the management burden. So far, so good.
But what if provisioning were instantaneous, requiring no upfront decisions about resource allocation? What if you didn’t need to think at all about the server hardware or software, but only about your application code? How would this change how we build applications?
When a technology breakthrough makes something smaller, or faster, or cheaper, it doesn’t just change current use; it creates whole new types of use. If app deployment is instantaneous, without having to plan for resources, allocate servers, or beg approval from the IT department, what kind of apps will we build that don’t get built today?
In the past decade we’ve seen widespread adoption of agile methodologies in the development of software. This has transformed software development from a slow, failure-prone, and sometimes downright painful process into one that is fast, fun, and fulfilling. But deployment of applications has changed hardly at all during that same time period. The way you deploy a Rails, Merb, Sinatra, or Django app today is very similar to how you deployed a Perl app in 1999.
This coming decade is going to see an agile revolution for the deployment side of the equation. The manual, guesswork-heavy methods of provisioning that we use today are soon to be superseded by methods that will make deploying an app fast, easy, and fun.
No one knows quite what that will look like yet (though at Heroku we certainly have our own opinion), but one thing is for sure: the time is ripe for a revolution in IT.