Written by Jared Haworth on
A lot has been said about running Rails applications on shared hosting, most visibly in these two articles by David Heinemeier Hansson and Dallas Kashuba (on the DreamHost blog). For the past eight months, Alloy Code and Your Garage Online have been running on a single 256 MB slice from Slicehost. While it was possible to get the two sites to happily co-exist, I've always regarded the setup as something of a delicate house of cards, just waiting for a gust of wind or a slammed door in a neighboring apartment to knock the whole stack down. Well, two weeks ago, I made a serious change to my back-end configuration, and I couldn't be happier. Indulge me for a moment, because I feel a little history is appropriate…
Initially, the slice was configured with Apache 2.2 acting as a proxy for two mongrel clusters. The resources of the slice allowed me to run two mongrels for the blog and three for Your Garage Online. Since the majority of the blog's content is cached static pages, two mongrels seemed like plenty; the only cost was a slight delay when accessing the admin interface or other dynamic content.
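For reference, that original Apache-as-proxy arrangement would have looked roughly like the following sketch. The ports, balancer name, and paths here are illustrative assumptions, not my exact configuration:

```apache
# Hypothetical mod_proxy_balancer setup for a three-mongrel cluster
<Proxy balancer://ygo_cluster>
  BalancerMember http://127.0.0.1:8000
  BalancerMember http://127.0.0.1:8001
  BalancerMember http://127.0.0.1:8002
</Proxy>

<VirtualHost *:80>
  ServerName yourgarageonline.com
  DocumentRoot /path/to/ygo/public
  ProxyPass / balancer://ygo_cluster/
  ProxyPassReverse / balancer://ygo_cluster/
</VirtualHost>
```

The blog's two-mongrel cluster was a second copy of the same pattern with its own balancer and ServerName.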
Then I came across Ezra's article on nginx, and I invested the better part of a weekend switching from a pure Apache/Mongrel setup to something of a strange hybrid. Since nginx wouldn't handle SVN's WebDAV connections properly, I had to keep an Apache instance around, restricted to listening on a high-numbered port, with nginx forwarding requests intended for my SVN repositories back to Apache. Meanwhile, nginx had two other listeners set up: one forwarding Alloy Code traffic to the blog's mongrel cluster, and one forwarding Your Garage Online traffic to the other cluster.
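In outline, that hybrid front end looked something like the nginx sketch below. The /svn location and the upstream ports are assumptions for illustration; only Apache's 8010 listener matches what I actually ran:

```nginx
# Hypothetical nginx front end: SVN traffic back to Apache,
# application traffic on to the mongrel cluster
upstream ygo_mongrels {
  server 127.0.0.1:8000;
  server 127.0.0.1:8001;
  server 127.0.0.1:8002;
}

server {
  listen 80;
  server_name yourgarageonline.com;
  root /path/to/ygo/public;

  # Hand SVN/WebDAV requests back to the Apache instance on its high port
  location /svn {
    proxy_pass http://127.0.0.1:8010;
  }

  # Everything else goes to the mongrels
  location / {
    proxy_set_header Host $host;
    proxy_pass http://ygo_mongrels;
  }
}
```

A second server block, with its own upstream, handled the blog's traffic the same way.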
A few months ago, I heard about Thin while listening to the Rails Envy Podcast, and the idea of using Unix sockets instead of TCP to forward proxy requests really appealed to me. Unfortunately, even though I had upgraded my slice's copy of Ruby to 1.8.6, I was never able to get the Thin gem to install. Later, I discovered it was because my RubyGems installation was linked against the older, 1.8.5 version of the ruby binary. I never did find a good way to switch the gem command's associated ruby version; in the end, I had to reinstall RubyGems from scratch, using the 1.8.6 binary to run the setup.rb file.
So, when Passenger came out, the configuration-tweaker in me was very excited to give it a try. Once again, I set aside the better part of a weekend to get the installation going. After sorting out the gems issue above, and recompiling Apache to include prefork support, I was ready to roll.
The installation instructions provided were enough to get me 90% of the way there. I had overlooked the fact that, eight months ago, I had told Apache to listen for connections only on port 8010 (part of the nginx-SVN debacle). And, it turns out that when running two Rails applications on a single host with a single IP address, I needed to provide a little extra context to make certain the static content was handled properly.
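The port problem came down to a one-line change back in the Apache configuration; something along these lines, though the exact file name varies by Apache layout:

```apache
# Apache ports configuration (e.g. ports.conf or httpd.conf)
# Re-enable the standard HTTP port alongside the old high-numbered
# listener left over from the nginx/SVN era
Listen 80
Listen 8010
```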
Since the hostname of the box itself is alloycode.com, the setup for the blog is as sparse as the Passenger sample:
<VirtualHost *:80>
  ServerName alloycode.com
  DocumentRoot /path/to/blog/public
</VirtualHost>
Setting up Your Garage Online was a little trickier. My first attempt was just to mirror the same code as above, but with the proper DocumentRoot settings. Unfortunately, that meant that while the Rails stack did load and process the requests properly, none of the stylesheets, images, javascripts, or other static assets could be loaded. After carefully investigating each of the options in the Passenger documentation, I managed to put together this VirtualHost definition that seemed to do the trick:
<VirtualHost *:80>
  ServerName yourgarageonline.com
  ServerAlias www.yourgarageonline.com
  DocumentRoot /path/to/ygo/public
  RailsBaseURI /
  <Directory "/path/to/ygo/public">
    Options FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
  </Directory>
</VirtualHost>
And like that, it was as if a switch had been flipped: everything was working 100%. It's now been two weeks since I put Passenger on that slice, and I haven't had any outages or runaway memory problems. I also no longer have to worry about my web server and my mongrel cluster getting out of sync, resulting in dreaded 503 Service Unavailable errors.
Kudos to the Phusion folks, this is one incredible release.