Brand new server cluster at b5media

This past week, the technology team at b5media threw on the afterburners to get us moved onto an entirely new infrastructure.  After months of planning and investigating a dozen hosting providers, we have finally flipped the switch on the next phase of growth.

Having outgrown our previous datacenter and infrastructure, we have moved to brand new servers located at ServerBeach.  We have taken a different philosophical approach to our infrastructure than we have in the past, one designed to allow for future growth.  Rather than the custom, purpose-built servers we had in our old datacenter, we have moved to plain vanilla commodity servers, which are inexpensive and easy to replicate as we grow.

By moving to more vanilla servers, we are actually decreasing the power of each machine.  Although these machines use newer, faster processors, they are not the high-end quad-core ones we had in our old infrastructure.  By sacrificing a bit of power, we save a lot in cost.  To compensate, we have employed more machines… almost twice as many.  Without touching anything else, this change alone would have a significant impact on our speed.

We are also transitioning away from an NFS-mounted shared filesystem to local filesystems.  Now our web pages are loaded from a local hard drive instead of a network-mapped drive.  This change, too, would give us a huge performance boost on its own.
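
For illustration, the difference amounts to swapping a shared NFS export for a plain local mount on each web node (the hostnames, paths, and devices below are hypothetical, not our actual configuration):

    # /etc/fstab on a web node, old approach (illustrative):
    fileserver:/export/wwwroot  /var/www  nfs   ro,noatime        0 0

    # /etc/fstab on a web node, new approach (illustrative):
    /dev/sda3                   /var/www  ext3  defaults,noatime  0 0

The tradeoff is that deploys now have to push code out to every node instead of writing to one shared export, but every page read stays off the network.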

In order to remove NFS from the infrastructure, we needed a different solution for WordPress caching.  In the past, we used a combination of WP-Cache and WP-SuperCache, which create static files to be served.  We have now rolled out batcache to our sites, which uses memcached to store rendered pages in memory.  Again, this change has had a massive impact on our speed.  Initial tests show the performance of batcache to be phenomenal!
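
Batcache itself is a WordPress/PHP plugin, so the snippet below is only a conceptual sketch of the idea in Python (the pymemcache client, function names, and TTL are our own illustrative assumptions, not batcache's actual code): fully rendered pages are stored in memcached, so repeat requests are served from memory without touching PHP or the database.

    # Conceptual sketch of memcached full-page caching (not batcache's code).
    # Assumes a memcached daemon on localhost and the pymemcache library.
    from pymemcache.client.base import Client

    cache = Client(("localhost", 11211))
    TTL_SECONDS = 300  # hypothetical freshness window for a cached page

    def render_page(path: str) -> bytes:
        # Stand-in for the expensive WordPress/PHP render.
        return ("<html><body>Rendered " + path + "</body></html>").encode()

    def serve(path: str) -> bytes:
        key = "page:" + path
        page = cache.get(key)                         # hit: serve straight from memory
        if page is None:
            page = render_page(path)                  # miss: render once...
            cache.set(key, page, expire=TTL_SECONDS)  # ...and share it cluster-wide
        return page

Because memcached is shared across the cluster, a page rendered on one web node is immediately a cache hit on all the others, which is what makes this approach a good fit for the new NFS-free layout.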

We have replaced our hardware-based load balancer with the software-based nginx load balancer.  This allows us to keep to our philosophy of using commodity hardware while being ridiculously fast.
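
For the curious, the shape of that setup is roughly the following (a minimal sketch; the upstream name and addresses are illustrative, not our real topology):

    # Minimal nginx round-robin load balancing sketch (illustrative addresses).
    events {
        worker_connections 1024;
    }

    http {
        upstream web_nodes {
            server 10.0.0.11;   # commodity web node
            server 10.0.0.12;   # commodity web node
            server 10.0.0.13;   # add a line here to scale out
        }

        server {
            listen 80;
            location / {
                proxy_pass http://web_nodes;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }
    }

Adding capacity is then just racking another commodity box and appending one server line.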

When you put it all together, these changes make our new infrastructure faster and much more robust.  They also lay the foundation to continue scaling out by adding machines as needed.  And this is only the first phase, with more changes to come!

Huge kudos to the team for pulling off an extremely complex migration in an unexpectedly short period of time.  The entire team contributed in some way, and Lee and Brian in particular plowed through challenge after challenge during the move.  Awesome work!  You guys rock!

8 COMMENTS

  1. I would love to know more about the load balancer, and the reasons behind moving from WordPress to WordPressMU.

    Very cool stuff though. 🙂

  2. Great news! Did you do a cost comparison between ServerBeach dedicated and Amazon EC2 virtual servers?
    EC2 probably doesn’t fit your application model, but I was curious if you looked into it.

  3. @David – We’ll post more about nginx as we get to know what it can and can’t do. We were referred to it by Barry at WordPress.com, who has been using it for their infrastructure. Barry has a post with more info at http://barry.wordpress.com/2008/04/28/load-balancer-update/

    @MikeD – We did look at Amazon for hosting static content, but we really needed more control over our server infrastructure than AWS provides.

  4. Hey Joe,

    Glad to hear you got off of NFS. I take responsibility for putting it in, though in my defense there were only 3 servers at the time 😉

    Did batcache do anything about the PHP opcode cache sizes? Before I left, we had to stuff 250 copies of WordPress in memory; not sure if my fix ever got implemented or not.

    Sean

  5. Hey guys, well done! Minimal fallout on the front end (a few lockouts, but hey). Thanks for all the hard work!

    Looking forward to that performance increase…

    Sime (DPS ADMIN)

  6. @Sean
    batcache doesn’t do much to alleviate the extremely wide variety of PHP scripts that get hit; that’s still an issue. Our solution is to migrate to WordPress MU and be done with the 300-installation issue once and for all 😉

    Lee
