Part of my job entails running research computing projects on Microsoft Windows Server 2008/R2 virtual machines, which my specialized hosts custom-configure to support changing requirements. While the specifics differ, the core challenges I face are the same as those confronting any enterprise IT leader deciding how best to virtualize a diverse array of applications. The key to success, in both cases, is the freedom to experiment – and ultimately to embrace diverse configurations – without undue cost.
This occurred to me today as I was re-reading Microsoft’s January white paper on Private Cloud, which undertakes an exhaustive technology breakdown and cost comparison between Microsoft Private Cloud (MPC) and VMware. The paper shows that Microsoft’s decision to license System Center on a per-processor basis (rather than per-VM or by memory footprint), and to bundle licensing for multiple products underneath it, helps customers build dense, efficient private and hybrid clouds far more cost-effectively. Here, the freedom to explore diverse VM configurations cheaply makes for a more robust operation, better performance, and happier end users.
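To make the density argument concrete, here is a minimal back-of-envelope sketch (in Python, purely for the arithmetic) of how a per-processor licensing model diverges from a per-VM model as consolidation increases. The host counts and prices are hypothetical placeholders invented for illustration; they are not figures from the white paper, nor actual Microsoft or VMware list prices.

```python
# Illustrative comparison of per-processor vs. per-VM management licensing
# as VM density grows. All prices and counts are hypothetical placeholders,
# NOT actual Microsoft or VMware list prices.

HOSTS = 4                 # physical hosts in the cluster (assumed)
PROCS_PER_HOST = 2        # licensed processors per host (assumed)
PER_PROC_LICENSE = 3500   # hypothetical cost per processor license
PER_VM_LICENSE = 600      # hypothetical cost per managed VM

def per_processor_cost(vms_per_host: int) -> int:
    # Cost is fixed by the hardware, regardless of how many VMs you run.
    return HOSTS * PROCS_PER_HOST * PER_PROC_LICENSE

def per_vm_cost(vms_per_host: int) -> int:
    # Cost scales linearly with VM count, so denser hosts cost more to manage.
    return HOSTS * vms_per_host * PER_VM_LICENSE

for density in (4, 8, 16, 32):
    print(f"{density:>2} VMs/host: per-proc ${per_processor_cost(density):,}"
          f" vs per-VM ${per_vm_cost(density):,}")
```

Whatever the real numbers turn out to be, the shape of the comparison is the point: the per-processor cost stays flat as you pack more VMs onto the same hardware, so experimenting with denser configurations carries no incremental licensing penalty.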
Being able to play around with VMs without being ‘taxed’ for it is what makes clouds revolutionary: you can tune resources to tasks, dial plenty of power and privilege into mission-critical apps, and experiment with thinning out support for more peripheral functionality until you find settings that work well and produce consistent ROI on the cloud investment.
Microsoft’s Harold Wong, an IT Pro Evangelist for the U.S. Southwest and a frequent contributor on TechNet, has had a lot to say about this recently from several perspectives relevant to MS Private Cloud. On February 20, he tackled the commonsense mechanics of virtualizing Tier 1 biz-critical applications (http://blogs.technet.com/b/haroldwong/archive/2012/02/20/virtualization-is-it-really-possible-to-virtualize-tier-1-business-critical-apps.aspx), including in the process a lot of very useful reminders about how server hardware gets specified in enterprises (often by the database team, which may or may not end up providing ideal platforms for virtualization hosts).
A few days later, he followed with a detailed discussion of how a business might go about configuring VMs to serve required workloads (http://blogs.technet.com/b/haroldwong/archive/2012/02/24/right-sizing-virtual-machines-is-it-really-important.aspx), including Exchange Server 2010. He makes the point (and backs it with numerous data points) that for most businesses, diverse configurations will be the norm, and that until you’ve exhaustively evaluated the workloads, the hardware, and the requirements (and played around), you can’t know precisely in advance where the ideal balance point between performance (or simple ‘reliable functionality’) and ROI will land.
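In that spirit, here is a minimal right-sizing sketch, assuming hypothetical host capacities and VM profiles (these are not numbers drawn from Harold Wong’s posts). It simply checks whether CPU or memory becomes the binding constraint for each candidate configuration, the kind of first-pass estimate you would then refine by measuring real workloads.

```python
# A minimal right-sizing sketch: given a host's usable capacity and a set of
# candidate VM profiles, estimate how many of each fit per host. Capacities
# and profiles below are hypothetical, purely for illustration; the point is
# that the binding constraint (CPU vs. RAM) shifts with the workload mix.

HOST_CORES = 16      # usable logical cores after hypervisor overhead (assumed)
HOST_RAM_GB = 96     # usable RAM after host reserve (assumed)

vm_profiles = {
    "exchange_mailbox": {"vcpus": 4, "ram_gb": 16},  # hypothetical Tier 1 profile
    "web_frontend":     {"vcpus": 2, "ram_gb": 4},   # hypothetical peripheral workload
    "batch_worker":     {"vcpus": 1, "ram_gb": 2},
}

for name, p in vm_profiles.items():
    by_cpu = HOST_CORES // p["vcpus"]
    by_ram = HOST_RAM_GB // p["ram_gb"]
    fit = min(by_cpu, by_ram)
    limit = "CPU" if by_cpu < by_ram else "RAM"
    print(f"{name}: ~{fit} per host (limited by {limit})")
```

Even a toy calculation like this shows why diverse configurations end up being the norm: change the profile mix and the bottleneck moves, which is exactly why the freedom to iterate matters.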
For these and other reasons, the freedom to experiment, monitor, and optimize without cost penalties is a good and necessary thing, and it is even more critical to small and mid-sized businesses than to large enterprises. A very large datacenter inevitably enjoys economies of scale, whereas a smaller one, serving a smaller business, needs to do more or less all the same things on a smaller hardware footprint, so it will typically show more statistical diversity of configuration across any given span of hardware and VMs than its larger cousin.
John Jainschigg is a contributing editor to Slashdot and SourceForge.