First-generation data center x86 server virtualization took existing hardware and made it more efficient. Organizations that take the next step, from consolidation to virtual operations, will run much bigger virtual machines on increasingly beefy physical host systems.
How big? At the VMworld 2011 conference in the late summer, VMware ushered in the era of the giant virtual machine, which can be configured with up to 32 virtual CPU cores and 1TB of RAM. As revealed Sept. 14 at the Microsoft BUILD conference, Windows Server 8 is said to be able to create virtual machines that likewise provide up to 32 virtual CPU cores, along with up to 512GB of RAM. And the newly released developer preview of Hyper-V, as implemented in Windows Server 8, is designed for physical systems equipped with up to 160 logical processors and up to 2TB of physical RAM.
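To put those maximums in perspective, here is a minimal back-of-the-envelope sketch (in Python) of how many of the largest announced guests could run on one of the largest announced hosts. The figures come straight from the announcements above; the calculation deliberately ignores hypervisor overhead, CPU overcommitment and NUMA placement, so treat it as illustration rather than sizing guidance.

```python
# Back-of-the-envelope capacity check for one maxed-out Hyper-V host,
# using the Windows Server 8 developer preview figures cited above.
# Ignores hypervisor overhead, overcommitment and NUMA effects.

HOST_LOGICAL_PROCESSORS = 160      # per-host maximum in the developer preview
HOST_RAM_GB = 2 * 1024             # 2TB of physical RAM

VM_VCPUS = 32                      # largest announced guest: 32 virtual CPUs
VM_RAM_GB = 512                    # ... and 512GB of RAM

# Without overcommitting, the tighter of the two limits wins.
fit_by_cpu = HOST_LOGICAL_PROCESSORS // VM_VCPUS   # 5 guests by CPU
fit_by_ram = HOST_RAM_GB // VM_RAM_GB              # 4 guests by RAM

print(f"CPU-bound limit: {fit_by_cpu} VMs; RAM-bound limit: {fit_by_ram} VMs")
print(f"Giant VMs per host (no overcommit): {min(fit_by_cpu, fit_by_ram)}")
```

Even at these extremes, a single host comfortably runs only a handful of the very largest guests, and in this example RAM, not CPU, is the limit reached first.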
While data center virtual computing is a mix of CPU, memory, networking and storage, I’m confining myself here to the physical components “inside the box”: CPU and RAM. The bottom line for IT managers is the return of best-practice concepts honed in the mainframe era but now given a uniquely x86 twist. Reliability, availability and serviceability in the virtual data center are expressed in commodity servers that are scaled out, beefed up and chained together so that a single physical machine or component can fail without bringing the business to a halt.
What will physical servers look like in just two or three years? At a Microsoft developer preview workshop at the company’s headquarters earlier this month, officials said that guidance from hardware OEMs indicated that commodity servers would have between six and 64 logical cores, 64GB of RAM and a 40Gbit network interface card, and would cost $300 to $1,500.
VMware user George L. Reed, CIO at Seven Corners, a travel and specialty insurance provider based in Carmel, Ind., shared his thoughts about next-generation data center servers.
“We took 100 physical servers and virtualized them [in November 2010],” Reed explained. “Today, we have about 150 virtual servers running. Working with our systems integration partner, we’ve moved onto a Cisco Unified Computing System/VMware vSphere implementation.
“We are looking at adding additional cloud equipment into the racks that were emptied when we did our initial virtualization. We are looking at applications that add picture and media and content. We are building today for a petabyte of data storage and a commensurate rise in CPU count,” Reed said.
Brains Count, Speed Matters
CPU socket counts are likely to stay relatively low as logical core counts rise and the amount of RAM burgeons in the data center servers of tomorrow. A wave of virtualization that started around 2006 is still building strength as the number of virtual servers continues to increase. At the same time, virtualization is pushing up the capacity of physical data center servers.
According to Malcolm Ferguson, enterprise solution architect at HP, typical rack-mount and blade-server configurations are changing. “These servers had [in 2006] an average of 4GB of RAM where today they can have 200GB RAM, on average.” Some current server designs can accommodate 256GB or even 1TB of RAM, and according to Ferguson, RAM requirements, especially for virtual machines, have jumped. “It was common for applications to specify from 2 to 4GB of RAM before the wide-scale adoption of virtualization. Now, it’s not uncommon to see 24, 48, 96GB of RAM specified for single servers.”
Increasing RAM use was behind much of the concern over VMware’s revised licensing terms. The so-called “RAM tax” in VMware’s initial vSphere 5.0 license revision based the license fee on the amount of vRAM allocated among virtual machines (which might be less than the total amount of physical RAM installed). VMware responded to user concerns and modified the initial license terms while retaining the basic premise of using vRAM allocation as the basis for license fees. It remains to be seen how this non-technical aspect of server virtualization will affect physical server configurations of the future.
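For readers who want to see the mechanics, here is a hedged sketch of how an allocation-based fee differs from one tied to installed RAM or sockets. The entitlement figure and the per-socket minimum below are assumptions chosen for illustration, not VMware’s published terms; only the general mechanism, that fees track allocated vRAM, which can be less than physical RAM, comes from the licensing change described above.

```python
# Illustrative sketch of allocation-based ("vRAM") licensing versus physical RAM.
# The entitlement and per-socket minimum below are hypothetical placeholders,
# not VMware's published figures; the point is the mechanism, not the numbers.
import math

PHYSICAL_RAM_GB = 512             # RAM installed in the host
VRAM_ALLOCATED_GB = 384           # vRAM actually assigned to powered-on VMs
ENTITLEMENT_PER_LICENSE_GB = 96   # hypothetical vRAM entitlement per license
CPU_SOCKETS = 2                   # assume at least one license per socket

# Under an allocation-based model, fees track what is assigned to VMs,
# which can be less than the physical RAM in the box.
licenses_by_vram = math.ceil(VRAM_ALLOCATED_GB / ENTITLEMENT_PER_LICENSE_GB)
licenses_needed = max(licenses_by_vram, CPU_SOCKETS)

print(f"vRAM allocated: {VRAM_ALLOCATED_GB}GB of {PHYSICAL_RAM_GB}GB physical")
print(f"Licenses required under this hypothetical entitlement: {licenses_needed}")
```

Under such a model, the allocation decisions an IT manager makes, rather than the physical memory installed, drive the fee, which is why the change drew so much attention as RAM counts climbed.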
Physical Traits Evolve
Already as a result of virtualization, servers from Dell, HP and other manufacturers have added Secure Digital (SD) memory card slots inside the chassis to rapidly boot the physical system into the enterprise hypervisor of choice. While the manufacturers I talked to wouldn’t speculate on other hardware changes planned for the future, IT managers should expect hardware add-on components that speed hypervisor operations.
It is clear that reliability, availability and serviceability will only grow in importance as the concentration of virtual machines on physical systems increases. Chad Fenner, director of product marketing for Dell’s PowerEdge server portfolio, and HP’s Ferguson both pointed out areas where their respective (and competing) server-management tools can help make operations more predictable.
For example, preemptive hardware failure notification features already found in Insight Control help make the hardware serviceable and available. On-board server hardware sensors monitor memory, disk and CPU performance and keep watch for symptoms that indicate an imminent fault; expect this kind of monitoring to become more common across data center servers. Servers that host production workloads will increasingly pair this telemetry with integration into the hypervisor monitoring tools that are already available.
Besides hardware modifications, server virtualization has driven growth in server-management tools. Both Dell, with OpenManage, and HP, with Insight Control, today offer integration with VMware vCenter and Microsoft System Center Virtual Machine Manager. IT managers should expect to see even greater integration with these management platforms, along with tie-ins to virtual machine lifecycle-management tools that show operational costs alongside service levels. The bigger physical and virtual machines of the near future will also be among the most monitored and measured systems in enterprise operations.