Is Your Blade Ready for Virtualization? A Math Lesson.

I attended the second day of the HP Converged Infrastructure Roadshow in NYC last week. Most of the day was spent watching PowerPoints and demos for the HP Matrix stuff and Virtual Connect. Then came lunch. I finished my appetizer and realized that the buffet being set up was for someone else. My appetizer was actually lunch! Thank God there was cheesecake on the way…

There was a session on unified storage, which mostly covered the LeftHand line. At one point, I asked if the data de-dupe was source based or destination based. The “engineer” looked like a deer in the headlights and promptly answered “It’s hash based.” ‘Nuff said… The session covering the G6 servers was OK, but “been there done that.”

Other than the cheesecake, the best part of the day was the final presentation. The last session covered the differences in the various blade servers from several manufacturers. Even though I work for a company that sells HP, EMC and Cisco gear, I believe that x64 servers are, from a hardware perspective, mostly generic. Many will argue why their choice is the best, but most people choose a brand based on relationships with their supplier, the manufacturer or the dreaded “preferred vendor” status. Obviously, this was an HP-biased presentation, but some of the math the BladeSystem engineer (I forgot to get his name) presented really makes you think.

Let's start with a typical configuration for VMs. He mentioned that this was a “Gartner recommended” configuration for VMs, but I could not find anything about it online. Even so, it's a pretty fair portrayal of a typical VM (I sketch the profile in code right after the list below).

Typical Virtual Machine Configuration:

  • 3-4 GB Memory
  • 300 Mbps I/O
    • 100 Mbps Ethernet (0.1Gb)
    • 200 Mbps Storage (0.2Gb)
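
For reference, here is that per-VM profile as a minimal Python sketch (the variable names are mine, not anything from the presentation); it just confirms that the Ethernet and storage figures add up to the 300 Mbps total used in the math below.

```python
# Per-VM profile from the list above.
VM_MEMORY_GB = 3            # low end of the 3-4 GB range; used for the math below
VM_ETHERNET_GBPS = 0.1      # 100 Mbps
VM_STORAGE_GBPS = 0.2       # 200 Mbps
VM_IO_GBPS = VM_ETHERNET_GBPS + VM_STORAGE_GBPS   # 0.3 Gbps, i.e. 300 Mbps total

assert abs(VM_IO_GBPS - 0.3) < 1e-9   # Ethernet + storage = total I/O
```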

Processor count was not discussed, but you will see that it may not be a big deal, since most processors are overpowered for today's applications (I said MOST). IOPS is not a factor in these comparisons either; that would be a function of the storage system.

So, let's take a look at the typical server configuration. In this article we are comparing blade servers, but this configuration is typical even for a “2U” rack server. He called this an “eightieth percentile” server, meaning it will meet the requirements of 80% of server workloads.

Typical Server Configuration:

  • 2 Sockets
    • 4-6 cores per socket
  • 12 DIMM slots
  • 2 Hot-plug Drives
  • 2 LAN on Motherboard (LOM) ports
  • 2 Mezzanine Slots (or PCIe slots)

Now, say we take this typical server and load it with 4GB or 8GB DIMMs. This is not a real stretch of the imagination, and it gives us 48GB or 96GB of RAM. Now it's time for some math (the quick sketch after each list double-checks the arithmetic):

Calculations for a server with 4GB DIMMs:

  • 48GB Total RAM ÷ 3GB Memory per VM = 16 VMs
  • 16 VMs ÷ 8 cores = 2 VMs per core
  • 16 VMs * 0.3Gb per VM = 4.8 Gb I/O needed (x2 for redundancy)
  • 16 VMs * 0.1Gb per VM = 1.6Gb Ethernet needed (x2 for redundancy)
  • 16 VMs * 0.2Gb per VM = 3.2Gb Storage needed (x2 for redundancy)
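
If you want to check this arithmetic yourself, here is a rough helper in Python (my own sketch, not HP's or Gartner's math) that reproduces the 4GB-DIMM numbers; like the raw calculation above, it ignores hypervisor overhead for now.

```python
def vm_capacity(total_ram_gb, cores, ram_per_vm_gb=3,
                eth_per_vm_gbps=0.1, storage_per_vm_gbps=0.2):
    """Rough per-host VM count and I/O needs (no hypervisor overhead yet)."""
    vms = total_ram_gb // ram_per_vm_gb                # RAM-limited VM count
    eth = round(vms * eth_per_vm_gbps, 1)              # Gb Ethernet (x2 for redundancy)
    storage = round(vms * storage_per_vm_gbps, 1)      # Gb storage (x2 for redundancy)
    return vms, vms / cores, eth, storage, round(eth + storage, 1)

print(vm_capacity(48, 8))   # (16, 2.0, 1.6, 3.2, 4.8) -- 16 VMs, 2 per core
```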

Calculations for a server with 8GB DIMMs:

  • 96GB Total RAM ÷ 3GB Memory per VM = 32 VMs
  • 32 VMs ÷ 8 cores = 4 VMs per core
  • 32 VMs * 0.3Gb per VM = 9.6Gb I/O needed (x2 for redundancy)
  • 32 VMs * 0.1Gb per VM = 3.2Gb Ethernet needed (x2 for redundancy)
  • 32 VMs * 0.2Gb per VM = 6.4Gb Storage needed (x2 for redundancy)
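
The same sketch reproduces the 8GB-DIMM case:

```python
print(vm_capacity(96, 8))   # (32, 4.0, 3.2, 6.4, 9.6) -- 32 VMs, 4 per core
```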

Are you with me so far? I see nothing wrong with any of these yet.

Now, we need to look at the different attributes of the blades:

[Table: blade attribute comparison for the Cisco, Dell, IBM and HP blades]

* The IBM LS42 and HP BL490c each have 2 internal non-hot-plug drive slots

The “dings” against each:

  • Cisco B200M1 has no LOM and only 1 mezzanine slot
  • Cisco B250M1 has no LOM
  • Cisco chassis only has one pair of I/O modules
  • Cisco chassis only has four power supplies – may cause issues using 3-phase power
  • Dell M710 and M905 have only 1GbE LOMs (Allegedly, the chassis midplane connecting the LOMs cannot support 10GbE because it lacks a “back drill.”)
  • IBM LS42 has only 1GbE LOMs
  • IBM chassis only has four power supplies – may cause issues using 3-phase power

Now, from here, the engineer made comparisons based on loading each blade with 4GB or 8GB DIMMs. Basically, some of the blades would not support a full complement of VMs based on a full load of DIMMs. What does this mean? Don't rush out and buy blades loaded with DIMMs, or your memory utilization could be lower than expected. What it really means is that you need to ASSESS your needs and DESIGN an infrastructure based on those needs. It seems to me that it makes more sense to consider this in the design stage, so that you can come up with some TCO numbers for each vendor.

What I will do is give you a maximum number of VMs per blade and per chassis. We will look at the maximum number of VMs for each blade based on total RAM capability and total I/O capability; the lower of the two becomes the total possible VMs per blade for that configuration. To simplify things, I took the total possible RAM, subtracted 6GB for the hypervisor and overhead, and divided by 3 to get the number of 3GB VMs each blade could host. I also took the size specs for each chassis, calculated the maximum possible chassis per rack, and then calculated the number of VMs per rack. The chassis-per-rack number does not account for top-of-rack switches; if those are needed you may lose one chassis per rack, though most of these systems will allow for an end-of-row or core switching configuration.
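
To make that method concrete, here is a rough Python sketch of the calculation (my own code; the blade and chassis inputs would come from each vendor's spec sheets, and 6GB of hypervisor overhead is the same rough estimate described above).

```python
import math

def max_vms_per_blade(max_ram_gb, redundant_io_gb,
                      overhead_gb=6, ram_per_vm_gb=3, io_per_vm_gb=0.3):
    """Lower of the RAM-based and I/O-based VM counts for a single blade."""
    ram_limited = (max_ram_gb - overhead_gb) // ram_per_vm_gb
    io_limited = math.floor(redundant_io_gb / io_per_vm_gb)
    return int(min(ram_limited, io_limited))

def vms_per_rack(vms_per_blade, blades_per_chassis, chassis_per_rack):
    """Scale the per-blade number up to a rack (ignores top-of-rack switches)."""
    return vms_per_blade * blades_per_chassis * chassis_per_rack
```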

Blade Calculations

[Table: maximum VMs per blade and per rack for each blade model]

One thing to remember is that this is a quick calculation. It estimates the RAM required for the hypervisor and overhead at 6GB, and it is by no means based on numbers from a real assessment. The reason the Cisco B250M1 blade is capped at 66 VMs is the amount of I/O it can support: 20Gb of redundant I/O ÷ 0.3Gb of I/O per VM = 66 VMs.
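
Plugging numbers like the B250M1's into the sketch above shows the same thing (I'm assuming a full load of 8GB DIMMs in its 48 slots for 384GB; treat that figure as illustrative):

```python
print(max_vms_per_blade(max_ram_gb=384, redundant_io_gb=20))   # 66 -- I/O-bound, not RAM-bound
```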

I set out on this exercise to take the ideas from an HP engineer and, as best I could, be fair in my version of the presentation. I did not even know what the outcome would be, but I am pleased to find that HP blades offer the highest VM-per-rack numbers.

The final part of the HP presentation dealt with cooling and power comparisons. One thing I was surprised to hear, but have not confirmed, is that the Cisco blades want to draw more air (in CFM) than one perforated floor tile will allow. I will not even get into the “CFM per VM” or “watt per VM” numbers, but they also favored the HP blades.

Please, by all means, challenge my numbers. But back up your challenge with numbers of your own.