Dear John

No, this is not a farewell post, but rather the opposite. It's Dr. John Troyer's birthday!

John lives and breathes his role as Senior Social Media Strategist at VMware, and I have to say that one of the most brilliant moves VMware has made is to employ John in his current role. Lots of other corporations put marketing people in their social media roles; VMware went the other way and put the very technically savvy Dr. John in the driving seat. To be honest, I don't think he even has a rear-view mirror, as he's constantly evolving and moving forward. Well played VMware, and extremely well executed John.

The vExpert program, which I'm lucky enough to be a part of for 2011, is his brainchild, and if there ever should be an honorary vExpert awarded to someone who goes above and beyond his job role, it should go to John himself.

In fact, John's presence in the social media space has helped immensely in creating the community that revolves around VMware products and virtualization as a whole. John has been, and continues to be, instrumental in keeping everyone in the loop and helping out wherever he possibly can.

On a more personal level, I've been lucky enough to meet the mammoth wookie on more than one occasion, first at VMworld 2010 in Copenhagen and then again (a couple of times) during Tech Field Day #6 in Boston in 2011. As the good sport John is, he even contributed to a special vSoup episode! John even contacted me way back in 2006, when the virtualization blogosphere was in its infancy, asking if I wanted to have my old site featured on V12n!

Being an ignorant European, I'm possibly stepping on a lot of toes here, but as soon as I see Dr. John Troyer written somewhere, I immediately think of Dr. J. Dr. J was a four-time MVP and was inducted into the NBA Hall of Fame in 1993.

If there ever was a VMware Hall of Fame board somewhere in Palo Alto, Dr. John should have his employee number retired and his beard put on display.

John, this one is for you. You've been an inspiration, mentor and all-around great guy for years and years on end. A while ago, you asked on Twitter if ThinApp was VMware's best kept secret. The real answer to that is a big fat NO. John, you're the secret, and for purely selfish reasons I hope the VMware management never finds out and kicks you further up the VMware food chain. We small fish need large wookies to keep tabs on us and help us feel all cozy and warm.

Happy birthday John!

SMB Shared Storage Smackdown – Part 1: NFS Performance

Recently at the office I was given the task of testing out some SMB NAS products as potential candidates for some of our small branch offices all over the world. I did many tests, ranging from backup and replication to actually running VMs on them and pounding them with IOmeter. What I will share with you in this series of posts is my vSphere/IOmeter tests for NFS and iSCSI. With these tests my main goal was to see what kinds of IO loads the NAS devices could handle and also how suitable they would be for running a small vSphere environment. In my next post I will go into iSCSI performance and will publish all of my results, including NFS, in a downloadable PDF.

NAS Devices

  • Synology DS411+
  • NetGear ReadyNAS NV+
  • QNAP TS-439 Pro II+
  • Iomega StorCenter ix4-200d

I chose the 4-bay model of each array, and they are all filled with 1 TB SATA disks. I chose this configuration since we would also be using these devices to hold rather large backups and replicate them elsewhere.

To start out with, I upgraded all of them to the latest firmware and created a RAID 5 array on each of them. To make a long story short, this gave me anywhere from 2.5 TB – 2.8 TB of usable storage on each device. Since I tested both NFS and iSCSI, I first created a 1 TB iSCSI LUN (1 MB block size on the datastore) on each device and then created an NFS export for the remainder of the space. Another small note: I made sure that write caching was enabled on all arrays that had an option to turn it on or off. Then I got down to setting up vSphere and the rest of my hardware.
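
If you are wondering where that 2.5 TB – 2.8 TB figure comes from, here is a quick back-of-the-envelope sketch. This is my own arithmetic, not anything published by the vendors, and the exact numbers vary with filesystem and firmware overhead:

```python
# Rough sketch of why four "1 TB" SATA disks in RAID 5 land around 2.5-2.8 TB usable.
DISK_MARKETED_TB = 1.0       # 1 TB as sold = 10^12 bytes
BYTES_PER_TB = 10**12
TIB = 2**40                  # how the array/OS typically reports capacity

disks = 4
raid5_data_disks = disks - 1  # one disk's worth of capacity goes to parity

raw_bytes = raid5_data_disks * DISK_MARKETED_TB * BYTES_PER_TB
usable_tib = raw_bytes / TIB
print(f"Theoretical RAID 5 usable space: {usable_tib:.2f} TiB")
# Prints ~2.73 TiB; filesystem and firmware overhead push the real numbers
# on these units down into the 2.5-2.8 TB range I saw.
```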

Server/VM/Network Infrastructure

  • 1 HP DL380 G5 – 2 quad-core pCPUs (16 logical cores with HT enabled), 16 GB of pRAM – ESXi 4.1 U1 installed with the default configuration
  • Win2k8 VMs on each NAS device – 24 GB boot VMDK and a 100 GB VMDK with the standard LSI Logic SAS controller, 1 vCPU and 4096 MB of vRAM
  • Dell PowerConnect 5524 switch – split into a VLAN for VM/vSphere management traffic and a VLAN for iSCSI/NFS traffic

I began the task of plugging everything in, getting everything set up properly so as not to skew any results, and spinning up VMs in the datastores from the attached iSCSI LUNs/NFS exports. It is important to note that for each shared storage datastore I created a new VM from a template to the exact specifications above and aligned all disks to 64k. For the connection to the storage I only had 1 extra 1 GbE NIC per server, so in ESXi I created a separate standard vSwitch just for iSCSI/NFS traffic. If you are interested in the setup of my lab infrastructure, please contact me and I will be happy to go more in depth.
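
For anyone unfamiliar with disk alignment, the little sketch below (my own illustration, not a VMware utility) shows what "aligned to 64k" boils down to: the partition's starting offset must be an even multiple of 65,536 bytes so guest I/O lines up with the array's chunk boundaries instead of straddling them:

```python
# Minimal illustration of a 64k alignment check.
ALIGNMENT_BYTES = 64 * 1024

def is_aligned(start_offset_bytes: int) -> bool:
    """Return True if a partition starting at this byte offset is 64k-aligned."""
    return start_offset_bytes % ALIGNMENT_BYTES == 0

# Typical examples: the old default start at sector 63 is misaligned,
# while a partition started at sector 128 (128 * 512 bytes) is aligned.
print(is_aligned(63 * 512))    # False
print(is_aligned(128 * 512))   # True
```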


Testing Method and Results Collection

After playing around with IOmeter for some time and searching around to find a standard, I decided to use the tests from the very popular VMware Communities open unofficial storage performance thread. The exact ICF file I used can also be found there for download if you would like to do some of your own tests. Regardless of the age of some of the posts, I still think this is the most relevant and fair measure possible. These tests include:

  • Max Throughput-100% Read
  • RealLife-60% random – 65% Read
  • Max Throughput-50% Read
  • Random-8k 70% Read

The first important thing to mention is that only the VM generating the load was powered on during each test, so there should be no skewing from other workloads. I put the IOmeter results into a set of Excel bar graphs. I decided to base my results on what I call the big 3: Total IOps, MBps, and Average Latency.
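
If you want to crunch your own runs before charting them, here is a small sketch of how I like to boil things down to the big 3. It assumes you have already exported each run's totals into a simple CSV of my own devising with columns test, iops, mbps and avg_latency_ms; it does not parse IOmeter's native result file format:

```python
# Summarize the "big 3" (IOps, MBps, average latency) per test from a simple CSV.
import csv
from collections import defaultdict

def summarize(path: str) -> dict:
    """Average IOps, MBps and latency per test name across however many runs were logged."""
    sums = defaultdict(lambda: [0.0, 0.0, 0.0])
    counts = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            s = sums[row["test"]]
            s[0] += float(row["iops"])
            s[1] += float(row["mbps"])
            s[2] += float(row["avg_latency_ms"])
            counts[row["test"]] += 1
    return {t: [v / counts[t] for v in s] for t, s in sums.items()}

if __name__ == "__main__":
    # "nfs_results.csv" is a placeholder name for your own export.
    for test, (iops, mbps, lat) in summarize("nfs_results.csv").items():
        print(f"{test:35s} {iops:10.1f} IOps {mbps:8.1f} MBps {lat:8.2f} ms")
```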

NFS Final Performance Results

Max Throughput-100% Read

RealLife-60% random – 65% Read

Max Throughput-50% Read

Random-8k 70% Read
Final Conclusions for NFS and Closing

As you may have noticed from the assorted graphs, the two winners to come out of the NFS performance tests were QNAP and Synology. QNAP appears to be slightly better at more random workloads, such as a real-life vSphere environment, and Synology seems to be way ahead of most of the arrays with solid sequential storage access – which would be perfect for a backup device. However, they all seem to have high Average Response Times during random workloads, which in my opinion makes or breaks how well an environment runs. From this first look I would say most of these NAS devices would be just fine for shared storage in a very small lab environment, a possible backup target, or for something as simple as a file server volume. In my next post we can put it all together with the iSCSI results and declare the final winner of the SMB Shared Storage Smackdown!!!


vSphere 5 and new licensing – Good or bad?

Like many of you, I watched today's Cloud Infrastructure Forum and the release of vSphere 5. I was very excited by many of the features, such as Storage Profiling, Storage DRS, the VMFS 5 release, and the fact that they have blown the top off the resource limits on VMs to create Monster VMs – just to mention a few. However, one topic I noticed causing quite a stir is the new licensing, which seemed to be only briefly mentioned at the end of the webinar. To quote VMware from page 3 of the vSphere 5 licensing guide:


vSphere 5.0 will be licensed on a per-processor basis with a vRAM entitlement. Each vSphere 5.0 CPU license will entitle the purchaser to a specific amount of vRAM, or memory configured to virtual machines. The vRAM entitlement can be pooled across a vSphere environment to enable a true cloud or utility based IT consumption model. Just like VMware technology offers customers an evolutionary path from the traditional datacenter to cloud infrastructure, the vSphere 5.0 licensing model allows customers to evolve to a cloud-like “pay for consumption” model without disrupting established purchasing, deployment and license-management practices and processes.


This caused quite an uproar on Twitter from people complaining that it would raise their licensing costs. My personal opinion on the new licensing is both negative and positive. For every negative side I see in something, I always try to put a positive spin on it. Firstly, it is true that this may cause some highly consolidated shops to have to reassess their infrastructure before they upgrade to vSphere 5. It may require the purchase of more licenses to obtain more pooled vRAM and stay on the legal side of the licensing. It may also slow adoption, as people have to perform audits on their infrastructure to determine what will be needed under the new licensing model. Also, for some of the big memory-packed beast servers this may prove to be a disadvantage. From what I gather from the vSphere 5 licensing guide, there is no hard limit and vSphere will not stop you from deploying VMs under any licensing model except Essentials, which does have a hard limit.
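
To make the pooling idea concrete, here is a toy sketch of the kind of audit math involved. The per-CPU entitlement value, host names and memory figures below are placeholders I picked purely for illustration, not the official numbers for any particular edition – check the licensing guide for your edition's actual entitlement:

```python
# Toy vRAM pool calculation under the vSphere 5 per-CPU entitlement model.
VRAM_PER_CPU_LICENSE_GB = 32  # hypothetical entitlement per licensed CPU

hosts = [
    {"name": "esx01", "cpus": 2, "configured_vram_gb": 96},
    {"name": "esx02", "cpus": 2, "configured_vram_gb": 40},
]

licensed_cpus = sum(h["cpus"] for h in hosts)
pool_gb = licensed_cpus * VRAM_PER_CPU_LICENSE_GB
used_gb = sum(h["configured_vram_gb"] for h in hosts)

print(f"Pooled vRAM entitlement: {pool_gb} GB")
print(f"vRAM configured to powered-on VMs: {used_gb} GB")
if used_gb > pool_gb:
    # Ceiling division: how many extra CPU licenses would cover the shortfall.
    extra = -(-(used_gb - pool_gb) // VRAM_PER_CPU_LICENSE_GB)
    print(f"Over the pool by {used_gb - pool_gb} GB -> roughly {extra} more CPU license(s)")
else:
    print("Within the pooled entitlement")
```
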

On a positive note; as a vSphere Admin this licensing may make my life easier.  When application owners realize that there is a charge based on memory use and they may need to sign a purchase order to get their oversized machine approved instead of making their application more efficient they may change their tune a bit.  This means less vm sprawl and more focus on what exactly is running in the environment and is it running at its absolute best and most efficient. Also  If there is a zombie VM comsuming some valuable vRAM I am sure it will also be found and dispatched more quickly than with the current licensing model.