All posts by Ed Czerwin

Ed is a Systems Engineer specializing in Virtualization and Storage for a large Medical Devices company based near Zurich, Switzerland. He is originally from Chicago and relocated to Europe to aid with a project moving a large SAP environment from the US to CH. Over the past 10 years he has gained experience in Support and Systems Administration, and finally in infrastructure design/engineering. He currently holds VCP 4, CCA, and MCP certifications. He can be reached at @eczerwin on Twitter.

Shanghai… we have a problem…

My Saturday morning started the same as any other.  I checked my emails and my tweets, started a coffee, walked my dog and got into the shower.  My iPhone buzzing on the sink caught my attention a few minutes into it. Covered in soap (the rest censored for the public), I answered the call.  Without getting into too many details of my organization – my boss's boss's boss contacted me reporting a fire in one of our server rooms in Shanghai, China. Trying not to panic, I got it together and agreed to meet and discuss ASAP.


For privacy reasons let's cut to Monday. I drove to the Chinese Embassy that morning here in Zurich and begged for a visa, as my plane was leaving at 19:00 that evening. They laughed initially, since the normal processing time is 7 days.  When they noticed the seriousness of the situation they told me to return in one hour and I would be granted a one-year visa.


Cut to Monday night – I flew from Zurich to Charles de Gaulle in Paris, had a few problems, and ran across the entire airport, but in the end made my flight. This is normal for changing planes in Paris :). After I got on the plane I shut myself down and forced myself to sleep, because I knew I would have a big job on my plate when I arrived. I managed to get 4-6 hours of restless sleep and landed in Shanghai in the afternoon. I called the office to let them know about my arrival, they sent a car, and the fun began.


When I arrived at the office I found 10 seriously charred physical servers, some with cut-off, melted power plugs and Ethernet cables still in them.  I quickly asked the local team to place stickers on the priority servers and explain to me exactly which application/server was the most important to recover first. Again, without getting into too much detail – our backups there were “no longer available.”


I managed to get a critical DB running again by copying the RAID config to disk right before the server crashed again, switching the disks over to a loaner server, and writing the RAID config to its controller. From there I quickly began a P2V to a new server I had been provided, on which I installed vSphere 5 when I arrived. This was only one of the many Hail Marys I was able to complete that week.


At the end of the week – 72 hours of work, talking through translators, and a brief break for some rest later – I was able to recover all but the oldest server.  I turned 10 physical servers into 2 vSphere 5 hosts with local storage: better than nothing, and flexible enough to change later as needed.


The moral of this story is that in the face of disaster, one of the best tools in your belt is virtualization. You get flexibility that is normally not possible, and you can add more resources later as needed with minimal pain.  I know this goes back to basics, but sometimes we need to go back to basics to really refresh our thoughts on the technology.


My VCP 5 exam experience

Like many others I was a VCP 4 and needed to upgrade to VCP 5 by Feb 29th to avoid a pricey class and possible ribbing from my peers.  I had been well aware of this deadline since mid-December; however, I procrastinated on studying and was mostly flinging myself around the globe doing implementations and having an all-around good time.  When Feb 1st came I was sitting on a flight from Saigon to Frankfurt, and that is when panic struck.  I realized I had until the end of the month to finish the requirement.  I instantly pulled out my iPad and began frantically combing through the VCP 5 Blueprint and reading countless documents over the 12+ hour flight.

When I returned home I really began to crack the books.  When I was too tired to keep reading the official vSphere docs or playing in my lab, Cody Bunch’s Professional VMware Brownbags were perfect to sit back, listen to, and absorb some info.  They were really helpful, as they go through each point of the day’s specific objective and also offer insight from guests who had already taken the exam.  Another great resource is Andrea Mauro’s vInfrastructure VCP 5 notes – there was a ton of helpful information there.  Also not to be forgotten are MW Preston’s VCP 5 resources.  Finally, I would never study for a VCP exam without using the great practice exams available from Simon Long of the SLOG Blog.  There are many other great resources available, but too many to list here, as this is an experience post.

So after combing through all these resources for hours a night for a couple of weeks, I finally booked my exam for Feb 28th.  That is when the nerves really set in.  I started to doubt whether, with 3 weeks of studying, I had prepared enough, or whether I was going to be surprised with a lot of new content.  So I pushed on and continued reading – and possibly obsessing over particular objectives I was not 100% comfortable with – until the days ran out.

On test day I left work after lunch and drove to the test center.  During check-in I was a bit nervous and began having trouble speaking Swiss German to the testing center staff.  I sat down and first took the survey, which made me even more nervous knowing the test was coming soon.  When the test began and I got through the first 20 questions, my nerves began to lighten.  I realized that all the vSphere 5 implementations I had done recently, along with reading up on some of the new features, was the ticket.  Without getting too much into details, as I am bound by NDA, this exam was more about knowing the product and working with it on a day-to-day basis rather than straight memorization.  For the VCP 4 I remember spending countless hours memorizing Configuration Maximums and other things that promptly left my head after the test was completed.  After all, that is what the Configuration Maximums documents are for!

In conclusion, I think that for virtualization professionals who have their hands on the product every day, VCP 5 is much easier than VCP 4.  In the end I spent only an hour and 10 minutes on the exam and passed with a score I was highly pleased with.  My message to current VCP 4 holders is: go ahead and take a shot at the exam.  You might be pleasantly surprised with how VMware has changed the structure of its exams.


SMB Shared Storage Smackdown – Part 1: NFS Performance

Recently at the office I was given the task of testing out some SMB NAS products as potential candidates for some of our small branch offices all over the world.  I ran many tests, ranging from backup and replication to actually running VMs on the devices and pounding them with IOmeter.  What I will share with you in this series of posts are my vSphere/IOmeter tests for NFS and iSCSI. With these tests my main goal was to see what kinds of IO loads the NAS devices could handle and how suitable they would be for running a small vSphere environment.  In my next post I will go into iSCSI performance and will publish all of my results, including NFS, in a downloadable PDF.

NAS Devices

  • Synology DS411+
  • NetGear ReadyNAS NV+
  • QNAP TS4139 Pro II+
  • Iomega StorCenter ix4-200d

I chose the 4-drive model of each array, all filled with 1 TB SATA disks.  I chose this configuration since we would also be using these devices to hold rather large backups and replicate them elsewhere.

To start out, I upgraded all of them to the latest firmware and created a RAID 5 array on each.  To make a long story short, this gave me anywhere from 2.5 TB to 2.8 TB of usable storage on each device.  Since I tested both NFS and iSCSI, I first created a 1 TB iSCSI LUN (1 MB block size on the datastore) on each device, then created an NFS export for the remainder of the space.  Another small note: I made sure that write caching was enabled on every array that had an option to turn it on or off.  Then I got down to setting up vSphere and the rest of my hardware.
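
As a quick sanity check on that figure, here is a minimal Python sketch of the arithmetic (assuming classic single-parity RAID 5 and drives marketed in decimal terabytes, and ignoring filesystem overhead):

```python
# Rough sanity check on RAID 5 usable capacity for a 4 x 1 TB array.
# Assumes single-parity RAID 5 and "1 TB" marketed as 10^12 bytes;
# most NAS UIs then report the result in binary (TiB-style) units.
n_disks = 4
disk_tb = 1.0                          # marketed terabytes (10^12 bytes)
usable_tb = (n_disks - 1) * disk_tb    # RAID 5 gives up one disk to parity
usable_tib = usable_tb * 1e12 / 2**40  # convert to binary terabytes

print(f"{usable_tb:.0f} TB marketed ~= {usable_tib:.2f} TB as reported by the NAS")
# -> about 2.73 TB before filesystem overhead, in line with the 2.5-2.8 TB above
```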

Server/VM/Network Infrastructure

  • 1 HP DL380 G5 – 2 quad-core pCPUs (16 logical cores with HT enabled), 16 GB of pRAM – ESXi 4.1 U1 installed with the default configuration
  • Win2k8 VMs on each NAS device – 24 GB boot VMDK, 100 GB data VMDK on the standard LSI Logic SAS controller, 1 vCPU and 4096 MB vRAM
  • Dell PowerConnect 5524 switch – split into one VLAN for VMs/vSphere management and one VLAN for iSCSI/NFS traffic

I began the task of plugging everything in, getting everything set up properly so as not to skew any results, and spinning up VMs in the datastores backed by the attached iSCSI LUNs/NFS exports.  It is important to note that for each shared storage datastore I created a new VM to the exact specifications above via template and aligned all disks to 64 KB. For the connection to the storage I had only one extra 1 GbE NIC per server, so in ESXi I created a separate standard vSwitch just for iSCSI/NFS traffic.  If you are interested in the setup of my lab infrastructure, please contact me and I will be happy to go more in depth.
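
On the alignment point, here is a minimal sketch of the kind of check I mean (the offsets are hypothetical examples; on Win2k8 the real StartingOffset values would come from diskpart or WMI):

```python
# Check whether a partition's starting offset lands on a 64 KB boundary.
# A legacy 63-sector start (63 * 512 = 32,256 bytes) is misaligned, while a
# 1 MiB start is 64 KB aligned; misalignment can cost extra IO on RAID-backed
# storage.

def is_64k_aligned(starting_offset_bytes: int) -> bool:
    return starting_offset_bytes % (64 * 1024) == 0

# Hypothetical example offsets in bytes
for offset in (1048576, 32256):
    state = "aligned" if is_64k_aligned(offset) else "NOT aligned"
    print(f"offset {offset} bytes: {state}")
```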


Testing Method and Results Collection

After playing around with IOmeter for some time and searching around to find a standard, I decided to use the tests from the very popular VMware Communities “Open unofficial storage performance thread.”  The exact ICF file I used can also be found there for download if you would like to run some of your own tests.  Regardless of the age of some of the posts, I still think this is the most relevant and fair measure possible.  These tests include:

  • Max Throughput-100% Read
  • RealLife-60% random – 65% Read
  • Max Throughput-50% Read
  • Random-8k 70% Read

The first important thing to mention is that only the VM generating the load was powered on during each test, so there should be no skewing from other workloads. I put the IOmeter result files into a set of Excel bar graphs.  I decided to base my results on what I call the big 3: total IOps, MBps, and average latency.
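
For anyone who wants to pull those same three numbers out of their own runs, here is a minimal sketch of how it could be scripted (the header substrings, the "ALL" summary row, and the results.csv file name are assumptions about the IOmeter export format and may need adjusting for your version):

```python
import csv

# Pull the "big 3" (total IOps, MBps, average response time) out of an
# IOmeter results CSV so they can be dropped straight into Excel.
# NOTE: the header substrings, the "ALL" summary row, and the file name are
# assumptions about the export format -- adjust them to match your output.

def big_three(path):
    with open(path, newline="") as f:
        rows = [r for r in csv.reader(f) if r]

    header = next(r for r in rows if any("IOps" in cell for cell in r))
    summary = next(r for r in rows if r[0].strip().upper() == "ALL")

    def col(name):
        idx = next(i for i, cell in enumerate(header) if name.lower() in cell.lower())
        return float(summary[idx])

    return {
        "total_iops": col("IOps"),
        "mbps": col("MBps"),
        "avg_latency_ms": col("Average Response Time"),
    }

if __name__ == "__main__":
    print(big_three("results.csv"))  # hypothetical file name
```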

NFS Final Performance Results

NFS Max Throughput 100% Read

[graph]

RealLife-60% random – 65% Read

[graph]

Max Throughput-50% Read

[graph]

Random-8k 70% Read

[graph]
Final Conclusions for NFS and Closing

As you may have noticed from the assorted graphs, the two winners in the NFS performance tests were QNAP and Synology.  QNAP appears to be slightly better at more random workloads, such as a real-life vSphere environment, and Synology seems to be way ahead of most of the arrays in solid sequential storage access – which would be perfect for a backup device.  However, they all seem to have high average response times during random workloads, which in my opinion makes or breaks how well an environment runs.  From this first look I would say most of these NAS devices would be just fine for shared storage in a very small lab environment, as a possible backup target, or for something as simple as a fileserver volume.  In my next post we will put it all together with the iSCSI results and declare the final winner of the SMB Shared Storage Smackdown!