Can you combine vSphere Host Cache and vFlash on a single SSD?

One of the new features in vSphere 5.5 is vSphere vFlash, which enables you to use an SSD/flash device as a read cache for your storage. Duncan Epping has a series of posts on vSphere Flash Cache that is well worth a read.

vSphere vFlash caches your read IOs, but at the same time you can use it as a swap device if you run into memory contention issues. The vSphere vFlash Host Cache is similar to the older Host Cache feature, but if you are upgrading from an older version of ESXi there are a couple of things that need to be done before you can use this feature.

If you had the “old” Host Cache enabled before upgrading to v5.5, you have to delete the dedicated Host Cache datastore and re-create a new vSphere vFlash resource to be able to use both vFlash Host Cache and vSphere Flash Read Cache on the same SSD/Flash device.
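Once the new vFlash resource is in place, you can sanity-check it from the ESXi shell. A minimal sketch, assuming the `esxcli storage vflash` and `esxcli sched swap` namespaces introduced in ESXi 5.5 (device names and output will of course differ in your environment):

```shell
# List the flash devices backing (or eligible for) the vFlash resource
esxcli storage vflash device list

# List any configured vFlash caches
esxcli storage vflash cache list

# Verify the system swap / host cache settings picked up the new resource
esxcli sched swap system get
```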

Also note that vFlash Read Cache is only available for VMs that run in ESXi 5.5 Compatibility Mode, aka Virtual Hardware Version 10, and it is enabled per VMDK in the VM's settings.

Now you can utilize vFlash both to accelerate your read IOs and to speed up your host if you run into swapping issues. Good deal!

Testing VMware vSphere 5 Swap to Host Cache

A little while ago I fitted a small 64GB SSD disk to my HP MicroServer just to have a quick look at the new vSphere 5 feature Swap to Host Cache, where vSphere 5 reclaims memory by storing the swapped out pages in the host cache on a solid-state drive. Naturally, this is a lot faster than swapping to non-SSD storage, but you will still see a performance hit when this happens. For more details on Swap to Host Cache, have a look at Swap to host cache aka swap to SSD? by Duncan Epping.
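While the host cache itself is configured through the vSphere Client, the resulting swap behaviour can be inspected and toggled from the ESXi shell. A sketch, assuming the `esxcli sched swap` namespace available in ESXi 5.x:

```shell
# Show the current system swap / host cache configuration
esxcli sched swap system get

# Enable swapping to the host cache (the SSD-backed host cache
# datastore is still created through the vSphere Client)
esxcli sched swap system set --hostcache-enabled true
```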

Now, in my minuscule home lab setting it’s somewhat hard to get any real, tangible performance metrics, so my little experiment is non-scientific and only meant to illustrate what swap to host cache gone wild would look like in a real world environment.

After installing the SSD drive, and configuring Swap to Host Cache, I created two VMs ingeniously called hostcacheA and hostcacheB. Both were configured with 14GB of memory, which should nicely overload my host that has a whopping 8GB of memory in total.

Now, with memory features like ballooning, transparent page sharing, and memory compression in play, I needed to make sure that the memory was actually used, and that it contained different datasets, so that the host cache would actually kick in.

To make sure of this, I downloaded the latest ISO version of Memtest86+ and connected it to the empty VMs.

When starting the VMs, they immediately started testing their available memory and sure enough, they started eating into the host cache.

As you can see from the screenshot below, the longer the memtest ran the more host cache was utilized.
Bonus points for figuring out when the test VMs were shut down…
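If you want to put rough numbers on a run like this, esxtop can capture the memory counters while the test VMs hammer away. A sketch; the sample count, interval, and output file name are arbitrary:

```shell
# Capture 12 samples at 5-second intervals in batch (CSV) mode
esxtop -b -d 5 -n 12 > hostcache-run.csv

# Or interactively: run esxtop, press 'm' for the memory view,
# and watch the swap columns (e.g. SWCUR, SWR/s, SWW/s) per VM
```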

So there it is, performance graphs showing that the host cache is indeed kicking in and getting a run for its money. Since this was a non-scientific experiment, I don’t have any real performance counters or metrics to base any sort of conclusion on. All I was after was to see if it came alive, and clearly it did.