Tag Archives: Virtualization

VSAN – The Unspoken Future

This rather tongue-in-cheek title is a play on Maish's recent VSAN – The Unspoken Truth post, where he highlights what he sees as one of the hidden “problems” with the current version of VSAN: its inherent non-blade compatibility and the current lack of “rack-based VSAN ready nodes”.

Of course, this is a reality; if you base your current infrastructure on blade servers, VSAN probably isn't a good match as it stands today. Chances are that if you are currently running a blade-based datacenter, you have traditional external storage on the back end, and that you will, for quite some time, be running a form factor that VSAN simply isn't designed for. I don't disagree with Maish on that conclusion, not one bit.

But what about the next server refresh? One of the things VSAN facilitates, along with enhancements in the storage industry, is the ability to move to other form factors. Currently Supermicro offers its rather nice-looking FatTwin™ Server Solution. If we look at what the SYS-F617R2-R72+ box offers in a total rack space of 4U (less than most blade chassis), it is clear that the form factor choices will not just be tower or blade; they will also include new form factors that are currently not at the forefront of people's minds when designing a data center.

Looking at the Supermicro box again: in a 4U rack footprint it houses eight nodes, and each node offers these maximums:

  • 2 x Intel® Xeon® processor E5-2600
  • Up to 1TB DDR3 ECC LRDIMM
  • 6x Hot-swap 2.5″ SAS2/SATA HDD trays

So, across those eight nodes in 4U, you get 16 CPUs, 8TB of RAM and 48 SAS2/SATA bays. Stick a couple of those in your rack, add a few 10GbE ports, and then try to do something similar with a blade infrastructure!

Now, of course, VSAN isn't for everyone, nor is it designed to be. In a way, VSAN offers a peek at the future of datacenter design, in the same way that it shows us that the Software-Defined Data Center (SDDC) is not just about the software; it's about how we think about, manage AND design our back-end infrastructure. It's not just storage vendors that need to take heed and look at what they are offering; the same goes for “traditional” server/node vendors.

That's right: a server is becoming a node, and which vendor's sticker is on the front might not matter that much in the future.

The future is already here – it’s just not evenly distributed.
— William Gibson

[Header photo credit: www.LendingMemo.com]

Automatically Name Datastores in vSphere?

William Lam posted “Why you should rename the default VSAN Datastore name”, where he outlines why the default name for VSAN datastores should be changed. Of course, I completely agree with his views on this; leaving it at the default might cause confusion down the line.

At the end of the post, William asks the following:

I wonder if it would be useful to have a feature in VSAN to automatically append the vSphere Cluster name to the default VSAN Datastore name? What do you think?

The answer to that is quite simple too: yes. It would be great to be able to append the cluster name automatically.

But this got me thinking: wouldn't it be even better to use the same kind of naming-pattern scheme for datastores that we get when provisioning Horizon View desktops? In fact, this should also be an option for other datastores, not just when using VSAN.

Imagine the possibilities if you could define datastore naming schemes in your vCenter, and add a few variables like this, for instance: {datastoretype}-{datacentername}-{clustername/hostname}-{fixed:03}.

Then you could get automatic, and perhaps even sensible, datastore naming like this:

local-hqdc-esxi001-001
iscsi-hqdc-cluster01-001
nfs-hqdc-cluster01-001
fc-hqdc-cluster01-001
vsan-hqdc-cluster01-001 

And so on… I'm sure there are other, potentially even more useful, variables that could be used here, perhaps even incorporating something about tiering and SLAs (platinum/gold/silver etc.), but that would require that you know the storage characteristics and how they map to your naming scheme when it gets defined. But yes, we do need to be able to automatically name our datastores in a coherent manner, regardless of storage type.
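To make the idea concrete, here is a minimal sketch in Python of what such pattern expansion could look like. The variables ({datastoretype} and friends) are the hypothetical ones from the example above, not an existing vCenter feature:

```python
# Hypothetical datastore-naming helper: expands a pattern like the one
# sketched above into concrete datastore names. Nothing here talks to
# vCenter; it only illustrates the pattern-expansion idea.

def expand_datastore_name(pattern, datastoretype, datacentername,
                          cluster_or_host, sequence):
    """Fill in the naming-pattern variables; {fixed:03} becomes a
    zero-padded sequence number."""
    return (pattern
            .replace("{datastoretype}", datastoretype)
            .replace("{datacentername}", datacentername)
            .replace("{clustername/hostname}", cluster_or_host)
            .replace("{fixed:03}", "%03d" % sequence))

if __name__ == "__main__":
    pattern = "{datastoretype}-{datacentername}-{clustername/hostname}-{fixed:03}"
    # Reproduces the examples above:
    print(expand_datastore_name(pattern, "local", "hqdc", "esxi001", 1))   # local-hqdc-esxi001-001
    print(expand_datastore_name(pattern, "vsan", "hqdc", "cluster01", 1))  # vsan-hqdc-cluster01-001
```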

After all, we're moving to a model of policy-based computing; shouldn't the naming of objects like datastores also be governed by policy, defined at the datacenter level in vCenter? (Wait a minute, why don't we do the same for hosts joined to a datacenter or cluster?)

 

Can you combine vSphere Host Cache and vFlash on a single SSD?

One of the new features in vSphere 5.5 is vSphere vFlash, which enables you to use an SSD/Flash device as a read cache for your storage. Duncan Epping has a series of posts on vSphere Flash Cache that is well worth a read.

vSphere vFlash caches your read IOs, but at the same time you can use it as a swap device if you run into memory contention issues. The vSphere vFlash Host Cache is similar to the older Host Cache feature, but if you are upgrading from an older version of ESXi, there are a couple of things that need to be done before you can use it.

If you had the “old” Host Cache enabled before upgrading to v5.5, you have to delete the dedicated Host Cache datastore and create a new vSphere vFlash resource to be able to use both vFlash Host Cache and vSphere Flash Read Cache on the same SSD/Flash device.

Also note that vFlash Read Cache is only available for VMs that run in ESXi 5.5 Compatibility Mode, aka Virtual Hardware Version 10, and it is enabled per VMDK in the VM's settings.
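For the API-minded, here is a rough pyVmomi sketch of what “enabled per VMDK” maps to in the vSphere 5.5 API: each virtual disk carries a VFlashCacheConfigInfo with a read-cache reservation, applied through a normal reconfigure task. The vCenter address, credentials and the VM/disk selection are of course just placeholders:

```python
# Rough sketch: give the first virtual disk of a VM a 1GB vFlash read
# cache reservation via the vSphere 5.5 API. The connection details and
# the VM name "testvm01" are assumptions for illustration only.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="secret")
content = si.RetrieveContent()

# Find the VM by name (it must be at Virtual Hardware Version 10).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "testvm01")
view.Destroy()

# Pick the first virtual disk and attach a read-cache reservation.
disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk))
disk.vFlashCacheConfigInfo = vim.vm.device.VirtualDisk.VFlashCacheConfigInfo(
    reservationInMB=1024)

# Push the change back as an "edit" of the existing disk.
spec = vim.vm.ConfigSpec(deviceChange=[
    vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
        device=disk)])
vm.ReconfigVM_Task(spec=spec)
Disconnect(si)
```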

Now you can utilize vFlash both to accelerate your read IOs and to speed up your host if you run into swapping issues. Good deal!