VMware VSAN – More than meets the eye.

Way back in 2014 I wrote a piece called VSAN – The Unspoken Future, and I think it’s about time it got a revision. Of course, lots of things have happened to VSAN since then and even more is on the way, but I think there is more to this than adding features like erasure coding, deduplication and compression. All of these are important features, and frankly they need to be in a product that aims a lot higher than you might think.


At the moment, VSAN provides storage internally to a vSphere cluster. If you want to use that storage in other ways, you either have to share it from a VM over the network or use NexentaConnect for VMware Virtual SAN.

Yesterday, VMUG.it shared the following photo from Duncan Epping’s “Goodbye SAN Huggers, Hello Virtual SAN” session at their VMUG UserCon:

Generic Object Storage Platform

Look closely at that one for a minute. What are Duncan and VMware telling us here, if you squint your eyes and try to read between the lines? For me, this slide was a bit of a lightbulb moment: VMware wants to turn VSAN into a generic storage provider in the data center. You need storage of some sort? VSAN will provide it, even if your applications are not located on the same cluster. Object based? Sure. Block? Sure. REST? Sure, that’s what the cool kids do. VMFS? Only if you need to run a VM.

Couple this with the vSphere Integrated Containers and Photon Platform announcements, where VMware is already talking about the microvisor. Now remove the vSphere layer in the slide above and replace it with some variant of the VSAN ROBO witness appliance, one that runs just enough to provide policy-based storage services. Once you have those two bits talking to each other, you don’t need the traditional vSphere layer to provide hardware virtualization at all for those cloud-native apps. Add NSX to the mix, with network policies that follow the application, and you have a portable application infrastructure that can run pretty much anywhere you prefer. At VMworld 2015, VMware showed NSX for Multi-Hypervisor running on AWS, extending the network from on-premises to Amazon. Why not do the same with storage? Want cloud-based storage? Sure, add the little VSAN layer in front of your provider’s storage offering, and boom: instant policy management and portability.

And of course, VMware will be there to provide you with the management and monitoring layer for all of this – even if you don’t run vSphere.

VMware is getting ready for the post-virtualization, multi-platform world, no question about it. Are You Ready for Any?


VSAN – The Unspoken Future

This rather tongue-in-cheek title is a play on Maish’s recent VSAN – The Unspoken Truth post, where he highlights what he sees as one of the hidden “problems” with the current version of VSAN: its inherent non-blade compatibility and the current lack of “rack based VSAN ready nodes”.

Of course, this is a reality; if you base your current infrastructure on blade servers, VSAN probably isn’t a good match as it stands today. Chances are that if you are currently running a blade-based datacenter, you have traditional external storage on the back end, and that you will, for quite some time, be running a form factor that VSAN simply isn’t designed for. I don’t disagree with Maish on that conclusion, not one bit.

But what about the next server refresh? One of the things that VSAN facilitates, along with enhancements in the storage industry, is the ability to move to other form factors. Currently, Supermicro offers their rather nice-looking FatTwin™ Server Solution. If we look at what the SYS-F617R2-R72+ box offers in a total rack space of 4U (less than most blade chassis), it is clear that the form factor choices will not just be tower or blade; they will also include new form factors that are currently not at the forefront of people’s minds when designing a data center.

Looking at the Supermicro box again, in a 4U rack footprint it offers these maximums per node:

  • 2 x Intel® Xeon® processor E5-2600
  • Up to 1TB DDR3 ECC LRDIMM
  • 6 x Hot-swap 2.5″ SAS2/SATA HDD trays

So, with eight nodes in the chassis, that means that in 4U you can get 16 CPUs, 8TB of RAM and 48 SAS2/SATA bays. Stick a couple of those in your rack, add a few 10GbE ports, and then try to do something similar with a blade infrastructure!

Now, of course, VSAN isn’t for everyone, nor is it designed to be. In a way, VSAN offers a peek into the future of datacenter design, in the same way that it shows us that the Software-Defined Data Center (SDDC) is not just about the software; it’s about how we think about, manage AND design our back-end infrastructure. It’s not just storage vendors that need to take heed and look at what they are offering; the same goes for “traditional” server/node vendors.

That’s right: a server is becoming a node, and which vendor’s sticker is on the front might not matter that much in the future.

The future is already here – it’s just not evenly distributed.
— William Gibson

[Header photo credit: www.LendingMemo.com]

Configuring VSAN on a Dell PowerEdge VRTX


The Dell PowerEdge VRTX shared infrastructure platform is interesting, and I’ve been lucky enough to be able to borrow one from Dell for testing purposes.

One of the things I wanted to test was whether it was possible to run VMware VSAN on it, even though the Shared PERC8 RAID controller it comes with is not on the VMware VSAN HCL, nor does it provide a passthrough mode to present raw disks directly to the hosts.

My test setup consists of:

  • 1 Dell PowerEdge VRTX
    • SPERC8
      • 7 x 300GB 15k SAS drives
    • 2 x Dell PowerEdge M520 blades
      • 2 x 6-core Intel Xeon E5-2420 @ 1.90GHz CPUs
      • 32 GB RAM
      • 2 x 146GB 15k SAS drives

Both M520 blades were installed with ESXi 5.5, which is not a supported configuration from Dell. Dell has only certified ESXi 5.1 for use on the VRTX, but 5.5 seems to work just fine, with one caveat: drivers for the SPERC8 controller are not included in the Dell customized image for ESXi 5.5. To get access to the volumes presented by the controller, the 6.801.52.00 megaraid-sas driver needs to be installed after installing ESXi 5.5.
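If you have the driver’s offline bundle available on a local datastore, installing it is a quick esxcli one-liner followed by a reboot. A minimal sketch, where the datastore path and bundle filename are placeholders, so adjust both to match your actual download:

[cc lang="bash" width="100%" theme="blackboard" nowrap="0"]
# Install the megaraid-sas offline bundle (path and filename are illustrative)
~ # esxcli software vib install -d /vmfs/volumes/datastore1/megaraid_sas-6.801.52.00-offline_bundle.zip
# A reboot is needed before the new driver is loaded
~ # reboot
[/cc]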

Once that is installed, the volumes will appear as storage devices on the host.
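A quick way to verify this from the shell is to list the storage devices and look for the new naa.* entries, for instance:

[cc lang="bash" width="100%" theme="blackboard" nowrap="0"]
# List all storage devices; the SPERC8 virtual disks show up as naa.* devices
~ # esxcli storage core device list | grep "Display Name"
[/cc]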

Sadly, the SPERC8 controller does not support passthrough for disks in the PowerEdge VRTX chassis, something VSAN wants (for details, check VSAN and Storage Controllers). For testing purposes, though, there is a way around it.

By creating several RAID0 virtual disks on the controller, each containing only one physical disk, and assigning these to dedicated hosts in the chassis, it is possible to present the disks to ESXi in a manner that VSAN can work with:

Dell PowerEdge VRTX VSAN Virtual Disks

A total of six RAID0 volumes have been created, three for each host.

Dell PowerEdge VRTX Disk Assignment

Each host is granted exclusive access to three disks, which are then presented as storage devices in vCenter:

RawDisksVRTX

Since I don’t have any SSD drives in the chassis, something VSAN requires, I also had to fool ESXi into believing that one of the drives was in fact an SSD. This is done by changing the claim rule for the given device. Find the device ID in the vSphere Client, and run the following commands to mark it as an SSD:

(Check KB2013188, “Enabling the SSD option on SSD based disks/LUNs that are not detected as SSD by default”, for details.)

[cc lang="bash" width="100%" theme="blackboard" nowrap="0"]
~ # esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device naa.6b8ca3a0edc7a9001a961838899ee72a --option=enable_ssd
~ # esxcli storage core claiming reclaim -d naa.6b8ca3a0edc7a9001a961838899ee72a
[/cc]
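Once the device has been reclaimed, you can verify that ESXi now sees it as an SSD:

[cc lang="bash" width="100%" theme="blackboard" nowrap="0"]
# Should now report "Is SSD: true" for the device
~ # esxcli storage core device list -d naa.6b8ca3a0edc7a9001a961838899ee72a | grep "Is SSD"
[/cc]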

Once that part is taken care of, the rest of the setup is done by following the VSAN setup guides found in the beta community.
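For reference, the host-side part of that setup can also be done from the command line. The sketch below uses placeholder device IDs and a placeholder cluster UUID, and is no substitute for the official guides, but it shows the gist of it:

[cc lang="bash" width="100%" theme="blackboard" nowrap="0"]
# Join the host to a VSAN cluster (use the same UUID on every host; placeholder shown)
~ # esxcli vsan cluster join -u <cluster-uuid>
# Create a disk group from the fake SSD and the remaining HDDs (placeholder device IDs)
~ # esxcli vsan storage add -s naa.<ssd-device> -d naa.<hdd-device>
# Verify cluster membership
~ # esxcli vsan cluster get
[/cc]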

VSAN Configured

Two Dell PowerEdge M520 nodes up and running, with VSAN, replicating between them inside a Dell PowerEdge VRTX chassis. Pretty nifty!

It is worth noting that in this setup, the SPERC8 is a single point of failure, as it provides disk access to all of the nodes in the cluster. This is not something you want in a production environment, but Dell does offer a second SPERC8 option for redundancy purposes in the PowerEdge VRTX.

I did not do any performance testing on this setup, mostly since I don’t have SSDs available for it, nor does it make much sense to do that kind of testing on a total of six HDD spindles; this is more a proof-of-concept setup than a production environment.