Rock the Vote 2014


Once again, it’s time to vote for the top VMware & virtualization blogs. As usual, Eric Siebert has opened up the floodgates and set up a voting system, and once again managed to create a lot of work for himself.

So, let’s all make it worth his while and get as many votes in as possible! There are a lot of blogs listed; this time there are over 300 in total.

Cast your vote, and get more information about the process:

Voting now open for the 2014 top VMware & virtualization blogs

Helpful hint: vNinja.net is listed in the general section, as well as under independent bloggers, and vSoup is listed under Podcasts.

Now go do your part.

Header image used under Creative Commons License (c) Tom Ryan

Configuring VSAN on a Dell PowerEdge VRTX

Dell PowerEdge VRTX


The Dell PowerEdge VRTX shared infrastructure platform is interesting, and I’ve been lucky enough to be able to borrow one from Dell for testing purposes.

One of the things I wanted to test was whether it is possible to run VMware VSAN on it, even though the Shared PERC8 RAID controller it comes with is not on the VMware VSAN HCL, nor does it provide a passthrough mode to present raw disks directly to the hosts.

My test setup consists of:

  • 1 Dell PowerEdge VRTX
    • SPERC 8
      • 7 x 300GB 15k SAS drives
    • 2 x Dell PowerEdge M520 blades
      • 2 x 6-core Intel Xeon E5-2420 @ 1.90 GHz CPUs
      • 32 GB RAM
      • 2 x 146GB 15k SAS drives

Both M520 blades were installed with ESXi 5.5, which is not a supported configuration from Dell. Dell has only certified ESXi 5.1 for use on the VRTX, but 5.5 seems to work just fine, with one caveat: drivers for the SPERC8 controller are not included in the Dell customized image for ESXi 5.5. To get access to the volumes presented by the controller, the 6.801.52.00 megaraid-sas driver needs to be installed after installing ESXi 5.5.
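For reference, here is a rough sketch of installing the driver from an offline bundle over SSH; the bundle filename below is just a placeholder for whatever the 6.801.52.00 megaraid-sas download is actually called:

# Copy the megaraid-sas offline bundle to the host first (e.g. via scp), then install it and reboot
~ # esxcli software vib install -d /tmp/megaraid_sas-6.801.52.00-offline_bundle.zip
~ # reboot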

Once that is installed, the volumes will appear as storage devices on the host.

Sadly, the SPERC8 controller does not support passthrough for disks in the PowerEdge VRTX chassis, something VSAN wants (for details, check VSAN and Storage Controllers). For testing purposes though, there is a way around it.

By creating several RAID0 virtual disks on the controller, each containing just a single physical disk, and assigning these to dedicated hosts in the chassis, it is possible to present the disks to ESXi in a manner that VSAN can work with:

Dell PowerEdge VRTX VSAN Virtual Disks

A total of six RAID0 volumes have been created, three for each host.

Dell PowerEdge VRTX Disk Assignment

Each host is granted exclusive access to three disks, resulting in them being presented as storage devices in vCenter.


Since I don’t have any SSD drives in the chassis, which is a requirement for VSAN, I also had to fool ESXi into believing one of the drives was in fact an SSD. This is done by changing the claim rule for the given device. Find the device ID in the vSphere Client, and run the following commands to mark it as an SSD:

(Check KB2013188 Enabling the SSD option on SSD based disks/LUNs that are not detected as SSD by default for details.)

~ # esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device naa.6b8ca3a0edc7a9001a961838899ee72a --option=enable_ssd
~ # esxcli storage core claiming reclaim -d naa.6b8ca3a0edc7a9001a961838899ee72a
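To verify that the change took effect, the device details can be checked again; "Is SSD" should now report true:

# Check the SSD flag on the device after reclaiming it
~ # esxcli storage core device list --device=naa.6b8ca3a0edc7a9001a961838899ee72a | grep -i ssd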

Once that part is taken care of, the rest of the setup is done by following the VSAN setup guides found in the beta community.
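For reference, the same steps can also be performed from the ESXi shell. A rough sketch, where the VMkernel interface, cluster UUID and device IDs are placeholders for your own values:

# Tag a VMkernel interface for VSAN traffic
~ # esxcli vsan network ipv4 add -i vmk1
# Join the host to a VSAN cluster (use the same UUID on both hosts)
~ # esxcli vsan cluster join -u 52a2f493-cd34-4a86-a9f5-6e6a2b9d1b6c
# Claim the "SSD" and the remaining HDDs into a disk group
~ # esxcli vsan storage add -s naa.ssd_device_id -d naa.hdd_device_id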

VSAN Configured

Two Dell PowerEdge M520 nodes up and running, with VSAN, replicating between them inside a Dell PowerEdge VRTX chassis. Pretty nifty!

It is worth noting that in this setup, the SPERC8 is a single point of failure, as it provides disk access to all of the nodes in the same cluster. This is not something you want in a production environment, but Dell does offer a second SPERC8 option for redundancy purposes in the PowerEdge VRTX.

I did not do any performance testing on this setup, mostly since I don’t have SSDs available for it, nor does it make much sense to do that kind of testing on a total of 6 HDD spindles; this is more a proof of concept setup than a production environment.

Automatically Name Datastores in vSphere?


William Lam posted “Why you should rename the default VSAN Datastore name”, where he outlines why the default name for VSAN datastores should be changed. Of course, I completely agree with his views on this; leaving it at the default might cause confusion down the line.

At the end of the post, William asks the following:

I wonder if it would be useful to have a feature in VSAN to automatically append the vSphere Cluster name to the default VSAN Datastore name? What do you think?

The answer to that is quite simple too: yes, it would be great to be able to append the cluster name automatically.

But this got me thinking: wouldn’t it be even better to use the same kind of naming pattern scheme we get when provisioning Horizon View desktops, when we provision datastores? In fact, this should also be an option for other datastores, not just when using VSAN.

Imagine the possibilities if you could define datastore naming schemes in your vCenter, and add a few variables like this, for instance: {datastoretype}-{datacentername}-{clustername/hostname}-{fixed:03}.

Then you could get automatic, and perhaps even sensible, datastore naming like this:

local-hqdc-esxi001-001
iscsi-hqdc-cluster01-001
nfs-hqdc-cluster01-001
fc-hqdc-cluster01-001
vsan-hqdc-cluster01-001 

And so on… I’m sure there are other, potentially even more useful, variables that could be used here, perhaps even incorporating something about tiering and SLAs (platinum/gold/silver etc.), but that would require knowing the storage characteristics and how they map to your naming scheme when it gets defined. But yes, we do need to be able to automatically name our datastores in a coherent manner, regardless of storage type.
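Just to illustrate how such a pattern could expand, here is a tiny shell sketch; the variables and the pattern itself are hypothetical, nothing like this exists in vCenter today:

# Hypothetical pattern: {datastoretype}-{datacentername}-{clustername}-{fixed:03}
~ # DATASTORETYPE=vsan DATACENTER=hqdc CLUSTER=cluster01 SEQ=1
~ # printf "%s-%s-%s-%03d\n" "$DATASTORETYPE" "$DATACENTER" "$CLUSTER" "$SEQ"
vsan-hqdc-cluster01-001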

After all, we’re moving to a model of policy based computing; shouldn’t the naming of objects like datastores also be ruled by policy, defined at a datacenter level in vCenter? (Wait a minute, why don’t we do the same for hosts joined to a datacenter or cluster?)