Configuring VSAN on a Dell PowerEdge VRTX


The Dell PowerEdge VRTX shared infrastructure platform is interesting, and I’ve been lucky enough to be able to borrow one from Dell for testing purposes.

One of the things I wanted to test was whether it was possible to run VMware VSAN on it, even though the Shared PERC8 RAID controller it comes with is not on the VMware VSAN HCL, nor does it provide a passthrough mode to present raw disks directly to the hosts.

My test setup consists of:

  • 1 Dell PowerEdge VRTX
    • SPERC 8
      • 7 x 300GB 15k SAS drives
    • 2 x Dell PowerEdge M520 blades
      • 2 x 6-core Intel Xeon E5-2420 @ 1.90 GHz CPUs
      • 32 GB RAM
      • 2 x 146GB 15k SAS drives

Both M520 blades were installed with ESXi 5.5, which is not a supported configuration from Dell. Dell has only certified ESXi 5.1 for use on the VRTX, but 5.5 seems to work just fine, with one caveat: drivers for the SPERC8 controller are not included in the Dell-customized image for ESXi 5.5. To get access to the volumes presented by the controller, the 6.801.52.00 megaraid-sas driver needs to be installed after ESXi 5.5 itself.

Once that is installed, the volumes will appear as storage devices on the host.
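As a sketch of what that driver install typically looks like (the bundle filename and path below are assumptions on my part; grab the actual 6.801.52.00 offline bundle and adjust accordingly):

```shell
# Copy the offline bundle to the host first, e.g. to /tmp (filename is hypothetical)
~ # esxcli software vib install -d /tmp/megaraid-sas-6.801.52.00-offline_bundle.zip

# Reboot, then confirm the module loaded and the SPERC8 volumes show up
~ # esxcli system module list | grep megaraid
~ # esxcli storage core device list
```

A reboot is required before the new driver binds to the controller.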

Sadly the SPERC8 controller does not support passthrough for disks in the PowerEdge VRTX chassis, something VSAN wants (for details, check VSAN and Storage Controllers). For testing purposes, though, there is a way around it.

By creating several RAID0 virtual volumes on the controller, each containing only one disk, and assigning these volumes to dedicated hosts in the chassis, it is possible to present the disks to ESXi in a manner that VSAN can work with:

Dell PowerEdge VRTX VSAN Virtual Disks

A total of six RAID0 volumes have been created, three for each host.

Dell PowerEdge VRTX Disk Assignment

Each host is granted exclusive access to three disks, resulting in them being presented as storage devices in vCenter.


Since I don’t have any SSD drives in the chassis, and an SSD is a requirement for VSAN, I also had to fool ESXi into believing one of the drives was in fact an SSD. This is done by adding a claim rule for the given device. Find the device ID in the vSphere Client, and run the following commands to mark it as an SSD:

(Check KB2013188 Enabling the SSD option on SSD based disks/LUNs that are not detected as SSD by default for details.)

~ # esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device naa.6b8ca3a0edc7a9001a961838899ee72a --option=enable_ssd
~ # esxcli storage core claiming reclaim -d naa.6b8ca3a0edc7a9001a961838899ee72a
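To verify that the reclaim worked, the device details should now report the drive as an SSD; a quick check, using the device ID from my setup:

```shell
~ # esxcli storage core device list -d naa.6b8ca3a0edc7a9001a961838899ee72a | grep "Is SSD"
```

If everything went well, this should report "Is SSD: true".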

Once that part is taken care of, the rest of the setup is done by following the VSAN setup guides found in the beta community.
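For reference, the per-host steps from those guides can also be done from the command line; a hedged sketch of the esxcli equivalents (the VMkernel interface, cluster UUID, and second device ID below are placeholders, substitute your own values):

```shell
# Tag a VMkernel interface for VSAN traffic (vmk1 is a placeholder)
~ # esxcli vsan network ipv4 add -i vmk1

# Join the VSAN cluster (the UUID is a placeholder, shared by all hosts in the cluster)
~ # esxcli vsan cluster join -u 52xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

# Claim the fake SSD plus an HDD into a disk group (second device ID is a placeholder)
~ # esxcli vsan storage add -s naa.6b8ca3a0edc7a9001a961838899ee72a -d naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

In practice the vSphere Web Client can do all of this for you in automatic mode; the commands are mostly useful for troubleshooting.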

VSAN Configured

Two Dell PowerEdge M520 nodes up and running, with VSAN, replicating between them inside a Dell PowerEdge VRTX chassis. Pretty nifty!

It is worth noting that in this setup, the SPERC8 is a single point of failure, as it provides disk access to all of the nodes in the same cluster. This is not something you want in a production environment, but Dell does offer a second SPERC8 option for redundancy purposes in the PowerEdge VRTX.

I did not do any performance testing on this setup, mostly since I don’t have SSDs available for it, nor does it make much sense to do that kind of testing on a total of 6 HDD spindles.
This is more a proof-of-concept setup than a production environment.


    1. Yes, you can put an SSD drive in as a local drive in the M520 blade. Or you could put SSDs in the shared drive cage that the SPERC8 controller accesses; both scenarios have merits. I didn’t mean that you could not do that, it’s just that the loaner VRTX I have doesn’t have any SSD drives, and I don’t have any spare ones laying around, hence the “faking” of SSD drives to get VSAN configured.

  1. What are we achieving here? The VRTX is designed to share the storage across the nodes, and vSAN does the same. The VRTX is a HW implementation of vSAN, and vice versa.
    I don’t see any additional benefit in configuring vSAN on a VRTX

  2. We are achieving quite a lot actually:

    1. SAN acceleration via Flash/SSD (admittedly not in my test setup, due to the lack of SSD drives, but it’s possible) – Something that the SPERC8 controller cannot do
    2. Storage Policy Based Management (object based storage), policies that follow the VM. Not possible with the SPERC8 controller.
    3. If you have two VRTX boxes, you can replicate between them and get a proper distributed RAIN setup.
    4. Dynamic scaling, both for performance and capacity. There is no way to extend an existing volume on the SPERC8

    As you can see, VSAN does more, much more, than the SPERC8 can do. Of course, running it on a single VRTX box doesn’t really give you any redundancy, but that was not the point of the post.

  3. I agree that the VRTX would make an excellent VSAN platform. Imagine a setup with 4 compute nodes on the VRTX chassis. Each node with two SSDs on it (one for VSAN, one spare). 25 1.2TB SAS drives on the chassis itself. Each node on the VSAN would have 1 disk group with 1 SSD and 6 SAS drives. Add 10Gb connectivity to the network. Still cheaper and less complex than a traditional compute/storage array model.

    Now put 4 chassis together for a 16 node VSAN cluster or 8 for a 32 node VSAN cluster (the current max.)

    1. I am not sure this would be a very good setup really. It’s as Christian is saying: it is a single controller in the chassis for all the disks, and it somewhat defeats the point of VSAN, where all nodes are their own entities with direct access to disks, not sharing the throughput with other hosts. Not to mention the redundancy issue with one controller, but that can be mitigated with a second controller. I don’t know how Dell handles the redundancy though; is it an active/passive controller setup, or is it active/active when you have two controllers running? It’s a tempting setup, but for best redundancy and performance, I’d prefer a chassis where you could “path in” the disks directly to each node, and use them with pass-through as VSAN is meant to.

  4. Any experiences with the VRTX accepting third-party disks (e.g. SSDs)? Are there differences between SPERC8 firmware versions that adjust this policy?
