Patch those Dell Servers easily!

Did you know that Dell has a bootable Linux ISO with firmware upgrades for their servers available? Neither did I, but luckily I found it today when needing it at a customer site.

Check out Updating Dell PowerEdge servers via bootable media / ISO where you can download bootable ISOs for specific server models, and get all your firmware upgraded in one fell swoop.

Be warned though, it takes quite some time to run through it all (125 update packages in this instance), but it sure beats installing manually, creating your own bootable ISO with the Dell Repository Manager, or even using the Dell Server Update Utility. Especially when it’s been a while since the server was patched; the ones I used this on had not been touched since 2012…

Fire and forget via iDRAC, I can like that. Just make sure you update the iDRAC firmware before you boot the ISO, or you might get disconnected midway and have to start over.
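If you want to script the fire-and-forget part, racadm can attach the ISO as virtual media and set a one-time boot override. A rough sketch, assuming a reasonably recent iDRAC; the IP address, credentials and share path below are all placeholders:

[cc lang="bash" width="100%" theme="blackboard" nowrap="0"]
# Attach the update ISO as virtual media from a CIFS share (addresses and credentials are placeholders)
racadm -r 192.168.1.120 -u root -p calvin remoteimage -c -l //fileserver/share/dell-update.iso -u shareuser -p sharepass

# Boot from the virtual CD/DVD once, then power cycle the server
racadm -r 192.168.1.120 -u root -p calvin config -g cfgServerInfo -o cfgServerBootOnce 1
racadm -r 192.168.1.120 -u root -p calvin config -g cfgServerInfo -o cfgServerFirstBootDevice VCD-DVD
racadm -r 192.168.1.120 -u root -p calvin serveraction powercycle
[/cc]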

 

Configuring VSAN on a Dell PowerEdge VRTX


The Dell PowerEdge VRTX shared infrastructure platform is interesting, and I’ve been lucky enough to be able to borrow one from Dell for testing purposes.

One of the things I wanted to test was whether it was possible to run VMware VSAN on it, even though the Shared PERC8 RAID controller it comes with is not on the VMware VSAN HCL, nor does it provide a passthrough mode to present raw disks directly to the hosts.

My test setup consists of:

  • 1 Dell PowerEdge VRTX
    • SPERC8
      • 7 x 300GB 15k SAS drives
    • 2 x Dell PowerEdge M520 blades
      • 2 x 6-core Intel Xeon E5-2420 @ 1.90GHz CPUs
      • 32 GB RAM
      • 2 x 146GB 15k SAS drives

Both M520 blades were installed with ESXi 5.5, which is not a supported configuration from Dell. Dell has only certified ESXi 5.1 for use on the VRTX, but 5.5 seems to work just fine, with one caveat: drivers for the SPERC8 controller are not included in the Dell customized image for ESXi 5.5. To get access to the volumes presented by the controller, the 6.801.52.00 megaraid-sas driver needs to be installed after installing ESXi 5.5.
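Assuming the driver has been downloaded as an offline bundle and uploaded to a datastore first, installing it is a one-liner; the bundle filename and datastore path below are placeholders:

[cc lang="bash" width="100%" theme="blackboard" nowrap="0"]
# Install the megaraid-sas driver from an offline bundle, then reboot the host
~ # esxcli software vib install -d /vmfs/volumes/datastore1/megaraid-sas-6.801.52.00-offline_bundle.zip
~ # reboot
[/cc]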

Once that is installed, the volumes will appear as storage devices on the host.

Sadly, the SPERC8 controller does not support passthrough for disks in the PowerEdge VRTX chassis, something VSAN wants (for details, check VSAN and Storage Controllers). For testing purposes though, there is a way around it.

By creating several RAID0 Virtual Volumes on the controller, each containing only one disk, and assigning these disks to dedicated hosts in the chassis, it is possible to present the disks to ESXi in a manner that VSAN can work with:

Dell PowerEdge VRTX VSAN Virtual Disks

A total of six RAID0 volumes have been created, three for each host.

Dell PowerEdge VRTX Disk Assignment

Each host is granted exclusive access to three disks, resulting in them being presented as storage devices in vCenter:

Dell PowerEdge VRTX Raw Disks
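The same can be verified from the ESXi shell by listing the storage devices; each RAID0 virtual volume should show up as a separate naa.* device:

[cc lang="bash" width="100%" theme="blackboard" nowrap="0"]
# List the display names of all storage devices seen by the host
~ # esxcli storage core device list | grep -i "Display Name"
[/cc]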

Since I don’t have any SSD drives in the chassis, and VSAN requires at least one SSD per host, I also had to fool ESXi into believing one of the drives was in fact an SSD. This is done by adding a claim rule for the given device. Find the device ID in the vSphere Client, and run the following commands to mark it as an SSD:

(Check KB2013188, “Enabling the SSD option on SSD based disks/LUNs that are not detected as SSD by default”, for details.)

[cc lang="bash" width="100%" theme="blackboard" nowrap="0"]
~ # esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device naa.6b8ca3a0edc7a9001a961838899ee72a --option=enable_ssd
~ # esxcli storage core claiming reclaim -d naa.6b8ca3a0edc7a9001a961838899ee72a
[/cc]
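To confirm that the trick worked, check the device details afterwards; the Is SSD field should now read true:

[cc lang="bash" width="100%" theme="blackboard" nowrap="0"]
# Verify that the device is now reported as an SSD
~ # esxcli storage core device list -d naa.6b8ca3a0edc7a9001a961838899ee72a | grep -i "Is SSD"
   Is SSD: true
[/cc]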

Once that part is taken care of, the rest of the setup is done by following the VSAN setup guides found in the beta community.
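For reference, the host-side part of that setup can also be done from the command line. A minimal sketch, assuming vmk1 is the VMkernel interface designated for VSAN traffic, and that the same sub-cluster UUID (the one below is a placeholder) is reused on all hosts:

[cc lang="bash" width="100%" theme="blackboard" nowrap="0"]
# Tag vmk1 for VSAN traffic on each host
~ # esxcli vsan network ipv4 add -i vmk1

# Join each host to the same VSAN cluster (UUID is a placeholder; generate one and reuse it on every host)
~ # esxcli vsan cluster join -u 52b2af9e-65d2-08c2-cdc0-cd8e3a4486c2

# Claim disks into VSAN (-s takes the "SSD", -d the HDDs; the HDD device ID is a placeholder)
~ # esxcli vsan storage add -s naa.6b8ca3a0edc7a9001a961838899ee72a -d naa.5000c5006789abcd

# Verify cluster membership
~ # esxcli vsan cluster get
[/cc]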

VSAN Configured

Two Dell PowerEdge M520 nodes up and running, with VSAN, replicating between them inside a Dell PowerEdge VRTX chassis. Pretty nifty!

It is worth noting that in this setup, the SPERC8 is a single point of failure, as it provides disk access to all of the nodes in the cluster. This is not something you want in a production environment, but Dell does offer a second SPERC8 option for redundancy purposes in the PowerEdge VRTX.

I did not do any performance testing on this setup, mostly since I don’t have SSDs available for it, nor does it make much sense to do that kind of testing on a total of six HDD spindles. This is more a proof of concept setup than a production environment.

Installing Dell EqualLogic Multipathing Extension Module (MEM) in vSphere 5.1

Dell offers a Multipathing Extension Module (MEM) for vSphere, and in this post I’ll highlight how to “manually” install it on an ESXi 5.1 host. I will not cover the network setup part of the equation, but rather go through the simple steps required to get the MEM installed on the hosts in question.

First of all, you need to download the MEM installation package. At the time of writing, the latest version is v1.1.2.292203, available from equallogic.com. Once the archive file is acquired, unzip it and upload the dell-eql-mem-esx5-1.1.2.292203.zip file to a shared location available in your environment. For the example below, I have used a VMFS datastore that is available to all the hosts in this particular cluster.
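Copying the bundle to the datastore can be done straight over SSH; the hostname and datastore path below are just examples:

[cc lang="bash" width="100%" theme="blackboard" nowrap="0"]
# Copy the MEM bundle to a shared VMFS datastore (hostname and path are placeholders)
scp dell-eql-mem-esx5-1.1.2.292203.zip root@esxi01:/vmfs/volumes/vmfsvol/dell/
[/cc]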

Note: The host in question has already been put in maintenance mode, to make sure no VMs are actively using the storage paths while installing the module.
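If you prefer to handle that from the shell as well, vim-cmd can toggle maintenance mode, assuming any running VMs have already been migrated off the host:

[cc lang="bash" width="100%" theme="blackboard" nowrap="0"]
# Enter maintenance mode before installing, and exit it again once done
~ # vim-cmd hostsvc/maintenance_mode_enter
~ # vim-cmd hostsvc/maintenance_mode_exit
[/cc]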

The VIB file, which resides inside the dell-eql-mem-esx5-1.1.2.292203.zip file, can be installed using several techniques: the vMA, the vSphere Command-Line Interface (vSphere CLI), vSphere Update Manager, or even by logging in to the hosts via an SSH connection. In this case, I opted for doing it through SSH.

The command required to install the MEM is as follows:

[cc lang="bash" width="100%" theme="blackboard" nowrap="0"]
esxcli software vib install --depot /vmfs/volumes/vmfsvol/dell/dell-eql-mem-esx5-1.1.2.292203.zip
[/cc]

A completed installation looks like this:

[cc lang="bash" width="100%" theme="blackboard" nowrap="0"]

login as: root
Using keyboard-interactive authentication.
Password:
The time and date of this login have been sent to the system logs.

VMware offers supported, powerful system administration tools. Please
see www.vmware.com/go/sysadmintools for details.

The ESXi Shell can be disabled by an administrative user. See the
vSphere Security documentation for more information.
~ # esxcli software vib install --depot /vmfs/volumes/vmfsvol/dell/dell-eql-mem-esx5-1.1.2.292203.zip
Installation Result
Message: Operation finished successfully.
Reboot Required: false
VIBs Installed: Dell_bootbank_dell-eql-host-connection-mgr_1.1.1-268843, Dell_bootbank_dell-eql-hostprofile_1.1.0-212190, Dell_bootbank_dell-eql-routed-psp_1.1.1-262227
VIBs Removed:
VIBs Skipped:
~ #
[/cc]

I then restart the hostd process, to make sure that the multipath module is activated.

[cc lang="bash" width="100%" theme="blackboard" nowrap="0"]
~ # /etc/init.d/hostd restart
watchdog-hostd: Terminating watchdog process with PID 9592
hostd stopped.
hostd started.
~ #
[/cc]

Finally, a quick check to see that the new EqualLogic namespace is available, and that it is gathering statistics, i.e. being used:

[cc lang="bash" width="100%" theme="blackboard" nowrap="0"]
~ # esxcli equallogic stat summary
DeviceId                          VolumeName  PathCount  Reads  Writes  KBRead  KBWritten
--------------------------------  ----------  ---------  -----  ------  ------  ---------
6090A098E0DC5D9F71E6940292F8569C  vmvolume            2   2573      30   20429         14
6090A098D06C5A31CEDE44CC17CBF14B  test2t              2    651      30   13028         14
6090A098D06C4AF067EDD4C904C6A453  vmvolume3           2    642      30   10592         14
6090A098C08D5E928EE634938F42605B  vmvolume1           2   1834      30   20023         14
~ #
[/cc]
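To verify that the new path selection policy is actually in use for the EqualLogic volumes, the NMP device list should show the EqualLogic routed PSP (DELL_PSP_EQL_ROUTED on my hosts) for each of them:

[cc lang="bash" width="100%" theme="blackboard" nowrap="0"]
# Check which path selection policy each device is claimed by
~ # esxcli storage nmp device list | grep -i "Path Selection Policy:"
[/cc]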

 

 

Screenshots displaying the ESXi host path policy before, and after, installing the Dell EqualLogic MEM:

pre-mem-install

post-mem-install