Quick and Dirty ESXi 5.1U1 Upgrade

Now that VMware ESXi 5.1 Update 1 has been released, I decided to do a quick and dirty upgrade of my home installation. I refuse to call it a lab these days, since it's one singular host and all it does is contain my home domain controller…

Anyway, the following procedure upgraded the host from 5.1b to 5.1U1 by downloading the upgrade directly from VMware and installing it. Make sure the host is in maintenance mode before attempting this procedure.
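For reference, maintenance mode can also be toggled from the ESXi shell itself. A minimal sketch, assuming you would rather use esxcli than the vSphere Client (use -e false to exit maintenance mode again afterwards):

~ # esxcli system maintenanceMode set -e true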

Check the KB article Correlating vCenter Server and ESXi/ESX host build numbers to update levels (1014508) to determine which of the update files in the repository to download.
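If you want to see for yourself which image profiles the depot offers, esxcli can list them directly. A sketch; the full list is long, so filtering it with grep helps. Note that, like the update command below, this requires outgoing httpClient access on the host (see below):

~ # esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml | grep standard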

SSH into your host, and issue the following command:

~ # esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-5.1.0-20130402001-standard

This will initiate a download of the new ESXi version and install the update automatically. Be aware that it will not show a progress bar or any other indication while the file is being downloaded from the VMware repository.

The command above will only work, as Erik pointed out in the comments, if you have allowed outgoing access for the httpClient ruleset on the ESXi server. If not, you can enable it by running the following command on the host (or by using the vSphere Client):

~ # esxcli network firewall ruleset set -e true -r httpClient
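A quick way to verify that the ruleset is now enabled, before retrying the update:

~ # esxcli network firewall ruleset list | grep httpClient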

You can of course monitor your download with the vSphere Client (web or otherwise), to make sure nothing has stopped.

[Screenshot: monitoring the download in the vSphere Client (QuickDirtyUpgrade01)]

You can also monitor the upgrade process by looking at the VMware logs; see Monitoring the ESXi Upgrade Process, as Paul suggested in the comments.
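In practice this means tailing the esxupdate log on the host while the command runs; assuming the default log location on 5.x:

~ # tail -f /var/log/esxupdate.log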

Once it's done, it will look like this:

Update Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed: VMware_bootbank_esx-base_5.1.0-1.12.1065491, VMware_bootbank_esx-xserver_5.1.0-0.11.1063671, VMware_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.510.1.12.1065491, VMware_bootbank_misc-drivers_5.1.0-1.12.1065491, VMware_bootbank_net-bnx2_2.0.15g.v50.11-7vmw.510.1.12.1065491, VMware_bootbank_net-bnx2x_1.61.15.v50.3-1vmw.510.0.11.1063671, VMware_bootbank_net-e1000e_1.1.2-3vmw.510.1.12.1065491, VMware_bootbank_net-igb_2.1.11.1-3vmw.510.1.12.1065491, VMware_bootbank_net-ixgbe_3.7.13.6iov-10vmw.510.1.12.1065491, VMware_bootbank_net-tg3_3.123b.v50.1-1vmw.510.1.12.1065491, VMware_bootbank_scsi-megaraid-sas_5.34-4vmw.510.1.12.1065491, VMware_locker_tools-light_5.1.0-1.12.1065491
VIBs Removed: VMware_bootbank_esx-base_5.1.0-0.10.1021289, VMware_bootbank_esx-xserver_5.1.0-0.0.799733, VMware_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.510.0.0.799733, VMware_bootbank_misc-drivers_5.1.0-0.0.799733, VMware_bootbank_net-bnx2_2.0.15g.v50.11-7vmw.510.0.0.799733, VMware_bootbank_net-bnx2x_1.61.15.v50.3-1vmw.510.0.0.799733, VMware_bootbank_net-e1000e_1.1.2-3vmw.510.0.0.799733, VMware_bootbank_net-igb_2.1.11.1-3vmw.510.0.0.799733, VMware_bootbank_net-ixgbe_3.7.13.6iov-10vmw.510.0.0.799733, VMware_bootbank_net-tg3_3.110h.v50.4-4vmw.510.0.0.799733, VMware_bootbank_scsi-megaraid-sas_5.34-4vmw.510.0.0.799733, VMware_locker_tools-light_5.1.0-0.9.914609
VIBs Skipped: VMware_bootbank_ata-pata-amd_0.3.10-3vmw.510.0.0.799733, VMware_bootbank_ata-pata-atiixp_0.4.6-4vmw.510.0.0.799733, VMware_bootbank_ata-pata-cmd64x_0.2.5-3vmw.510.0.0.799733, VMware_bootbank_ata-pata-hpt3x2n_0.3.4-3vmw.510.0.0.799733, VMware_bootbank_ata-pata-pdc2027x_1.0-3vmw.510.0.0.799733, VMware_bootbank_ata-pata-serverworks_0.4.3-3vmw.510.0.0.799733, VMware_bootbank_ata-pata-sil680_0.4.8-3vmw.510.0.0.799733, VMware_bootbank_ata-pata-via_0.3.3-2vmw.510.0.0.799733, VMware_bootbank_block-cciss_3.6.14-10vmw.510.0.0.799733, VMware_bootbank_ehci-ehci-hcd_1.0-3vmw.510.0.0.799733, VMware_bootbank_esx-dvfilter-generic-fastpath_5.1.0-0.0.799733, VMware_bootbank_esx-tboot_5.1.0-0.0.799733, VMware_bootbank_esx-xlibs_5.1.0-0.0.799733, VMware_bootbank_ima-qla4xxx_2.01.31-1vmw.510.0.0.799733, VMware_bootbank_ipmi-ipmi-devintf_39.1-4vmw.510.0.0.799733, VMware_bootbank_ipmi-ipmi-msghandler_39.1-4vmw.510.0.0.799733, VMware_bootbank_misc-cnic-register_1.1-1vmw.510.0.0.799733, VMware_bootbank_net-be2net_4.1.255.11-1vmw.510.0.0.799733, VMware_bootbank_net-cnic_1.10.2j.v50.7-3vmw.510.0.0.799733, VMware_bootbank_net-e1000_8.0.3.1-2vmw.510.0.0.799733, VMware_bootbank_net-enic_1.4.2.15a-1vmw.510.0.0.799733, VMware_bootbank_net-forcedeth_0.61-2vmw.510.0.0.799733, VMware_bootbank_net-nx-nic_4.0.558-3vmw.510.0.0.799733, VMware_bootbank_net-r8168_8.013.00-3vmw.510.0.0.799733, VMware_bootbank_net-r8169_6.011.00-2vmw.510.0.0.799733, VMware_bootbank_net-s2io_2.1.4.13427-3vmw.510.0.0.799733, VMware_bootbank_net-sky2_1.20-2vmw.510.0.0.799733, VMware_bootbank_net-vmxnet3_1.1.3.0-3vmw.510.0.0.799733, VMware_bootbank_ohci-usb-ohci_1.0-3vmw.510.0.0.799733, VMware_bootbank_sata-ahci_3.0-13vmw.510.0.0.799733, VMware_bootbank_sata-ata-piix_2.12-6vmw.510.0.0.799733, VMware_bootbank_sata-sata-nv_3.5-4vmw.510.0.0.799733, VMware_bootbank_sata-sata-promise_2.12-3vmw.510.0.0.799733, VMware_bootbank_sata-sata-sil24_1.1-1vmw.510.0.0.799733, VMware_bootbank_sata-sata-sil_2.3-4vmw.510.0.0.799733, VMware_bootbank_sata-sata-svw_2.3-3vmw.510.0.0.799733, VMware_bootbank_scsi-aacraid_1.1.5.1-9vmw.510.0.0.799733, VMware_bootbank_scsi-adp94xx_1.0.8.12-6vmw.510.0.0.799733, VMware_bootbank_scsi-aic79xx_3.1-5vmw.510.0.0.799733, VMware_bootbank_scsi-bnx2i_1.9.1d.v50.1-5vmw.510.0.0.799733, VMware_bootbank_scsi-fnic_1.5.0.3-1vmw.510.0.0.799733, VMware_bootbank_scsi-hpsa_5.0.0-21vmw.510.0.0.799733, VMware_bootbank_scsi-ips_7.12.05-4vmw.510.0.0.799733, VMware_bootbank_scsi-lpfc820_8.2.3.1-127vmw.510.0.0.799733, VMware_bootbank_scsi-megaraid-mbox_2.20.5.1-6vmw.510.0.0.799733, VMware_bootbank_scsi-megaraid2_2.00.4-9vmw.510.0.0.799733, VMware_bootbank_scsi-mpt2sas_10.00.00.00-5vmw.510.0.0.799733, VMware_bootbank_scsi-mptsas_4.23.01.00-6vmw.510.0.0.799733, VMware_bootbank_scsi-mptspi_4.23.01.00-6vmw.510.0.0.799733, VMware_bootbank_scsi-qla2xxx_902.k1.1-9vmw.510.0.0.799733, VMware_bootbank_scsi-qla4xxx_5.01.03.2-4vmw.510.0.0.799733, VMware_bootbank_scsi-rste_2.0.2.0088-1vmw.510.0.0.799733, VMware_bootbank_uhci-usb-uhci_1.0-3vmw.510.0.0.799733
~ #

Reboot your host, and behold the glory of the new ESXi 5.1 Update 1 version!
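If you prefer the shell over screenshots, the new build can be confirmed with esxcli as well; the build number should match the one listed in KB 1014508:

~ # esxcli system version get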

[Screenshot: the host running ESXi 5.1 Update 1 (QuickDirtyUpgrade02)]

If you want to know how to find the correct URL and file, check out William Lam's excellent post A Pretty Cool Method of Upgrading to ESXi 5.1, which provides more details.

I told you it was quick and dirty, didn't I?

Adding a secondary NIC to the vCenter 5.1 Appliance (VCSA)

While building my lab environment, I ran into a situation where I wanted to have a completely sealed off networking segment that had no outside access.

This is a trivial task on its own: create a vSwitch with no physical NICs attached to it, and then connect the VMs to it. The VMs will then have interconnectivity, but no outside network access at all.
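If you would rather script it than click through the vSphere Client, the same internal-only vSwitch can be created from the ESXi shell. A sketch, where vSwitchInternal and InternalOnly are hypothetical names of my choosing:

~ # esxcli network vswitch standard add -v vSwitchInternal
~ # esxcli network vswitch standard portgroup add -v vSwitchInternal -p InternalOnly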

In this particular case, I was setting up a couple of nested ESXi servers that I wanted to connect to the “outside” vCenter Appliance (VCSA). This VCSA instance was not connected to the internal-only vSwitch, but rather to the existing vSwitch that has local network access.

Naturally, the solution would be to add a secondary NIC to the VCSA, and connect that to the internal-only vSwitch.

It turns out that adding a secondary NIC to a VCSA instance isn't as straightforward as you might think. Sure, adding a new NIC is no problem through either the vSphere Client or the vSphere Web Client, but getting the NIC configured inside the VCSA is another matter.

If you add a secondary NIC, it will show up in the VCSA management web page, but you will not be able to save the configuration, since the required configuration files for eth1 are missing.

In order to rectify this, I performed the following steps:

  1. Connect to the VCSA via SSH (the default username and password are root/vmware)
  2. Copy /etc/sysconfig/networking/devices/ifcfg-eth0 to /etc/sysconfig/networking/devices/ifcfg-eth1
  3. Edit ifcfg-eth1 and replace the networking information with your values; here is how mine looks:
    DEVICE=eth1
    BOOTPROTO='static'
    STARTMODE='auto'
    TYPE=Ethernet
    USERCONTROL='no'
    IPADDR='172.16.1.52'
    NETMASK='255.255.255.0'
    BROADCAST='172.16.1.255'
  4. Create a symlink for this file in /etc/sysconfig/network
    ln -s /etc/sysconfig/networking/devices/ifcfg-eth1 /etc/sysconfig/network/ifcfg-eth1
  5. Restart the networking service to activate the new setup:
    service network restart

    Check the VCSA web management interface to verify that the new settings are active.
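You can also confirm the result from the VCSA shell itself; a quick check, using the SLES userland that the 5.x appliance ships with:

ifconfig eth1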

[Screenshot: the VCSA web management interface showing the new eth1 configuration]

By adding a secondary NIC, configuring it, and connecting it to the isolated vSwitch, I was now able to add my sequestered nested ESXi hosts to my existing VCSA installation.

 

[Screenshot: the nested ESXi hosts added to the existing VCSA inventory]

There may be several reasons for a setup like this: perhaps you want your VCSA to be available on a management VLAN but reach ESXi hosts on another VLAN without having routing in place between the segmented networks, or perhaps you just want to play around with it like I am in this lab environment.

Disclaimer:

Is this supported by VMware? Probably not, but I simply don't know. Caveat emptor, and all that jazz.

Update February 2016:

This post was written with VCSA 5.x in mind, and has not been tested on VCSA 6.x. William Lam has posted Caveats when multi-homing the vCenter Server Appliance 6.x w/multiple vNICs with information on what caveats exist if you are looking to do this with the newer 6.x infrastructure.

Installing Dell EqualLogic Multipathing Extension Module (MEM) in vSphere 5.1

Dell offers a Multipathing Extension Module (MEM) for vSphere, and in this post I'll highlight how to “manually” install it on an ESXi 5.1 host. I will not cover the network setup part of the equation, but rather go through the simple steps required to get the MEM installed on the hosts in question.

First of all, you need to download the MEM installation package. At the time of writing, the latest version is v1.1.2.292203, available from equallogic.com. Once the archive file is acquired, unzip it and upload the dell-eql-mem-esx5-1.1.2.292203.zip file to a shared location available in your environment. For the example below, I have used a VMFS datastore that is available to all the hosts in this particular cluster.
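How you get the file onto the datastore is up to you; the datastore browser in the vSphere Client works fine, or, since SSH will be used for the install anyway, a simple scp from your workstation. A sketch, where esxi-host is a placeholder for your host's address and the path matches the one used in the install command below:

scp dell-eql-mem-esx5-1.1.2.292203.zip root@esxi-host:/vmfs/volumes/vmfsvol/dell/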

Note: The host in question has already been put in maintenance mode, to make sure no VMs are actively using the storage paths while installing the module.

The VIB file, which resides inside the dell-eql-mem-esx5-1.1.2.292203.zip file, can be installed using several techniques: the vMA, the vSphere Command-Line Interface (vSphere CLI), vSphere Update Manager, or even by logging in to the host via an SSH connection. In this case, I opted for doing it through SSH.

The command required to install the MEM is as follows:

esxcli software vib install --depot /vmfs/volumes/vmfsvol/dell/dell-eql-mem-esx5-1.1.2.292203.zip
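If you are cautious, the same command accepts a --dry-run flag, which reports what would be installed without actually changing anything on the host:

esxcli software vib install --depot /vmfs/volumes/vmfsvol/dell/dell-eql-mem-esx5-1.1.2.292203.zip --dry-run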

A completed installation looks like this:

login as: root
Using keyboard-interactive authentication.
Password:
The time and date of this login have been sent to the system logs.

VMware offers supported, powerful system administration tools. Please
see www.vmware.com/go/sysadmintools for details.

The ESXi Shell can be disabled by an administrative user. See the
vSphere Security documentation for more information.
~ # esxcli software vib install --depot /vmfs/volumes/vmfsvol/dell/dell-eql-mem-esx5-1.1.2.292203.zip
Installation Result
Message: Operation finished successfully.
Reboot Required: false
VIBs Installed: Dell_bootbank_dell-eql-host-connection-mgr_1.1.1-268843, Dell_bootbank_dell-eql-hostprofile_1.1.0-212190, Dell_bootbank_dell-eql-routed-psp_1.1.1-262227
VIBs Removed:
VIBs Skipped:
~ #
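Before restarting anything, a quick sanity check that the Dell VIBs actually landed:

~ # esxcli software vib list | grep -i dell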

I then restarted the hostd process, to make sure that the multipath module is activated.

~ # /etc/init.d/hostd restart
watchdog-hostd: Terminating watchdog process with PID 9592
hostd stopped.
hostd started.
~ #

Finally, a quick check to see if the new equallogic esxcli namespace is available, and that it is gathering statistics, i.e. that the module is actually being used:

~ # esxcli equallogic stat summary
DeviceId                         VolumeName PathCount Reads Writes KBRead KBWritten
-------------------------------- ---------- --------- ----- ------ ------ ---------
6090A098E0DC5D9F71E6940292F8569C vmvolume           2  2573     30  20429        14
6090A098D06C5A31CEDE44CC17CBF14B test2t             2   651     30  13028        14
6090A098D06C4AF067EDD4C904C6A453 vmvolume3          2   642     30  10592        14
6090A098C08D5E928EE634938F42605B vmvolume1          2  1834     30  20023        14
~ #
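Another way to confirm that the module is in play is to check which path selection policy the EqualLogic volumes ended up with; after the MEM install they should be using the Dell routed PSP rather than one of the default VMware policies. A sketch, filtering the NMP device list:

~ # esxcli storage nmp device list | grep "Path Selection Policy:"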

Screenshots displaying the ESXi host path policy before:

[Screenshot: pre-mem-install]

and after installing the Dell EqualLogic MEM:

[Screenshot: post-mem-install]