Unsupported Network Configurations in Virtual Appliances

My recent experience with setting up vCenter Operations Manager on a standalone ESXi host, and the always excellent William Lam's post Automating VCSA Network Configurations For Greenfield Deployments, got me thinking.

There are several other appliances out there that require deployment to a vCenter in order to configure the networking options instead of just defaulting to DHCP. In many, perhaps even most, cases you can work around that by running the vami_set_network command to switch from DHCP to a static network configuration.

All of that is fine and dandy, and pretty straightforward, but there is one smallish caveat:

You need root access to be able to reconfigure the networking.

Without it, you will see error messages such as these (shortened for brevity):

localhost:~> /opt/vmware/share/vami/vami_set_network eth0 STATICV4 192.168.5.98 255.255.255.0 192.168.5.1
/sbin/ifdown: line 233: /dev/.sysconfig/network/config-eth0: Permission denied
IOError: [Errno 13] Permission denied: '/opt/vmware/etc/vami/vami_ovf_info.xml'
Unable to set the network parameters

So, what if you don't know the appliance root password?

Most virtual appliances are Linux based, and in this particular case the flavor used was SUSE Linux Enterprise Server 11.

To reset the root password on a GRUB-based Linux appliance, like SUSE, follow the recipe below:

Note: As William Lam pointed out, this procedure only works if no GRUB password is set. If one is set, download a LiveCD, mount the filesystem, and run the password change from there. If the filesystem in the appliance is encrypted, well, then all bets are off.

  1. In the GRUB menu, select the kernel you want to boot and press Tab to shift focus to “Boot Options”.
  2. Now type init=/bin/bash and press Enter to continue.
  3. You will see a prompt that looks like (none):/ # in the terminal.
  4. Run the passwd command in the terminal to change the password for root.
    (none):/ # passwd
    New Password:
    Reenter New Password:
    Password changed.

Reboot the appliance, let it boot up normally, and you should now be able to log on as root with your newly configured password and run the vami_set_network command to configure static IP addressing.

localhost:~ # /opt/vmware/share/vami/vami_set_network eth0 STATICV4 192.168.5.19 255.255.255.0 192.168.5.1
eth0 device: Intel Corporation 82545EM Gigabit Ethernet Controller (Copper) (rev 01)
eth0 device: Intel Corporation 82545EM Gigabit Ethernet Controller (Copper) (rev 01)
Network parameters successfully changed to requested values
localhost:~ #

Do yet another reboot, and you should be up and running with a static IP configuration on an appliance deployed without the advanced OVF/OVA properties normally required for that kind of deployment.
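
One practical precaution: a typo in the address or gateway handed to vami_set_network can leave the appliance unreachable until you get console access again. A minimal sanity check, sketched here in plain POSIX shell (valid_ipv4 is my own helper name, not part of the VAMI tooling), can catch the worst mistakes beforehand:

```shell
# valid_ipv4 is a hypothetical helper, not part of VAMI. It checks
# that an argument is a syntactically valid dotted-quad IPv4 address
# before it gets passed to vami_set_network.
valid_ipv4() {
    ip="$1"
    # Must be four dot-separated groups of 1-3 digits...
    echo "$ip" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$' || return 1
    # ...and every octet must be in the 0-255 range.
    for octet in $(echo "$ip" | tr '.' ' '); do
        [ "$octet" -le 255 ] || return 1
    done
}

# Example guard before reconfiguring:
#   valid_ipv4 192.168.5.19 && valid_ipv4 192.168.5.1 && \
#       /opt/vmware/share/vami/vami_set_network eth0 STATICV4 \
#           192.168.5.19 255.255.255.0 192.168.5.1
```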

Note: This procedure is more than likely NOT supported by your vendor, and changing the root password might have other consequences for the appliance. If the vendor does not supply the root password in their documentation, there is likely a reason for that, but the procedure above shows that withholding it does not actually prevent anyone from changing it. USE AT YOUR OWN RISK.

Using rsync to Distribute Patches to a Remote vMA

I recently posted Using vMA as a local vSphere Patch Repository, where I outlined how you can use your vMA instances as local file repositories for updates.

This post is a continuation of that concept, but this time I’ll take it a step further and utilize rsync to make sure my vMA instances all contain the same set of patches. Rsync is great for this, as it handles fast incremental file transfers, which is a real time and bandwidth saver in my particular scenario. So, the premise is that you have one central vMA instance, and one or more remote vMA instances that should pull updates from the centrally located one.

Installing rsync in vMA

Sadly, rsync isn’t included in vMA by default. To get it installed, we need to edit some files inside of vMA. Since vMA is CentOS based, this means configuring yum repositories, and thankfully the brilliant William Lam over at virtuallyGhetto has already done the hard work for us. In his post named Automate Update Manager Operations using vSphere SDK for Perl + VIX + PowerCLI + PowerCLI VUM, William explains which files to edit to create a valid repository configuration for installing official packages directly from CentOS.

Warning: Do this at your own risk. I have not checked, but I suspect this falls under the “unsupported” tab at VMware.

Configuring YUM in vMA

These instructions are pretty much copied from William’s post, but added here for context:

Creating the repository configuration file

To create the file in the correct directory, run the following command:

[vi-admin@vma /]$ sudo vi /etc/yum.repos.d/centos-base.repo
Password:

Add the following lines to the repository file:

[base]
name=CentOS-5 - Base
baseurl=http://mirror.centos.org/centos/5/os/x86_64/
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5
 
[update]
name=CentOS-5 - Updates
baseurl=http://mirror.centos.org/centos/5/updates/x86_64/
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5

Exit the vi editor by hitting Esc, typing :wq, and pressing Enter. That saves the file and exits the editor.
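
If you'd rather skip the interactive editing, the same file can be generated with a heredoc and written through sudo tee. This is just a sketch; gen_centos_repo is a name I made up for the occasion:

```shell
# gen_centos_repo is a hypothetical helper that prints the same
# repository definition shown above to stdout.
gen_centos_repo() {
cat <<'EOF'
[base]
name=CentOS-5 - Base
baseurl=http://mirror.centos.org/centos/5/os/x86_64/
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5

[update]
name=CentOS-5 - Updates
baseurl=http://mirror.centos.org/centos/5/updates/x86_64/
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5
EOF
}

# /etc/yum.repos.d/ is root-owned, so write the file via sudo tee:
#   gen_centos_repo | sudo tee /etc/yum.repos.d/centos-base.repo
```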

Installing rsync via yum

Now comes the easy part, actually installing rsync inside vMA. All you have to do is enter the following command:

[vi-admin@vma /]$ sudo yum -y install rsync

The installation starts, and you should see output similar to the following:

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Setting up Install Process
Parsing package install arguments
Resolving Dependencies
--> Running transaction check
---> Package rsync.x86_64 0:2.6.8-3.1 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================
 Package           Arch               Version                Repository        Size
====================================================================================
Installing:
 rsync             x86_64             2.6.8-3.1              base             235 k

Transaction Summary
====================================================================================
Install      1 Package(s)
Update       0 Package(s)
Remove       0 Package(s)

Total download size: 235 k
Downloading Packages:
rsync-2.6.8-3.1.x86_64.rp 100% |=========================| 235 kB    00:01
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing     : rsync                                             [1/1]

Installed: rsync.x86_64 0:2.6.8-3.1
Complete!
[vi-admin@vma /]$

And there it is, rsync installed inside vMA!

Configuring rsync to fetch updates from the central vMA

Now that we have rsync installed inside vMA, we need to configure it to fetch the updates from a central vMA instance. Rsync needs to be installed on both ends of the pipe, so if you haven’t already done so, configure your “master vMA” the same way as described above.

Now that “both ends” of the pipe have rsync installed, we can run it from the “client vMA” to pull down all the files currently in the repository on the “master vMA”.

[vi-admin@vma /]$ sudo rsync -r vi-admin@192.168.5.57:/var/www/html/repo/* /var/www/html/repo
The authenticity of host '192.168.5.57 (192.168.5.57)' can't be established.
RSA key fingerprint is 3f:af:7c:53:5a:15:47:56:b8:25:77:79:14:b2:f5:2f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.5.57' (RSA) to the list of known hosts.
vi-admin@192.168.5.57's password:
[vi-admin@vma /]$

The command runs for a while, and when it finishes you should see that the current contents of the “master vMA” repository are now located in the “client vMA” repository as well:

[vi-admin@vma repo]$ ls -la
total 210988
drwxr-xr-x 2 root root      4096 Mar  8 18:51 .
drwxr-xr-x 4 root root      4096 Mar  8 18:50 ..
-rw-r--r-- 1 root root         0 Mar  8 18:51 testfile
-rw-r--r-- 1 root root         0 Mar  8 18:51 testfile1
-rw-r--r-- 1 root root         0 Mar  8 18:51 testfile2
-rw-r--r-- 1 root root         0 Mar  8 18:51 testfile3
-rw-r--r-- 1 root root 215820281 Jan 27 19:15 update-from-esxi4.1-4.1_update01.zip
[vi-admin@vma repo]$
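
If you run this pull regularly, it helps to keep the invocation in one place. The sketch below only builds the command string (build_pull_cmd is my own helper name, and the host and paths are the examples from this post, assuming the default vi-admin account on the master), so the actual run with sudo stays a deliberate step:

```shell
# build_pull_cmd is a hypothetical helper: it prints the rsync
# invocation used to pull the patch repository from a master vMA.
build_pull_cmd() {
    master="$1"
    src="/var/www/html/repo/"
    dst="/var/www/html/repo"
    echo "rsync -r vi-admin@${master}:${src} ${dst}"
}

# Run the sync (prompts for the remote password):
#   sudo $(build_pull_cmd 192.168.5.57)
```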

Conclusion

There is a lot more you can do with rsync, like replicating files both ways, controlling bandwidth usage, and using SSH keys to avoid username/password prompts (required if you want to fully automate this process). I will not cover that, at least not right now, so head over to the rsync site and read the documentation for more advanced use cases.

Even if I’ve barely touched the features rsync provides, it is clear that this is a way for admins to centrally manage distribution of vSphere patches to remote locations, even when bandwidth is low and latency is high. Rsync gives us a way to overcome the patching issues you might see in poorly networked environments, and it can certainly help vAdmins keep their environments patched and current, and that has to be a good thing™

Using vMA as a local vSphere Patch Repository

I like using HTTP as the transport protocol when patching my vSphere hosts. It’s easy to use and in most cases immediately available over most networks. Since I want to use HTTP as the transport, we need to make vMA work as an HTTP server.

Starting Apache inside vMA

Luckily, the Apache HTTP daemon is installed by default in vMA, and to utilize it all you have to do is start it!

Log on to vMA with your favorite SSH client and run the following command to start the Apache HTTP Daemon:

[vi-admin@vma /]$ sudo service httpd start
Starting httpd: httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for  ServerName                   [ OK ]
[vi-admin@vma /]$

Never mind the message it displays; for our purposes that’s not an issue and we can safely ignore it.

By default the files served by Apache are located in /var/www/html, so we’ll head over there to create a new directory.

[vi-admin@vma /]$ cd /var/www/html/
[vi-admin@vma html]$ sudo mkdir repo

We’ve now created the repo directory inside the Apache docroot. Now we need to add some patches to that directory, making them available to the vihostupdate or esxupdate command we can use to patch our hosts.

In my lab, I used the update-from-esxi4.1-4.1_update01 patch bundle from vmware.com

To download the patch into the new repo directory created above, run the following commands:

[vi-admin@vma html]$ cd /var/www/html/repo/
[vi-admin@vma repo]$ sudo wget https://hostupdate.vmware.com/software/VUM/OFFLINE/release-260-20110127-912579/update-from-esxi4.1-4.1_update01.zip
Password:
--15:34:32--  https://hostupdate.vmware.com/software/VUM/OFFLINE/release-260-20110127-912579/update-from-esxi4.1-4.1_update01.zip
	Resolving hostupdate.vmware.com... 88.221.164.7
	Connecting to hostupdate.vmware.com|88.221.164.7|:443... connected.
	HTTP request sent, awaiting response... 200 OK
	Length: 215820281 (206M) [application/zip]
	Saving to: `update-from-esxi4.1-4.1_update01.zip'
100%[===============================================================================================================>] 

215,820,281  919K/s   in 3m 54s
15:38:26 (901 KB/s) - `update-from-esxi4.1-4.1_update01.zip' saved [215820281/215820281]

This downloads the patch bundle, using the wget command, to the current directory.
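
Since the bundle is large and downloads over slow links can be flaky, it's worth verifying the file before serving it. Here's a small sketch (verify_md5 is my own helper name; the checksum to compare against is the one VMware publishes on the download page):

```shell
# verify_md5 is a hypothetical helper: it succeeds only if the file's
# md5sum matches the expected checksum.
verify_md5() {
    file="$1"
    expected="$2"
    actual=$(md5sum "$file" | awk '{print $1}')
    [ "$actual" = "$expected" ]
}

# Example (substitute the checksum published on VMware's download page):
#   verify_md5 update-from-esxi4.1-4.1_update01.zip <published-md5sum>
```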

Now, to make sure your downloaded patch bundle is available via the web server, open http://vMA-IP/repo/ in a browser and you should see the directory contents listed.

Before patching a host, power off or migrate any virtual machines that are running on the host and place the host into maintenance mode.

Scan host for update compatibility

[vi-admin@vma repo]$ vihostupdate --server 10.0.100.20 --scan --bundle http://10.0.101.14/repo/update-from-esxi4.1-4.1_update01.zip
Enter username: root
Enter password:
The bulletins which apply to but are not yet installed on this ESX host are listed.

---------Bulletin ID---------   ----------------Summary-----------------
ESXi410-201101201-SG            Updates the ESXi 4.1 firmware
ESXi410-201101202-UG            Updates the ESXi  4.1 VMware Tools
ESXi410-201101223-UG            3w-9xxx: scsi driver for VMware ESXi
ESXi410-201101224-UG            vxge: net driver for VMware ESXi
ESXi410-Update01                VMware ESXi 4.1 Complete Update 1

Install updates to host

[vi-admin@vma repo]$ vihostupdate --server 10.0.100.20 --install --bundle http://10.0.101.14/repo/update-from-esxi4.1-4.1_update01.zip
Enter username: root
Enter password:
Please wait patch installation is in progress ...
The update completed successfully, but the system needs to be rebooted for the changes to be effective.
[vi-admin@vma repo]$

While the update runs, you can also follow its progress in the vSphere Client.


When the patching has completed and the host has been rebooted, you can run the scan command again to verify that all of the patches are installed and no longer listed as required.

Management

Now, downloading the patches this way for each vMA instance you have (especially if you have several remote sites) is not very efficient. Sure, you could place one repository at a central site and use that as your update source, but in that scenario you might as well just use VMware vCenter Update Manager and not manage your updates via vMA at all.

In some cases though, you would want to have the remote hosts install their updates from a local repository. One such case might be if you have remote locations with low bandwidth/high latency links that you don’t want to stress with the update downloads.

I’m investigating a possible solution for that as well, and I’ll post it as soon as I have a working proof of concept up and running.

Another thing to note is that when you restart vMA, the http service will be stopped again. If you want it to autostart each time vMA boots, issue the following command:

[vi-admin@vma repo]$ sudo ntsysv
Password:

This brings up a screen where you can choose which daemons should start at boot time inside of vMA.

Find httpd, select it, and hit the OK button. The next time vMA boots, the Apache web server starts with it. (Alternatively, running sudo chkconfig httpd on from the command line achieves the same thing.)