Exporting vCenter Events with PowerCLI

One of my clients has recently been having issues with their storage solution, and wanted to export the vCenter events showing storage performance degradation to aid in troubleshooting with the vendor.

For some reason, and I have yet to confirm whether this is a bug in the vCenter 5.0 appliance or in the vCenter Desktop Client, the storage-related events are left out when an event export is done from the client.

Thankfully, we were able to get the event details exported using the following PowerCLI one-liner:

PowerCLI C:\log> Get-VIEvent -Start "19/11/2012" -Finish "30/11/2012" | Export-Csv "events.csv" -NoTypeInformation -UseCulture

This generates an events.csv file in the current directory, containing all the events in the given timeframe from the vCenter it is connected to. And yes, the storage-related events missing from the vCenter Desktop Client export are indeed included in the file generated by the Get-VIEvent cmdlet.
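For a larger environment, a slightly expanded sketch may be useful (the vCenter hostname is a placeholder, and the date strings assume a day/month culture; note that Get-VIEvent returns at most 100 events unless -MaxSamples is raised):

```powershell
# Connect to the vCenter Server first (hypothetical hostname).
Connect-VIServer -Server vcenter.example.local

# Quote the dates so they are passed as strings and converted using the
# local culture; adjust the format to match your locale.
# Raise -MaxSamples, or Get-VIEvent stops after the default 100 events.
Get-VIEvent -Start "19/11/2012" -Finish "30/11/2012" -MaxSamples ([int]::MaxValue) |
    Export-Csv "events.csv" -NoTypeInformation -UseCulture
```

Since -UseCulture writes the CSV with your local list separator, the file opens cleanly in Excel on the same machine.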

Once again, PowerCLI to the rescue!

vSphere Web or Desktop Client – Who's Your Daddy?

At the moment, VMware vSphere offers two different management clients, the vSphere Web Client and the vSphere Desktop Client.

The feature comparison table looks like this:
(Copied from “Which vSphere client should I use and when?“)

vSphere Web Client Only
  • vCenter Single Sign-On
    • Authentication
    • Administration
  • Navigation with Inventory Lists
  • Inventory Tagging
  • Work In Progress (Pause)
  • Pre-emptive Searching
  • Save Searches
  • Enhanced read performance utilizing the Inventory Service
  • vSphere Replication (not SRM)
  • Virtual Infrastructure Navigator
  • Enhanced vMotion (no shared storage)
  • Integration with vCenter Orchestrator (vCO) Workflows (Extended Menus)
  • Virtual Distributed Switch (vDS)
    • Health Check
    • Export/Restore Configuration
    • Diagram filtering
  • Log Browser Plugin
  • vSphere Data Protection (VDP)
  • VXLAN Networking

vSphere Desktop Client Only
  • VMware Desktop Plug-ins (VUM, SRM, etc)
  • 3rd Party Desktop Plugins (various)
  • Ability to change Guest OS on an existing virtual machine
  • vCenter Server Maps
  • Create and edit custom attributes
  • Connect direct to a vSphere host
  • Inflate thin disk option found in the Datastore Browser


In plain words, this means that all new features in vSphere 5.1 are Web Client only, and older “legacy” features and plugin integrations are Desktop Client only at this point.

This will change over time as VMware products like VUM and SRM are moved into their new home in the Web Client, and as 3rd party vendors integrate their plugins with the new client.

There is no doubt that the vSphere Web Client is where the future lies, but in the interim vAdmins are forced to use both clients to get at all the available functionality, which is obviously far from ideal.

I'm sure VMware will get where they want with the vSphere Web Client in the end; changing platforms like this is a big task, especially when you consider that third parties need to be on their toes and upgrade their integrations as well.

Having two clients for management is not fun, but it does beat having no management at all.

So in short, your daddy? They both are. While they might be separated, the divorce has not been finalized just yet.

Testing VMware vSphere 5 Swap to Host Cache

A little while ago I fitted a small 64GB SSD disk to my HP MicroServer just to have a quick look at the new vSphere 5 feature Swap to Host Cache, where vSphere 5 reclaims memory by storing the swapped out pages in the host cache on a solid-state drive. Naturally, this is a lot faster than swapping to non-SSD storage, but you will still see a performance hit when this happens. For more details on Swap to Host Cache, have a look at Swap to host cache aka swap to SSD? by Duncan Epping.

Now, in my minuscule home lab setting it's somewhat hard to get real, tangible performance metrics, so my little experiment is non-scientific and only meant to illustrate how swap to host cache gone wild would look in a real-world environment.

After installing the SSD drive, and configuring Swap to Host Cache, I created two VMs ingeniously called hostcacheA and hostcacheB. Both were configured with 14GB of memory, which should nicely overload my host that has a whopping 8GB of memory in total.

Now, with memory features like ballooning, transparent page sharing, and memory compression in play, I needed to make sure that the memory was actually used, and that it contained different datasets, so that the host cache would actually kick in.

To make sure of this, I downloaded the latest ISO version of Memtest86+ and connected it to the empty VMs.
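Creating and booting the two test VMs can be scripted with PowerCLI as well; a minimal sketch, assuming PowerCLI 5.1 or later for -MemoryGB (the host, datastore, and ISO path are placeholders for your own environment):

```powershell
# Create two diskless test VMs with 14GB of memory each
# (hypothetical host and datastore names).
foreach ($name in "hostcacheA", "hostcacheB") {
    $vm = New-VM -Name $name -VMHost "microserver.local" `
                 -Datastore "datastore1" -MemoryGB 14

    # Attach the Memtest86+ ISO so the VM boots straight into the
    # memory test (hypothetical ISO path on the datastore).
    New-CDDrive -VM $vm -IsoPath "[datastore1] iso/memtest86plus.iso" `
                -StartConnected | Out-Null

    Start-VM -VM $vm
}
```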

When starting the VMs, they immediately started testing their available memory and sure enough, they started eating into the host cache.

As you can see from the screenshot below, the longer the memtest ran the more host cache was utilized.
Bonus points for figuring out when the test VMs were shut down…

So there it is: performance graphs showing that the host cache is indeed kicking in and getting a run for its money. Since this was a non-scientific experiment, I don't have any real performance counters or metrics to base any sort of conclusion on. All I was after was to see if it came alive, and clearly it did.
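If you do want raw numbers rather than just the graphs, the host cache usage can be pulled with Get-Stat; a sketch, assuming the mem.llSwapUsed.average counter (the "low-level swap" metric that vSphere 5 reports for swap to host cache) is being collected for your host, whose name here is a placeholder:

```powershell
# Pull host cache swap usage for the last 24 hours
# (hypothetical host name).
Get-Stat -Entity (Get-VMHost "microserver.local") `
         -Stat "mem.llSwapUsed.average" `
         -Start (Get-Date).AddDays(-1) |
    Select-Object Timestamp, Value, Unit
```

Non-zero values in the output correspond to the periods in the graphs where the memtest VMs were eating into the host cache.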