All posts by Christian Mohn

Written by Christian Mohn. Christian is the owner of vNinja.net and a co-host of the vSoup.net Virtualization Podcast.

Awesome: Nordic VMUG Conference 2014

Last year I was lucky enough to get to travel to Copenhagen and visit the Nordic VMUG conference. Sadly it doesn’t seem like I’ll be able to make it this year, but don’t let that stop you! While we in Norway are still trying to get our local VMUG up and running (more news on that very shortly), the Danish VMUG is really the driving force and the leading star for the rest of us in the Nordics.

Last year’s conference was awesome, and the 2014 edition looks no different. A quick glance at the agenda shows a bunch of familiar names:

  • Frank Denneman, PernixData
  • Kamau Wanguhu, VMware
  • Duncan Epping, VMware
  • Raymon Epping, Nutanix
  • Chris Wahl, Ahead
  • Paudie O’Riordan, VMware
  • Cormac Hogan, VMware
  • Shawn Bass, VMware
  • Hugo Phan, Atlantis Computing

The topics range from NSX and VMware vSphere futures to various storage topics, backup, security, and flash acceleration. All in all, the lineup and topics look great; it seems that once again VMUG.dk is creating their own mini-VMworld in Copenhagen. It’s held in the Bella Center, the same venue that hosted VMworld EMEA back in 2010, when everyone complained about the cold weather and high prices.

So, if you can, go register now.

I can guarantee you that you will not regret it, and I really wish I could join in this year as well. Perhaps next year, and I might even have a speaking slot then, who knows…


From the labs: Building an Ubuntu 14.04 Appliance with VMware Studio 2.6

VMware Studio 2.6 was released way back in March 2012, and surprisingly there seems to be no new update in sight. While VMware Studio technically still works, even with newer versions of ESXi and vCenter, the supported operating systems for the appliances it can build are somewhat outdated:

  • Red Hat Enterprise Linux 5.5/6.0
  • SUSE Linux Enterprise Server 10.2/11.1
  • Ubuntu 8.04.4/10.04.1
  • Windows Server 2003 R2 / 2008 R2 w/SP1

The Problem:

For a yet-to-be-announced project we are working on internally at EVRY, we needed to build an appliance based on newer software packages and development tools. Recent events like Heartbleed and Shellshock also highlight the need to build new appliances on current, supported distributions.

Attempts at upgrading an existing Ubuntu 10.04.1 appliance to 14.04 failed miserably, due to architectural changes between the Ubuntu versions and the way Virtual Appliance Management Infrastructure (VAMI) is installed by VMware Studio. In the end we were pretty much left with two options:

  1. Build the appliance from scratch, and lose VAMI, which was one of the primary reasons for building the appliance with VMware Studio in the first place.
  2. Find a way to build the appliance with Ubuntu 14.04, with VMware Studio.

Option 1 felt a bit like giving up, and option 2, well, that was a challenge we couldn’t just walk away from.

The Solution:

Thanks to the brilliant mind of my coworker Espen Ødegaard, we were able to come up with a set of solutions that do the trick.

First, we added a new OS profile to VMware Studio by connecting to the VMware Studio VM via SSH and issuing the following command:

studiocli -newos --osdesc "Ubuntu14.04" --profile /opt/vmware/etc/build/templates/ubuntu/10/041/build_profile.xml

Basically this creates a new OS template in VMware Studio by copying the existing Ubuntu 10.04.1 profile.
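If everything went well, a new template directory should now exist alongside the stock ones; its name appears to follow the --osdesc value passed above. A quick, purely illustrative check:

ls /opt/vmware/etc/build/templates/Ubuntu14.04/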

Next, edit line 462 in /opt/vmware/etc/build/templates/Ubuntu14.04/Ubuntu14.04.xsl and change scd0 to sr0. The relevant block looks like this before the change:

if ! mount -t iso9660 -r /dev/scd0 /target/${cdrom_dir} ; then \
if ! mount -t iso9660 -r /dev/hda /target/${cdrom_dir} ; then \
cdrom_mounted=0; \
fi ; \
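If you prefer to script that change instead of editing the file by hand, a sed one-liner along these lines should do the same thing (this assumes /dev/scd0 only appears in that mount line, so double-check the file first):

# Back up the template, then swap /dev/scd0 for /dev/sr0
cp /opt/vmware/etc/build/templates/Ubuntu14.04/Ubuntu14.04.xsl /opt/vmware/etc/build/templates/Ubuntu14.04/Ubuntu14.04.xsl.bak
sed -i 's|/dev/scd0|/dev/sr0|' /opt/vmware/etc/build/templates/Ubuntu14.04/Ubuntu14.04.xsl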

This fixes a problem with the appliance not being able to mount the Ubuntu installation ISO while it is being built. Place the Ubuntu 14.04 ISO in /opt/vmware/www/ISV/ISO, and create a new build profile using the VMware Studio web interface.

Now, before an appliance can be built, we need to fix a couple of other problems that prevent VAMI from starting up and block login to the VAMI web interface. First off, make sure that libncurses5 is added as a package under Application -> List of packages from OS install media. Next, add the following to the first boot script under OS -> Boot Customization to work around the remaining issues:

# Create symlinks required for Ubuntu 14.04 and VAMI

# Copy and symlink libncurses to the location VAMI looks for them

cp /lib/i386-linux-gnu/libncurses.so.5.9 /opt/vmware/lib/
rm /opt/vmware/lib/libncurses.so.5
ln -s /opt/vmware/lib/libncurses.so.5.9 /opt/vmware/lib/libncurses.so.5

# Symlink PAM libraries in order for them to work with VAMI
# This "unbreaks" authentication in the web interface
mkdir /lib/security
ln -s /lib/i386-linux-gnu/security/* /lib/security/
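Once the appliance has booted for the first time, a quick sanity check from its console should show that both paths resolve back to the i386-linux-gnu copies (pam_unix.so is just used here as an example module):

ls -l /opt/vmware/lib/libncurses.so.5
ls -l /lib/security/pam_unix.so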

For details on how all of this works, and how to create build profiles, check the official VMware Studio documentation.

The build process should now complete successfully, and you should have an Ubuntu 14.04 based appliance built with VMware Studio!

Note that all of this is completely unsupported by VMware, and you are pretty much on your own. Hopefully there will be a new version of VMware Studio available soon, and we won’t have to rely on unsupported hacks to get it working with newer operating systems.

Opinion: Is Hyperconverged the be-all, end-all? No.

First off, this is not meant to be a post negating the value of the hyperconverged solutions currently available in the market. I think hyperconverged has its place, and for many use cases it makes perfect sense to go down that route. But the idea that everyone should go hyperconverged and all data should be placed on local drives, even if made redundant inside the chassis and even between chassis, is, to be blunt, a bit silly.

Parts of this post are inspired by a recent discussion on Twitter.

I don’t believe that replacing your existing storage array with a hyperconverged solution, regardless of vendor, by moving your data off the array and onto the local disks in the cluster makes that much sense. Sure, keep your hot and fresh data set as close to the compute layer as possible, but for long-term archiving purposes? For rarely accessed, but required, data? Why would you do that? Of course, going hyperconverged would mean that you can free up some of that costly array space and leave long-term retention data on the array, but does the hyperconverged solution of choice let you do that? Does it even have FC HBAs? If not, is it cost effective to invest in it while you at the same time need to keep your “traditional” infrastructure in place to keep all that data available?

To quote Scott D. Lowe:

Any solution that uses standalone storage is not hyperconverged. With a hyperconverged solution, every time a new node is added, there is additional compute, storage, and network capability added to the cluster. Simply adding a shelf of disks to an existing storage system does not provide linear scalability and can eventually lead to resource constraints if not managed carefully.

Doesn’t that really show one of the biggest problems with a hyperconverged infrastructure? If you need to scale CPU, Memory AND Storage at the same time, it makes perfect sense. But what if you need to scale just one of those items? Individually? Why should you have to buy more CPU, and licenses, if all you wanted was to add some more storage space?

Of course, this brings the discussion right back to where it started: if you want to scale the various infrastructure components individually, then hyperconverged isn’t the right solution. But if hyperconverged isn’t the solution, and traditional “DIY” infrastructures have too many components to manage individually, then what? Sure, the Software Defined Data Center looks promising, but at the end of the day we still need hardware to run the software on. The hardware may very well be generic, but it’s still required.

Interestingly enough, a post by Scott Lowe (no, not the same one as quoted above) got me thinking about what the future might hold in this regard: Thinking About Intel Rack-Scale Architecture. To get to the point where we can manage a datacenter like a hyperconverged cluster, and still be able to scale vertically as needed, we need a completely new approach to the whole core architecture of our systems. Bundling CPU, Memory, Storage and Networking in a single manageable unit doesn’t cut it in the long run. Now that the workloads are (mostly) virtualized, it’s time to take a real hard look at how the compute nodes are constructed.

Decoupling CPU, Memory, Storage Volume, Storage Performance and Network into entirely modular units that can be plugged in and scaled individually makes a whole lot more sense. By the looks of it, Intel Rack-Scale Architecture might just be that; I guess we’ll see down the road if it’s actually doable.

The software side of things is moving fast, and honestly, I’m kind of glad that hardware isn’t moving at the same pace. At least that gives us enough breathing room to actually think about what we’re doing, or at the very least pretend that we do.