VMworld Europe 2016 – My takeaways

VMworld Europe 2016 in Barcelona is a couple of weeks old now, and most of the dust has settled. Besides the general announcements around vSphere 6.5 and surrounding products, the next big thing might just be Cross-Cloud Architecture and, of course, VMware Cloud on AWS. The announcements around vSAN 6.5 (yes, it is now vSAN, not Virtual SAN/VSAN, anymore) are also very interesting. Perhaps it’s time I revisit my earlier “VMware VSAN; More than meets the eye” post and update it for vSAN 6.5?

What really stands out, after having had time to digest it all, is how VMware and VMworld felt energetic again. The keynotes were good, especially on day 1; that keynote is probably the best VMware keynote I’ve ever seen. Everything VMware has been talking about for years, perhaps without actually being able to get the message clearly across to everyone, seems to click neatly into place now. There is a vision now, a vision you can actually relate to and believe in. Even the tagline, be_tomorrow, makes more sense now.

I don’t know if anyone else has noticed, but it feels like something has changed internally at VMware in the last year and a half or so. There seems to be a new drive, a clearer focus. To be frank, it feels fun again, which it really hasn’t for the last couple of years.


As per usual, my biggest takeaway from attending VMworld is the networking and talking to real-life people, the same people I “talk” to virtually all the time. I even met quite a few new people this year, and that’s always awesome!

My Session highlights:

I attended a few sessions too, and would like to highlight two of them: one on VMware Validated Designs and one on VMware Cloud on AWS.

Both these sessions were awesome. If you work as an architect and haven’t had a look at VMware Validated Designs yet, drop what you’re doing and go have a look. Right now.

VMware Cloud on AWS was a little light on details (naturally, since it’s not even released or available yet), but for now the session gave a really good overview of what it is and, perhaps more crucially, what it isn’t.

Other highlights:

As a VMUG Leader I attended the VMUG Leader Lunch, which had an awesome Q&A session with Pat Gelsinger and Joe Baguley. That session should have been recorded, too.


I met up with Ed and Chris; all three hosts of vSoup were finally in the same city at the same time, for the first time since 2011! We recorded a quick vSoup Podcast, and even got Emad Younis as a surprise guest. That recording is still unreleased, but hopefully we can get the audio cleaned up and published pretty soon.

Overall

VMworld 2016 has left me happy. Happy with the direction VMware is going, happy with the event, and really happy I wore that shirt for the vRockstar party. As a side note, my Fitbit logged 108,427 steps while I was in Barcelona; not too bad for under a week’s worth of conference.

Now, can someone tell me where VMworld Europe 2017 will be held?

#vDM30in30 progress:
[progressbar_circle percent=3.33]

vCenter Server Appliance Backups

For some time now I’ve been advocating the use of the VCSA instead of the traditional Microsoft Windows-based vCenter. It now has feature parity with the Windows version, it’s easier to deploy, comes right-sized out of the box, and eliminates the need for an external Microsoft SQL Server.

One of the questions I often face when talking about the appliance is how to handle backups. Most customers are comfortable with backing up Windows servers and Microsoft SQL, but quite a few have reservations when it comes to the integrated vPostgres database that the VCSA employs. One common misconception is that a VCSA backup is only crash-consistent. Thankfully, vPostgres takes care of this on its own, using what Postgres calls Continuous Archiving and Point-in-Time Recovery (PITR).

In essence, vPostgres writes everything to a log file before it is applied to the database, in case of a system crash. Since this is done continuously, every transaction that should hit the DB is recorded and can be replayed if required. From the Postgres documentation:

“We do not need a perfectly consistent file system backup as the starting point. Any internal inconsistency in the backup will be corrected by log replay (this is not significantly different from what happens during crash recovery). So we do not need a file system snapshot capability, just tar or a similar archiving tool.”
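For the curious, here’s roughly what continuous archiving looks like in a stock PostgreSQL postgresql.conf. This is purely illustrative, using standard Postgres parameters; the exact values the VCSA ships with may differ, and since the appliance comes preconfigured there’s nothing you need to (or should) change here:

```
# Illustrative continuous-archiving settings (stock PostgreSQL;
# the VCSA's bundled vPostgres configuration may differ)
wal_level = archive                 # 'replica' on PostgreSQL 9.6 and later
archive_mode = on
# %p is the path to the WAL segment, %f its file name
archive_command = 'cp %p /var/pg_archive/%f'
```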

For the VCSA, this means that your image-level backups will be consistent, and there isn’t really a need to dump and export the vPostgres DB and archive that separately. Yet another reason to switch to the appliance today!

Myth busted!

Relax and virtualize it!

This is a guest post from Kristian Wæraas, Senior Consultant Datacenter at Datametrix AS.
VMware VCP3/5, MCTS Hyper-V, Horizon View and Trend Micro Security Expert.

I am curious by nature, and when my colleagues start talking frantically about some system that has crashed, I get curious and have to ask questions. Usually this ends with me doing a lot of work.

– This, however, was not one of those times.

A few weeks ago, one of my colleagues came in late after a long night of trying to fix a recurring bluescreen on a critical customer database server. Looking quite drawn, he sat down, picked up his phone and called Microsoft Support. I have to admit I did some eavesdropping on that conversation, as it contained a few interesting tidbits that aroused my curiosity.

-“Physical server” (We still use those?)

-“Database on FC SAN”.

-“Critical data!” (Oh my!)

The minutes went by and turned into hours as they kept trying to fix the server. Diagnostics, rescue disks, the recovery console, driver reinstalls, system file checks, fixing the MBR and so on; the server still refused to cooperate. At some point Microsoft gave up on fixing it and asked if we could just reinstall, which in this case would take even more hours.

By lunchtime they had taken a break, and I started asking my colleague questions; not about the bluescreen and possible fixes, but about the basic layout of the system. It turned out to be an old physical server running Windows Server 2008 R2, with an Oracle database installed and the database files placed on a SAN LUN mounted directly into the server via FC; a normal setup for database servers, I guess. We had a little chat about possible solutions during lunch, and my colleague’s first thought was actually to find an identical physical server, install it in parallel with the faulty one, and then physically move the fiber cable from the unstable server to the new one. I, of course, asked if we could virtualize the server instead.

My colleague thought the idea was intriguing but, not knowing all the details of what VMware can do, he had many questions.

-“How will it perform, how do we get the database files copied into the server, how long will it take to get a server ready, we need at least 2 CPUs and 8 GB of RAM, will there be cake?”

I explained to him that, performance-wise, the virtual server would do just fine, and that we could give it as many resources as it needed. As for getting a server up and running, I suggested using our already prepared templates, which would take no more than a few seconds to deploy. Also, and this was my key point in this solution, the file copy is unnecessary:

-“You don’t have to copy the files from the SAN into the server; you can just do the zoning on the FC switches and attach the LUN directly as a raw disk on the virtual server. The disks will then appear inside the OS just like you’re used to.”

-“Is all this possible? How do we do it? If it is as easy as you say, this would save us hours of work!”

Having done similar setups before, I was quite confident. However, “saving” a physical server with real critical data from humiliation by moving its LUN into a virtual server was new to me, so I sent a quick tweet to my good friend Christian Mohn (vNinja extraordinaire) to run my theory by him. We both agreed that the theory was spot on, but neither of us had done this exact job before.

Wary of data loss, data corruption, and the procedure as a whole, I agreed to do some tests to see if my theory was viable in our situation. We started with a basic SAN backup of the datastore, and then did the necessary zoning by adding the backup LUN to the VMware host zone group. After a quick storage rescan on the hosts, the new LUNs became available.

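As an aside, that rescan step can also be scripted. Here’s a minimal sketch using pyVmomi, VMware’s official Python SDK; the vCenter hostname and credentials are placeholders, and error handling is omitted:

```python
# Minimal pyVmomi sketch: rescan all HBAs on each ESXi host so a newly
# zoned LUN shows up, then list the SCSI devices each host can see.
# The vCenter hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
si = SmartConnect(host="vcsa.example.com", user="administrator@vsphere.local",
                  pwd="...", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    storage = host.configManager.storageSystem
    storage.RescanAllHba()  # the API equivalent of a storage rescan in the client
    for lun in storage.storageDeviceInfo.scsiLun:
        print(host.name, lun.canonicalName, lun.displayName)
Disconnect(si)
```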

The next step was to add a new disk to the test server, choosing Raw Device Mappings (physical compatibility mode) and selecting the correct LUN ID.


When all this was done, we logged into the test server, went into Disk Management and did a “Rescan Disks”. The disk appeared, drive letter and all.


After verifying that the data was there and that everything looked good, we felt confident the approach worked, and we repeated the entire process with the “live” data.
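For anyone who would rather script the RDM attachment than click through the wizard, here is a rough sketch of the same steps with pyVmomi, modeled on the community samples. The VM name, LUN device path, disk size, and unit number are all placeholders, and the sketch is illustrative rather than battle-tested:

```python
# Rough pyVmomi sketch: attach an existing SAN LUN to a VM as a
# physical-mode RDM. All names/values below are placeholders, and
# task/error handling is omitted for brevity.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
si = SmartConnect(host="vcsa.example.com", user="administrator@vsphere.local",
                  pwd="...", sslContext=ctx)
content = si.RetrieveContent()

# Locate the target VM by name (placeholder name)
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "db-server-01")

# Reuse the VM's existing SCSI controller
ctrl = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualSCSIController))

# Physical compatibility mode passes SCSI commands straight through,
# so the guest sees the LUN exactly as the old physical server did
backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
backing.deviceName = "/vmfs/devices/disks/naa.60060160..."  # placeholder LUN path
backing.compatibilityMode = "physicalMode"
backing.diskMode = ""  # not applicable in physical mode

disk = vim.vm.device.VirtualDisk()
disk.backing = backing
disk.controllerKey = ctrl.key
disk.unitNumber = 1  # any free unit on the controller (7 is reserved)
disk.capacityInKB = 100 * 1024 * 1024  # hypothetical 100 GB; match the LUN size

spec = vim.vm.device.VirtualDeviceSpec()
spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
spec.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
spec.device = disk

# The RDM mapping file is created alongside the VM's other files
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[spec]))
Disconnect(si)
```

Either way, the end result is the same: the guest sees the raw LUN exactly as the physical server did, drive letter and all.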

I always get a satisfied feeling inside when I am able to help a colleague solve an annoying issue. In this case, my actual work took no time at all, and I also managed to open the eyes of my colleague, who is now planning more P2V migrations. The customer was happy too, which in the end is what really matters.

I think the moral of the story is that “knowledge is power”. If you know what different solutions/products are capable of and you know how to use them correctly, you will be able to solve most problems quite quickly.

And yes, there was cake!