As we all know by now, PernixData was gobbled up by Nutanix a while back, and since then there has been nothing but silence on the future of the FVP and Architect products. Now it seems it’s over. The acquisition triggered a bunch of PernixData employees moving elsewhere, and now it’s the products’ time to move on as well.
As a part of my Homelab project, I’ve created a proper bash script to provide dynamic DNS updates for external resources via CloudFlare. More details on the reasoning behind it can be found in Using CloudFlare for Dynamic DNS, but since that was posted I’ve fleshed the script out quite a bit more.
In my previous post, I tried to lay out the foundation and reasoning behind requiring a Dynamic DNS Service, and here is how I solved it using CloudFlare.
First of all, I moved my chosen domain name to CloudFlare, and made sure everything resolved ok with static records. Once that was working, I started playing around with the CloudFlare API, using Cocoa Rest Client. I’m no developer (as is probably very apparent by the script below), nor API wizard of any kind, but it was fairly easy figuring out how to craft a request that lists my DNS zone.
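A request along those lines can be sketched like this. The CloudFlare v4 «list zones» endpoint is the real one; the zone name and the credentials in the comment are placeholders, not my actual values:

```shell
# Sketch of the CloudFlare v4 "list zones" request.
# The endpoint is real; the zone name and credentials are placeholders.
CF_API="https://api.cloudflare.com/client/v4"

list_zone_url() {
  # $1 = zone (domain) name
  printf '%s/zones?name=%s\n' "$CF_API" "$1"
}

# The actual request adds the auth headers, e.g.:
#   curl -s "$(list_zone_url example.com)" \
#     -H 'X-Auth-Email: you@example.com' \
#     -H 'X-Auth-Key: <api-key>' \
#     -H 'Content-Type: application/json'

list_zone_url example.com
```

From the JSON that comes back you can pick out the zone ID, which the rest of the script needs for the record-update calls.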
While working on my new Homelab setup, I’ve been investigating ways to provide hostname-based access to several web services located in my DMZ. Since my ISP doesn’t offer static IP addresses, I also need an external Dynamic DNS service to provide said hostname mappings through the reverse proxy on the inside.
There are loads of Dynamic DNS services available; most of them let you use some sort of predefined domain name scheme and point it to your external IP, but I wanted to use a domain name that I own and control. Since I use CloudFlare to provide DNS services (amongst other things) for this very site, it was a natural choice to see if they could fit the bill for my lab needs as well. Turns out, not only can they provide the services I need for free, they also allow me to play around and have fun at the same time!
Way back in 2013, I published Preserve your Veeam B&R Backups Jobs when Moving vCenter, outlining how to «cheat» (by using a CNAME alias) to preserve your Veeam Backup & Replication jobs if you replace your VMware vCenter.
Naturally, when there is a new vCenter instance, all the Virtual Machine Managed Object References (MoRefs) change, which makes Veeam Backup & Replication start a new backup/replication chain, since all VMs are treated as new ones. Not ideal by any means, but at least you wouldn’t have to recreate all your jobs.
A few days ago I decided to go full-on mad scientist in documenting my new home lab / network setup, and I’ve even created a GitHub repository for it. The idea is to create a framework for developing this kind of documentation, heavily influenced by the VCDX methodology and framework. Over time, Conceptual, Logical and Physical designs will be added, as well as configuration settings and operational procedures. Hopefully it’ll also contain some useful diagrams.
While I was away on a two week holiday on Croatia’s sunny Makarska Riviera, Eric Siebert announced the result of his annual Top vBlog, and much to my surprise vNinja did quite the jump from last year’s 46th spot to this year’s 27th! Honestly, I thought the site would drop out of the top 50 list this year, but once again I’m proven to be mistaken. Sometimes being wrong is just great!
A little while ago William Lam published a little Python script called extract_vsphere_deployment_topology.py that basically lets you export your current vSphere PSC topology as a DOT (graph description language) file. Great stuff, and useful as-is, especially if you run it through webgraphviz.com as William suggests.
The thing is, you might want to edit the topology map, change colours and fonts, and even move the boxes around after you get the output. And if you have a large environment, you might want to combine all your PSC topologies into a single document. It turns out that’s pretty easy to do!
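One way to do it: strip each file’s outer digraph wrapper and concatenate the bodies inside a new digraph. A minimal sketch, assuming each DOT file has its `digraph … {` header on the first line and the closing `}` on the last (file names and the combined graph name are made up):

```shell
# Minimal sketch: merge several DOT digraph files into one by dropping
# each file's outer "digraph ... {" header and closing "}" and wrapping
# the concatenated bodies in a new digraph.
merge_dot() {
  echo 'digraph combined_psc_topology {'
  for f in "$@"; do
    sed '1d;$d' "$f"   # drop first line (header) and last line (closing brace)
  done
  echo '}'
}

# usage: merge_dot site-a.dot site-b.dot > combined.dot
```

The combined file can then go straight back into webgraphviz.com, or into a proper Graphviz editor for the colour and font tweaking.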
Way back in late 2014 I volunteered to do technical review for a book called **IT Architect: Foundation in the Art of Infrastructure Design: A Practical Guide for IT Architects**. Due to a lot of unforeseen events, the book has been delayed quite a bit, but it’s finally available as hardcover, paperback and eBook! The book is written by John Yani Arrasjid, VCDX-001, Mark Gabryjelski, VCDX-023, and Chris McCain, VCDX-079, and as the title states it really does lay out the foundation of how to approach infrastructure design in a modern virtualised data center.
PernixData, and Frank Denneman, have released vSphere Design Pocketbook v3. As the title reads, this is the third time PernixData has released one of these books, and I’m honored to be selected amongst the contributors for the second time, this time with a chapter called «VCSA vs Windows vCenter – Which One Do I Choose, and Why?»
Go grab your electronic copy now, and be sure to bug your local PernixData representative for a hard-copy later. I know I will.
Yesterday was my first real day as a Senior Solutions Architect for Proact, and today I flew to Oslo for on-boarding and some face-to-face time with my new colleagues over there. By the looks of it, there are a lot of exciting things in the pipeline, and if we land the things we have started on, this should be interesting. Very interesting indeed. In addition to the excitement around changing employers and roles, some other things have also happened.
I think Seth Godin might have been onto something with «Make something happen», so I did.
Today was my last day at EVRY. Some might already have been aware of this, mostly because of Hoff-Job-Announcement-as-a-Service, but also because of my own tweet as I left the EVRY offices in Bergen as an employee for the last time:
For some time now I’ve been advocating the usage of VCSA instead of the traditional Microsoft Windows based vCenter. It has feature parity with the Windows version now, it’s easier to deploy, gets right-sized out of the box and eliminates the need for an external Microsoft SQL server.
One of the questions I often face when talking about the appliance is _how do we handle backups?_ Most customers are comfortable with backing up Windows servers and Microsoft SQL, but quite a few have reservations when it comes to the integrated vPostgres database that the VCSA employs. One common misconception is that a VCSA backup is only crash-consistent. Thankfully vPostgres takes care of this on its own, by using what it calls Continuous Archiving and Point-in-Time Recovery (PITR).
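For background, continuous archiving in stock PostgreSQL is driven by a handful of settings in postgresql.conf. The VCSA wraps vPostgres in its own tooling, so treat this purely as an illustration of the underlying mechanism, not the appliance’s actual configuration:

```ini
# postgresql.conf fragment (stock PostgreSQL, illustrative only;
# the archive path is made up and this is not the VCSA's own config)
wal_level = archive                               # write enough WAL for archiving
archive_mode = on                                 # enable continuous archiving
archive_command = 'cp %p /backup/wal_archive/%f'  # %p = WAL file path, %f = file name
```

With the WAL segments continuously archived like this, a restore can replay them on top of a base backup up to any chosen point in time, which is what makes the backup more than merely crash-consistent.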
Dockerflix is a nice little project that allows you to route your Netflix (and various other streaming services) traffic through an SNI proxy to access content that is otherwise geo-blocked. Of course, this requires that you have a VM with, for instance, a US IP to provide the breakout network, and that’s where Ravello Systems comes into the equation. Luckily, as a current vExpert I have access to 1000 free monthly CPU hours of personal/lab usage, all with a choice of regions to put the VM in. Perfect.
Yesterday I saw this tweet from Stephen Foskett:
> Dear @YourDailyTechUS, You appear to rip off whole articles from a wide variety of sources. Is your business model based on plagiarism?
>
> — Stephen Foskett (@SFoskett) December 2, 2015
Which spurred a discussion back and forth, with a few rather interesting statements from yourdailytech.com, like this one:
Way back in 2014 I wrote a piece called VSAN – The Unspoken Future, and I think it’s about time it got a revision. Of course, lots of things have happened to VSAN since then and even more is on the way, but I think there is more to this than adding features like erasure coding, deduplication and compression. All of these are important features, and frankly they need to be in a product that aims a lot higher than you might think.
I am curious by nature, and when my colleagues start talking frantically about some system that has crashed, I just have to ask questions. Usually this ends up in me doing a lot of work.
This, however, was not one of those times.
As a few of you have noticed, I recently changed my title on LinkedIn from Chief Consultant to Cloud Architect in the newly formed EVRY Cloud Consulting division, but what does that mean and perhaps more importantly, why?
The closest description I have found of what my new role entails is this:
During an upgrade from vSphere 5.1 to 5.5, I ran into a rather strange issue when trying to utilize VMware Update Manager to perform the ESXi upgrade.
During scanning, VUM reported the ESXi host as «Incompatible», without offering any other explanation. I spent ages looking through VUM logs, trying to find the culprit, suspecting it was an incompatible VIB. Without finding anything that gave me any indication on what the problem might be, I moved on to looking at the ESXi image I had imported into VUM.
The ESXi Embedded Host Client Fling got an upgrade today, and in addition to new features it now works properly on ESXi 5.5. It’s also available as an offline bundle, so you can distribute it with Update Manager.
Since I’ve spent most of my day in esxcli, here is a quick post on how to perform the upgrade from a local HTTP repository hosting the .vib file.
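The pattern looks roughly like this. The esxcli syntax for updating a VIB from a URL is the standard one, but the IP address, port and file name below are made up for illustration:

```shell
# Sketch of serving a .vib over HTTP and pointing esxcli at it.
# IP, port and file name are placeholders.
REPO_HOST="192.168.1.10"
REPO_PORT="8000"
VIB_FILE="esxui-signed.vib"

# On the machine hosting the .vib (Python 2's one-line web server):
#   python -m SimpleHTTPServer 8000

# On the ESXi host; printed here rather than executed, since esxcli
# only exists on ESXi:
echo "esxcli software vib update -v http://${REPO_HOST}:${REPO_PORT}/${VIB_FILE}"
```

Once the update completes, a quick `esxcli software vib list` on the host should show the new version of the VIB.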
I was recently involved with consulting for a Norwegian shipping company that has quite a few remote vSphere installations, most of them with a couple of ESXi hosts, but no vCenter and hence no Update Manager. While looking at methods for managing these installations, in particular how to facilitate patching and upgrade scenarios, I remembered that way back in 2013 I posted Quick and Dirty HTTP-based Deployment, which shows how to use Python to run a simple HTTP daemon and serve files from it.
This is a guest post from Shane Williford, Sr. Systems Engineer, VCAP-DCA, EMCCAe, Pizza Connoisseur and vExpert.
I work at a school district in the US (Kansas City area). After the school year ended, my Director decided he wanted to upgrade to vSphere 6 from vSphere 5.5 U2 on a few Hosts we were using with XenApp. We are using XenApp to deliver apps to student labs that utilize an AutoCAD program. As such, our Hosts also have a graphics card in them, an NVIDIA GRID K1. To give the students a bit more graphics power this upcoming school year, we added a second NVIDIA card to each Host. The Hosts are HP ProLiant DL380p Gen8 with Intel Xeon X5650 2.67GHz processors and about 296GB RAM. Since we added a second NVIDIA card, we also needed to upgrade the Host power supplies to support the two cards’ power consumption (1200W support).