VMware vSAN: 2016 Edition

In both 2014 and 2015 I wrote pieces on the state of VMware vSAN, and it’s time to revisit it for 2016.

My previous posts:

2014: VSAN: The Unspoken Future
2015: VMware VSAN: More than meets the eye.

vSAN 6.5 was released with vSphere 6.5, and brings a few new features to the table:

  • Virtual SAN iSCSI Target Service
  • Support for Cloud Native Apps running on the Photon Platform
  • REST APIs and Enhanced PowerCLI support
  • 2-Node Direct Connect
    • Witness Traffic Separation for ROBO
  • All-Flash support in the Standard license (deduplication and compression still require Advanced or Enterprise)
  • 512e drive support

In my opinion, the first three items on that list are the most interesting. Back in 2015 I talked about VMware turning vSAN into a generic storage platform, and the new Virtual SAN iSCSI Target Service is a step in that direction. This new feature lets you present vSAN storage directly to physical servers over iSCSI, without having to do so through iSCSI targets running inside a VM (VMware is not positioning this as a replacement for traditional iSCSI SAN arrays).

The same goes for Cloud Native Apps support, where new applications can talk with vSAN directly through the API, even without a vCenter!

Both of these bypass the VM layer entirely, and provide external connectivity into the core vSAN storage layer.
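To give a feel for what talking to vSAN through the API looks like, here’s a minimal pyVmomi sketch that connects straight to an ESXi host (no vCenter in the path) and reads the host’s vSAN status. The hostname and credentials are placeholders, and the Photon Platform integration obviously goes much further than a simple status query:

```python
# Minimal sketch: read a host's vSAN status by talking directly to ESXi,
# without going through vCenter. Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab only; use proper certificates in production
si = SmartConnect(host="esxi01.lab.local", user="root", pwd="VMware1!", sslContext=ctx)

try:
    content = si.RetrieveContent()
    # When connected directly to a host, the inventory contains exactly one HostSystem.
    host = content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    status = host.configManager.vsanSystem.QueryHostStatus()
    print("vSAN node UUID:", status.nodeUuid)
    print("Node health:   ", status.health)
finally:
    Disconnect(si)
```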

Clearly these are the first steps towards opening up vSAN for external usage; expect to see more interfaces being exposed externally in future releases. An object store resembling Amazon S3 doesn’t sound too far-fetched, does it? Perhaps even with back-end replication and archiving built in. Stick your files in vSAN and let policies determine which objects should be stored locally and which should be stored on S3? Or which should be replicated to another vSAN cluster located somewhere else?

Being able to use SPBM for more than VM objects is a good idea, and it makes the non-VM workloads running in your vSAN cluster easier to monitor and manage.

Sure, the rest of the items on the list are nice too. The 2-Node Direct Connect feature allows you to connect two nodes without the need for external 10 GbE switches, cutting costs in those scenarios. All-Flash support on all license levels is also welcome, but as with 512e drive support, it’s natural progression. At the current price point for flash devices, the vSAN Hybrid model is not going to see much use going forward.

All in all, the vSAN 6.5 release is a natural evolution of a storage product that’s still in its infancy. That’s the beauty of SDS: new features like these can be made available without having to wait for a hardware refresh.

#vDM30in30 progress:

Cohesity: My Initial Impression


A few weeks back Cohesity gave me access to a lab environment where I could play around with their Hyperconverged Secondary Data solution. For those unaware of what their offering entails, it’s simply put a solution for managing secondary storage. In this case, secondary storage is really everything that isn’t mission critical: backups, test/dev workloads, file shares and so on. The idea is to place these unstructured data sets on a secondary storage platform to ease management and analytics, while keeping them integrated with the rest of the existing environment. It’s a distributed, scale-out platform with a pay-as-you-grow model.

Currently Cohesity supports both SMB and NFS as data entry points, and it can also act as a front-end for Google Cloud Storage Nearline, Microsoft Azure, Amazon S3 and Glacier.

Partial Feature List

I won’t go through a complete feature list for the current v3.0 offering, but here are a few highlights:

  • Replication between Cohesity Clusters
  • Physical Windows and Linux support (in addition to VMs)
  • Single-object restore for MS SQL, SharePoint and Exchange
  • Archival of data to Azure, Amazon, Google
  • Tape support
  • Data Analytics

Getting data out of your VMs and onto a secondary storage tier makes sense, even more so when you can replicate that data out of your datacenter as well. This makes your VMs smaller and thus easier to manage.

Naturally, I was most interested in looking at this from a vSphere perspective, and that’s what I focused on in the lab. Backups and clones are presented back to the vSphere environment over NFS, which enables quick restores and cloning without massive data transfers to get started.
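To make the mechanism concrete, here’s a hedged pyVmomi sketch of the underlying vSphere operation: mounting an NFS export as a datastore on a host. The Cohesity node address, export path and datastore name below are made up for illustration; in practice Cohesity drives this mount itself when you restore or clone:

```python
# Sketch: mount an NFS export (such as a presented backup/clone view) as a datastore.
# vCenter address, credentials, Cohesity IP and paths are hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True)
    host = view.view[0]  # first ESXi host found; pick the intended target in practice
    view.Destroy()

    spec = vim.host.NasVolume.Specification(
        remoteHost="10.0.0.50",               # Cohesity cluster VIP (placeholder)
        remotePath="/cohesity/clone-view-01", # exported view path (placeholder)
        localPath="cohesity-clone-01",        # datastore name as seen in vSphere
        accessMode="readWrite",
        type="NFS",
    )
    ds = host.configManager.datastoreSystem.CreateNasDatastore(spec)
    print("Mounted datastore:", ds.name)
finally:
    Disconnect(si)
```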

Without any introduction to the product whatsoever, I was able to create Protection Jobs (backups) and clone VMs directly from the Cohesity interface.

Creating Protection Jobs

Creating a Protection Job is easy. First, select the VMs you want to protect from the infrastructure.

Next, select or create a Protection Policy (did I mention it’s policy driven?).


Then watch the backups run.


Creating Clones

The procedure for clone jobs is equally simple.

The Cohesity 3.0 UI is beautiful and easy to work with, something I also said in a tweet after looking at it for a little under an hour.

It’ll be interesting to see where this goes from here, but from a purely technical point of view the current offering looks pretty darn good! Of course, I’ve only scratched the surface by playing with backup/restore and cloning; the platform has much more to offer besides that.

#vDM30in30 progress:

Running a VSAN PoC – Customer reactions

I recently set up a VMware Virtual SAN 6.1 Proof-of-Concept for a customer, configuring a 3-node cluster based on the following setup:

Hardware:

  • HP ProLiant DL380 G9
  • 2 x Intel Xeon E5-2680 @ 2.50 GHz w/12 cores
  • 392 GB RAM
  • 1 x Intel DC P3700 800GB NVMe
  • 6 x Intel DC S3610 1.4TB SSD
  • HP FlexFabric 556FLR-SFP+ 10GbE NICs

Virtual SAN Setup:

Since this was a simple PoC setup, the VSAN was configured with one disk group per host, with all 6 Intel DC S3610 drives used as the capacity layer and the Intel DC P3700 NVMe card set up as the cache. This gives a total of 21.61TB of raw VSAN capacity across the cluster. With a Failures-To-Tolerate=1 policy (the only real FTT policy available in a three-node 6.1 cluster), that translates to roughly 10.8TB of usable space.
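The FTT arithmetic is easy to sanity-check; a quick sketch using the numbers above:

```python
# Quick sanity check of the usable-capacity figure above.
raw_tb = 21.61        # raw VSAN capacity reported across the 3-node cluster
ftt = 1               # Failures To Tolerate
copies = ftt + 1      # RAID-1 mirroring stores FTT + 1 copies of every object

usable_tb = raw_tb / copies
print(f"Usable with FTT={ftt}: {usable_tb:.1f} TB")   # ~10.8 TB, matching the figure above
```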

vMotion and VSAN traffic were set up to run on separate VLANs over 2 x 10GbE interfaces, connected to a Cisco backend.
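For completeness, the same traffic tagging can be done through the API instead of the Web Client; a small pyVmomi sketch, where the host address, credentials and vmk device names are assumptions about the environment:

```python
# Sketch: tag VMkernel adapters for vSAN and vMotion traffic on a single host.
# Host address, credentials and vmk device names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="esxi01.lab.local", user="root", pwd="VMware1!", sslContext=ctx)
try:
    host = si.RetrieveContent().rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
    nic_mgr = host.configManager.virtualNicManager
    nic_mgr.SelectVnicForNicType("vsan", "vmk2")     # vmk2 lives on the VSAN VLAN
    nic_mgr.SelectVnicForNicType("vmotion", "vmk1")  # vmk1 lives on the vMotion VLAN
finally:
    Disconnect(si)
```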

Customer reaction:

After the customer had been running it in test for a couple of weeks, I got a single-line email from them simply stating: “WOW!”

They were so impressed with the performance (those NVMe cards are FAST!) and the manageability of the setup that they have now decided to order an additional 3 hosts, bringing the cluster up to a more reasonable 6 hosts in a metro-cluster setup, and to upgrade to VSAN 6.2 as soon as it’s available. The compression, deduplication and erasure coding features of 6.2 will increase their available capacity just by upgrading. At the same time, adding three new hosts will effectively double the available physical disk space as well, even before the 6.2 improvements kick in.
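Setting the metro-cluster specifics aside, a rough back-of-the-envelope calculation shows why: doubling the host count doubles the raw capacity, and RAID-5 erasure coding in 6.2 stores data at roughly 1.33x overhead instead of the 2x required by FTT=1 mirroring. Deduplication and compression savings come on top of that and are entirely workload-dependent, so they are left out here:

```python
# Back-of-the-envelope: usable capacity before and after the planned changes.
raw_tb_3_nodes = 21.61
raw_tb_6_nodes = raw_tb_3_nodes * 2   # three more identical hosts double the raw capacity

mirroring = 2.0      # FTT=1, RAID-1 mirroring (VSAN 6.1)
raid5 = 4 / 3        # FTT=1, RAID-5 erasure coding (VSAN 6.2, all-flash)

print(f"6.1, 6 nodes, mirroring: {raw_tb_6_nodes / mirroring:.1f} TB usable")
print(f"6.2, 6 nodes, RAID-5:    {raw_tb_6_nodes / raid5:.1f} TB usable")
```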

VSAN will be this customer’s preferred storage platform going forward, and they can finally move off their existing monolithic, and expensive, FC SAN to a storage solution that outperforms it and greatly reduces complexity.