When Virtual Volumes were announced many of us in HDS spent a lot of time delivering technical workshops with customers.
Every discussion generated more and more questions, both technical and operational. Most related to the how of migrating to Virtual Volumes, or the how of co-existing with traditional VMFS filesystems.
Truth in the VC-DC
In 99.9% of VMware sites, vCenter is a single point of failure. If it fails, you cannot perform certain provisioning, configuration, and operational tasks against virtual machines.
While you can protect it with a Windows cluster, the single point of failure has never overly bothered most customers.
vCenter Server Heartbeat was never widely adopted. Some blame cost, but the SLA payback or requirements driving it never seemed to justify the expense.
VMware has formally compartmentalised functions into silos such as Management, Edge and Workload clusters in the VMware Validated Design and NSX design guides. In reality, customers had already implicitly accepted that if vCenter dropped (a control-plane management function), it would not be the end of the world. Nobody lost much sleep over it as long as workloads were still running.
What's good for the goose …
When we switch to talking about Virtual Volumes being created on-demand, the concept of a single point of failure suddenly becomes a huge issue for many customers. I'm not sure why that is.
Whether you have a single-homed VASA provider or a highly resilient configuration, the same vCenter SPOF consideration applies. So why assume one appliance will fail sooner than another?
It’s fair to say that there are no “gimmes” in technology, never mind real life. There is always a price to pay, a compromise to make, or a constraint to work within when evolutionary technology arrives. In the initial release of VVol (the VASA 2.0 API framework), the technology used in-band communication with the VASA provider. This meant standard operations on virtual machines, such as power on, create disk, and expand disk, could not execute without a running VASA provider. With the VASA 3.0 API, this will no longer be the case.
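To make the difference concrete, here is a toy model (not VMware code; all class names, operation names, and the version-to-operation mapping are illustrative assumptions) of which operations depend on the VASA provider under each design:

```python
# Toy model of the control-path dependency described above.
# Under the 2.0 design, common lifecycle operations ran in-band through the
# VASA provider; under the 3.0 design, only management-plane (policy) work does.
# Running I/O on already-bound VVols is unaffected in both cases.

class VasaProvider:
    def __init__(self, online=True):
        self.online = online

class VVolHost:
    # Which operations require a reachable VASA provider, per API version.
    # These sets are illustrative, not an official compatibility matrix.
    NEEDS_PROVIDER = {
        "2.0": {"power_on", "create_disk", "expand_disk", "apply_policy"},
        "3.0": {"apply_policy"},  # pure control-plane (SPBM) work
    }

    def __init__(self, api_version, provider):
        self.api_version = api_version
        self.provider = provider

    def can_run(self, operation):
        """Return True if the operation can proceed right now."""
        if operation in self.NEEDS_PROVIDER[self.api_version]:
            return self.provider.online
        return True  # no provider dependency for this operation

provider_down = VasaProvider(online=False)
old_host = VVolHost("2.0", provider_down)
new_host = VVolHost("3.0", provider_down)

print(old_host.can_run("power_on"))      # blocked: in-band dependency
print(new_host.can_run("power_on"))      # proceeds: no longer in-band
print(new_host.can_run("apply_policy"))  # blocked: policy work still needs it
```

The point of the sketch is the shrinking set: as the provider dependency contracts to pure control-plane work, a provider outage degrades policy management rather than day-to-day VM operations.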
The VASA provider function is now pure control plane, while still supporting the Storage Policy-Based Management (SPBM) framework. That seems to me to be the optimal configuration.
So will that be enough for adoption?
Call it like it is: at the heart of VVols is a data migration challenge for customers, coupled with a fundamental change in architecture and the ensuing operational model. Vendors are continuously evolving how their architectures align with the VMware VVol framework. We also learn from our mistakes and from customer behaviour. We have delivered massive advances in features and functionality in the last nine months, as well as huge architectural improvements.
For vendors, interoperability with existing features will take time to wash out. For VMware, that means SRM integration among other things.
For Hitachi, it means Global Active Device technology (a killer feature our customers love), among many other considerations.
When VASA 3.0 arrives, features like replication will finally be natively supported by the VMware VASA API framework.
What does HDS have now?
Today we support replication and high availability of the VASA provider framework, with support for Global Active Device coming soon.
Nobody should underestimate the engineering challenges for Hitachi to bring industry-leading features like GAD together with VVol in a stable configuration for the most critical workloads.
With Global Active Device and VVol you can have a stretched volume, natively supported across two arrays in two sites, coupled with a per-application (even per-disk) SLA applied via the policy-based framework. Doesn't that make much more sense than giving every application and virtual machine a fault-tolerant architecture when 80% of the workloads don't even need it? It is likely we will evolve to a hybrid model where we use the right tool for the job.
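The per-disk policy idea above can be sketched as a simple capability-matching exercise. This is a minimal sketch in the spirit of SPBM; the capability names, datastore names, and the exact-match rule are assumptions for illustration, not the actual VMware or Hitachi implementation:

```python
# Minimal sketch of policy-based placement: datastores advertise capabilities,
# each disk carries a policy, and placement keeps only compliant datastores.

def compliant_stores(policy, datastores):
    """Return names of datastores whose capabilities satisfy every policy rule."""
    return [name for name, caps in datastores.items()
            if all(caps.get(key) == value for key, value in policy.items())]

# Hypothetical capability catalogue advertised by two VVol containers.
datastores = {
    "vvol-gold":   {"stretched": True,  "replication": True},   # e.g. GAD-backed
    "vvol-bronze": {"stretched": False, "replication": False},
}

# Per-disk policies: only the critical database disk asks for a stretched,
# replicated volume; the bulk of workloads need nothing special.
db_disk_policy  = {"stretched": True, "replication": True}
web_disk_policy = {}  # no requirements

print(compliant_stores(db_disk_policy, datastores))   # only the GAD-backed store
print(compliant_stores(web_disk_policy, datastores))  # every store qualifies
```

This is the "right tool for the job" model in miniature: the expensive stretched capability is consumed only by the disks whose policy asks for it, while everything else lands on ordinary storage.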
All we are saying … is give VVols a chance
Let's give VVol a chance. Many of our customers are embracing the opportunity to bring VVol into their core DC architecture, but it will involve migrating VMs and applications, and ensuring they have enough swing space. Who's to say what the adoption level will be? Only time will tell.