Paul Morrissey

Hitachi and VMware Virtual Volumes - Part 2

Blog Post created by Paul Morrissey Employee on Mar 19, 2015

This is the second part in a series of blog posts where I'll address the many questions that have arisen with respect to Virtual Volumes and the Hitachi implementation delivering VMware vSphere Virtual Volumes (VVol) and the Storage Policy Based Management (SPBM) framework, which recently went GA with the availability of vSphere 6. Part 1 of this blog series is here, part 2 is below, and part 3 is here.

I'll continue to run through a series of questions, with some focus on the important element of SPBM, so let's continue..

  • Would VVol be compatible with older versions of vSphere?

VVol will only work with vSphere 6.0 and upward. Customers running vSphere 5.5 and older versions will have to upgrade to vSphere 6.0 to be able to use VVol and deploy a supporting storage array with a corresponding VASA Provider implementing the VASA version 2 APIs.

  • What is SPBM?

To enable efficient storage operations at scale, even when managing thousands of VMs, Virtual Volumes is intertwined with vSphere Storage Policy-Based Management (SPBM). SPBM is the implementation of the policy-driven control plane in the VMware SDS model. The SPBM framework allows both advertising of storage capabilities and capturing of storage service level requirements (capacity, performance, availability, etc.) in the form of logical templates (VM storage policies). SPBM automates VM placement by identifying an available VVol datastore that matches the specified policy requirements and, coupled with VVol, it can dynamically instantiate the necessary data services when required. Through policy enforcement, SPBM also automates service level monitoring and compliance throughout the lifecycle of the VM, to ensure the VM's required storage policy continues to match the advertised capabilities. A graphic to illustrate-
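To make the matching idea concrete, here is a minimal conceptual sketch of SPBM-style placement: a policy is a set of required capabilities, and a VVol datastore qualifies when its advertised capabilities satisfy every requirement. This is an illustrative toy model only, not the actual VASA/SPBM API; the datastore names and capability keys are hypothetical.

```python
# Toy model of SPBM placement matching (not the real VASA/SPBM API).
# Datastore names and capability keys below are invented for illustration.

def find_matching_datastores(policy, datastores):
    """Return the VVol datastores whose advertised capabilities
    satisfy every requirement in the VM storage policy."""
    return [
        ds["name"]
        for ds in datastores
        if all(ds["capabilities"].get(k) == v for k, v in policy.items())
    ]

datastores = [
    {"name": "VVolDS-Gold",   "capabilities": {"encryption": True,  "snapshot": True}},
    {"name": "VVolDS-Bronze", "capabilities": {"encryption": False, "snapshot": True}},
]

policy = {"encryption": True, "snapshot": True}   # a VM storage policy template
print(find_matching_datastores(policy, datastores))  # -> ['VVolDS-Gold']
```

Compliance monitoring is the same check run continuously over the VM's lifecycle: if the backing resource's advertised capabilities drift from the policy, the VM is flagged non-compliant.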


  • Why are storage containers needed?

Storage containers are a collection of one or more storage resources, and you can define multiple storage containers. Storage containers provide a logical abstraction for managing very large numbers of virtual volumes. In the Hitachi implementation, they consist of one or more pools of storage (think of one or more HDP and/or HDT pools) or one or more filesystems (e.g. regular FS#1 and tiered flash Filesystem#7 grouped as a storage container) that the storage admin can define. In the first release, they don't span outside an array, but that will change in subsequent releases. This abstraction can be used for managing multi-tenant environments, various departments within a single organization, or dedicated resources for certain apps. There is a 1-1 mapping between a storage container and a VVol datastore. Here is a link to our architecture slide as a refresher.
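As a rough mental model of that 1-1 mapping, a storage container aggregates its member pools, and the mapped VVol datastore's usable capacity is simply the sum of those resources. The sketch below is purely illustrative; the pool names and sizes are made up.

```python
# Hedged sketch: a storage container aggregating pools, mapped 1-1 to a
# VVol datastore. Pool names (HDP-1, HDT-7) and sizes are hypothetical.

class StorageContainer:
    def __init__(self, name, pools):
        self.name = name    # also the name of the mapped VVol datastore
        self.pools = pools  # e.g. HDP/HDT pools or filesystems

    def capacity_gb(self):
        # The container's capacity is the sum of its member resources.
        return sum(p["capacity_gb"] for p in self.pools)

tenant_a = StorageContainer("Tenant-A", [
    {"name": "HDP-1", "capacity_gb": 4096},
    {"name": "HDT-7", "capacity_gb": 8192},
])
print(tenant_a.capacity_gb())  # -> 12288
```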

  • How many storage containers can be mapped to one host?

256 storage containers per host is the limit.


  • Describe the HDS storage capabilities that will be advertised

In the initial releases, HDS is supporting three classes of capabilities.

    • Auto-generated capabilities (e.g. RAID type, encryption)
    • Managed storage capabilities (e.g. performance class, availability class, cost class)
    • Custom capabilities (e.g. availability zone)

  • Tell me more about the managed storage capabilities. Are they the same across vendors, and how can I use the cost class?

The graphic below shows the managed storage capabilities we are initially focused on, which map to our target market/customer needs. Other vendors may have alternatives, but I'm sharing them here so the community gravitates towards a common set. With the Hitachi implementation, a storage admin can now discriminate between the resources within a storage container, and between storage containers, based on the all-important performance, availability, cost and operational recovery class attributes. For example, they can have storage resources that will only entertain VMs specifically requesting Tier 1 performance class and Tier 1 availability class (hence avoiding noisy or resource-hogging non-mission-critical VMs mistakenly being placed in the vicinity), or separately have storage resources that support a minimum of Tier 2 performance class for other business-essential apps. The cost class is a control valve that infrastructure teams can use to avoid business groups/tenants requesting Tier 1 performance for all their VMs when lower cost class resources would meet their business app performance needs. The snapshot backup class will be an interesting capability for those familiar with Hitachi Virtual Infrastructure Integrator.

[Graphic: managed storage capabilities]
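The class-based filtering described above can be sketched very simply: Tier 1 is the highest class, so a resource qualifies when its tier number is at or below the requested minimum for each attribute. This is a conceptual illustration only; the pool names and tier assignments are invented.

```python
# Conceptual sketch of class-based placement. Tier 1 is the highest class,
# so a lower tier number is better; a resource qualifies when each attribute
# meets at least the required tier. Pool names and tiers are hypothetical.

def qualifies(resource, required):
    return all(resource[attr] <= tier for attr, tier in required.items())

resources = [
    {"name": "Pool-A", "performance_tier": 1, "availability_tier": 1},
    {"name": "Pool-B", "performance_tier": 2, "availability_tier": 2},
]

# Business-essential app: needs at least Tier 2 performance and availability.
required = {"performance_tier": 2, "availability_tier": 2}
print([r["name"] for r in resources if qualifies(r, required)])
# -> ['Pool-A', 'Pool-B']

# Mission-critical app: Tier 1 only, so only Pool-A entertains it.
strict = {"performance_tier": 1, "availability_tier": 1}
print([r["name"] for r in resources if qualifies(r, strict)])
# -> ['Pool-A']
```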


  • Why do I need to differentiate storage services?

Here is an analogy I've used previously (cheeky video we created on it below). Imagine your CIO tasked you with finding a fast, energetic soccer player for his team with the words "don't disappoint me". From the pool of soccer "storage" resources at your disposal, wouldn't it be helpful if their skills were advertised before choosing that player? Check the video for the player to pick.

  • Will the PE manage features such as primary dedupe?

No, the Protocol Endpoint (PE) has no relevance here, as it only deals with in-band activities, such as the all-important I/O data path, and has no interaction with storage capabilities. Storage capabilities such as dedupe become an advertised capability of a resource within the storage container. Dedupe is normally on by default on our storage resources, but you could have a situation where you create/advertise a resource as non-dedupe-enabled.

  • Are there any capacity limitations on virtual volumes?

Virtual volumes can grow as large as the capacity of the storage container.


  • Do I require a separate VASA Provider for each Hitachi storage array?

We will be providing a Unified VP OVA package which bundles both VPs together to maximize the breadth of storage capabilities, but an administrator will likely start at least one instance of VP-f (file) and one instance of VP-b (block). Each VP instance can manage multiple storage targets. I believe the majority will use both and let the storage capabilities/SPBM framework manage where the VMs get placed to best match the policy requirements.

  • I have Hitachi NAS (HNAS) serving VMs over NFS today in production. What do I need to enable VVol?

        See the graphic below for hardware and software requirements.



  • Would it be possible to migrate a current customer's datastores with VMs to storage containers with VVols?

That was in fact part of the original design specs and core requirements. As part of the firmware upgrade for our arrays, our VASA Providers and VVol implementations will allow hardware-accelerated, non-disruptive migration between a source VMFS/NFS datastore and a destination VVol storage container, via Storage vMotion.


  • Is there a mechanism to limit a VMware admin's usage of a container? If there is no control, my customers are afraid that the VMware guys will quickly use all the available capacity. How can they control this?

The storage admin controls the creation of the storage container(s) and can decide what capacity to impose when creating them. The philosophy is that the storage admin provides this storage resource and assigns capabilities, and then VM admins are free to consume from the container to meet their various VM provisioning needs. There are alarms and events to provide notification as capacity gets consumed.
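A simple sketch of that consumption alerting: compute the consumed fraction of the container and escalate as thresholds are crossed. The 75%/90% thresholds below are arbitrary examples, not Hitachi's actual alarm levels.

```python
# Illustrative capacity-consumption alerting for a storage container.
# The warn/critical thresholds are invented example values.

def capacity_alarm(used_gb, capacity_gb, warn=0.75, critical=0.90):
    pct = used_gb / capacity_gb
    if pct >= critical:
        return "critical"
    if pct >= warn:
        return "warning"
    return "ok"

print(capacity_alarm(500, 1000))  # -> ok
print(capacity_alarm(800, 1000))  # -> warning
print(capacity_alarm(950, 1000))  # -> critical
```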

  • Will customers need to create policies for every VM?

No; higher-level VM administrators/architects will typically create best-practice VM storage policies based on their environment or tenant needs. For example, they might define a VM storage policy for all potential MySQL production VMs or pre-production VMs. These policies then become available for selection as part of the VM/vApp provisioning process.


  • VMware VSAN also promises policy-based management, so how is VSAN different from VVol?

VSAN is a storage management framework for server-attached storage (hyper-converged), whereas the VVol framework is meant for external NAS/SAN arrays. Different customer segments will want to use one or the other (or both). VSAN has VSAN datastores and VVol has VVol datastores. They are quite similar with respect to SPBM. Virtual Volumes uses VASA 2.0 to communicate with an array's VASA Provider to manage virtual volumes on that array, whereas Virtual SAN uses its own APIs to manage virtual disks. SPBM is used by both, and SPBM's ability to present and interpret storage-specific capabilities lets it span VSAN's capabilities and a Virtual Volume array's capabilities and present a single, uniform way of managing storage profiles and virtual disk requirements.

  • Aren't there concerns about the number of LUs per port, given the likely explosion in the number of LUs presented to ESXi?

No. ESXi hosts only see the presented Administrative Logical Units (ALUs), i.e. the Protocol Endpoints; PEs are mapped to ports, never the backend LUs for VVols. The backend VVols are Secondary Logical Units (SLUs). VVol over File/NFS is similarly unaffected, as it deals with PEs as mount points and VVols as files.
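The addressing model above can be sketched as follows: however many VVols (SLUs) exist, they are bound behind a small number of PEs, and only the PEs are visible to the host as LUs. The PE names and counts here are invented for illustration.

```python
# Sketch of the PE/SLU addressing model: the host sees only a few Protocol
# Endpoints (ALUs), while each VVol is an SLU bound behind a PE, so the
# per-port LU count does not explode. PE names/counts are hypothetical.

protocol_endpoints = {"PE-0": [], "PE-1": []}

# Bind 1000 VVols (SLUs) round-robin behind just two PEs.
for i in range(1000):
    pe = f"PE-{i % 2}"
    protocol_endpoints[pe].append(f"vvol-{i}")

host_visible_lus = list(protocol_endpoints)  # what ESXi actually sees
print(len(host_visible_lus))                             # -> 2
print(sum(len(v) for v in protocol_endpoints.values()))  # -> 1000
```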


  • Is a demo of VVol and SPBM with Hitachi available, so we can see what to expect?

        Yes, we're just wrapping up a new short(ish) video. I'll post the link in part 3 of this blog series.


  • One sentence for why choose Hitachi for VMware VVol

Zero worry. Running Virtual Volumes on Hitachi infrastructure will bring a robust, reliable enterprise journey to the software-defined, policy-controlled data center.

  • My interest is piqued. I would like to be among the first and/or get more information

        Pop your name and relevant information here. Put VVol as part of the description.


I'll complete the final part of this blog series (part 3) over the coming days...