Hu Yoshida

What is the next step in Storage Virtualization?

Blog post by Hu Yoshida on Nov 7, 2014



In 2004 Hitachi Data Systems introduced the Universal Storage Platform (USP), the first storage array that could attach and virtualize third-party Fibre Channel storage arrays. Instead of following other vendors by doing virtualization through an appliance, which added complexity and another layer of management in the Storage Area Network, Hitachi chose to do virtualization through the storage controller. By virtualizing external storage through the USP controller, the virtualized arrays could inherit the latest capabilities of the USP and be managed through the USP management interface. Limited-function, dual-controller storage arrays could now be virtualized and inherit all the functions of an enterprise array, including a global front-end cache with an extended number of host ports, and full synchronous and asynchronous replication.


The journey to storage virtualization actually started in the prior Hitachi product called the Lightning 9900, which was introduced in 2000.  The 9900 provided virtualization of the host ports so that multiple hosts could attach to the same physical port and be partitioned and prioritized for security and quality of service. This provided greater connectivity and load balancing than what was previously possible.

The 9900 also virtualized the internal cache so that the cache could be mapped to the external disks dynamically. This enabled the disk and host port configurations to be changed without any down time. Many competitive systems still require scheduled down time to change configurations and remap the cache with external BIN files.

Cache partitioning was later added to provide quality of service with mixed workloads, where one host server, typically a mainframe server, might gobble up the cache and impact the other host servers that were sharing the cache resources. Partitioning is a necessary requirement for virtualization. If the purpose of virtualization is to pool resources to be shared among many users, partitioning must be provided so that you can limit or prioritize the amount of resources that one user can consume at the expense of the others.
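The idea behind partitioning can be sketched in a few lines of code. This is a toy model only, assuming per-host slot quotas against a shared cache pool; the class, host names, and sizes are illustrative assumptions, not Hitachi's actual implementation:

```python
class PartitionedCache:
    """Toy model of a partitioned shared cache: each host gets a quota
    so no single workload can consume the whole pool (illustrative only)."""

    def __init__(self, total_slots, quotas):
        # quotas: host name -> maximum cache slots that host may occupy
        assert sum(quotas.values()) <= total_slots
        self.quotas = quotas
        self.used = {host: 0 for host in quotas}

    def allocate(self, host, slots=1):
        """Grant cache slots to a host only while it is under its quota."""
        if self.used[host] + slots > self.quotas[host]:
            return False  # host is at its limit; other partitions are protected
        self.used[host] += slots
        return True

    def release(self, host, slots=1):
        self.used[host] = max(0, self.used[host] - slots)
```

The key point the sketch illustrates is that a greedy host simply hits its own quota rather than starving its neighbors.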

Tiering is another form of virtualization where you can automatically move data to the right tier of storage resource based on activity or cost. This was another virtualization feature that was introduced with the Lightning 9900.  We called it Cruise Control. This enabled us to set policies and move application LUNs to different tiers of storage triggered by time or events. This ability to move LUNs between tiers on the fly was another benefit of virtualizing our internal cache.
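A policy-driven tier move of the kind described above can be sketched as follows. The thresholds, tier names, and LUN records are illustrative assumptions, not the actual Cruise Control policy engine:

```python
def retier(luns, hot_threshold, cold_threshold):
    """Sketch of policy-based tiering: promote busy LUNs to fast storage,
    demote idle LUNs to cheap storage (thresholds and tiers are assumptions)."""
    moves = []
    for lun in luns:
        if lun["iops"] >= hot_threshold and lun["tier"] != "ssd":
            moves.append((lun["name"], lun["tier"], "ssd"))   # promote
            lun["tier"] = "ssd"
        elif lun["iops"] <= cold_threshold and lun["tier"] != "sata":
            moves.append((lun["name"], lun["tier"], "sata"))  # demote
            lun["tier"] = "sata"
    return moves
```

In practice such a policy would be triggered by a schedule or an event, as the post describes, rather than called directly.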

So prior to external storage virtualization with the USP, we were already virtualizing the ports for greater connectivity and the cache for non-disruptive configuration changes and automated tiering. Since the introduction of the USP we have added the virtualization of LUNs, by converting the internal and external pool of storage into a pool of pages that can be allocated on demand as they are used. This took tiering to another level. Now, instead of moving entire LUNs between tiers of storage, only the pages that need to be moved are moved, and the other pages remain where they are. This eliminated the waste of allocated but unused storage capacity, simplified storage management, and increased performance with wide striping across many spindles.
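The allocate-on-demand behavior of a page pool can be illustrated with a minimal sketch. The class, page granularity, and mapping structure here are assumptions for illustration, not the product's internal design:

```python
class ThinPool:
    """Sketch of page-based (thin) provisioning: a physical page is taken
    from the shared pool only when a virtual LUN page is first written,
    so allocated-but-unused capacity is never consumed (illustrative only)."""

    def __init__(self, physical_pages):
        self.free = list(range(physical_pages))
        self.maps = {}  # (lun, virtual_page) -> physical page number

    def write(self, lun, vpage):
        key = (lun, vpage)
        if key not in self.maps:        # allocate on first write only
            if not self.free:
                raise RuntimeError("pool exhausted")
            self.maps[key] = self.free.pop()
        return self.maps[key]

    def used_pages(self):
        return len(self.maps)
```

Because the unit of mapping is a page rather than a whole LUN, moving data between tiers only has to remap the pages that are actually active.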

When server virtualization was introduced, Hitachi was the first to qualify a storage virtualization array with server virtualization systems and support the offload APIs, which enabled reservation, replication, and recovery of virtual machines across multiple sites.

Today we virtualize external storage resources, as well as internal storage resources, creating a common pool of resources that can be managed through one set of tools. We can converge and unify this storage with our blade servers and third-party switches within a Hitachi Unified Compute Platform that can be orchestrated and managed through a hypervisor manager like vSphere vCenter. With the virtualization provided in our storage and our management software, we enable software-defined storage and a software-defined data center through the UCP with our UCP Director software.

This is where we are today. What is the next step in storage virtualization? What is there left to virtualize? Today, the virtualization in Hitachi Storage is based on a switched controller architecture and software in the controller.

What if we could virtualize the controller itself?