Our customers tell us that when they virtualize existing storage, performance often improves. Recently we set out to do what we thought would be a useful measured comparison, using already fast external storage: Hitachi Virtual Storage Platforms.
The purpose of the testing was to measure the performance capabilities of Hitachi Universal Volume Manager (UVM) software on a Hitachi Virtual Storage Platform G1000 system for a variety of configurations and parameter settings. UVM is the standard SVOS component that enables virtualization of external storage. This testing focused on some commonly asked questions about external storage performance and best practices, including the following:
1. For Hitachi Virtual Storage Platform (VSP), how does its maximum performance compare when virtualized behind VSP G1000 to its direct-connect (or “native”) maximum performance?
2. What are the effects of, and recommendations for setting, Hitachi Universal Volume Manager (UVM) tunables such as cache mode and system option modes (SOMs)?
3. On average, how much latency does Universal Volume Manager add to, or remove from, each I/O?
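One way to frame question 3 is with Little's Law: at a fixed number of outstanding I/Os, average response time equals concurrency divided by throughput, so the latency UVM adds is the difference between the virtualized and native response times measured at the same concurrency. A minimal sketch of that arithmetic (the specific numbers in the example are hypothetical, not measured values from this testing):

```python
def avg_response_time_ms(outstanding_ios, iops):
    """Little's Law: average latency = concurrency / throughput."""
    return outstanding_ios / iops * 1000.0

def added_latency_ms(outstanding_ios, native_iops, virtualized_iops):
    """Per-I/O latency added by virtualization at equal concurrency."""
    return (avg_response_time_ms(outstanding_ios, virtualized_iops)
            - avg_response_time_ms(outstanding_ios, native_iops))

# Hypothetical example: 256 outstanding I/Os, 550,000 native IOPS,
# 538,000 virtualized IOPS.
delta = added_latency_ms(256, 550_000, 538_000)
```

A negative result would indicate the virtualized path was faster, which the sequential-read results below show is possible.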
The arrays and drives used in this testing were as follows:
· VSP G1000 with 16 virtual storage directors (VSDs, the processor boards) and 1 TB cache, connected to virtualized array #1
· VSP G1000 with 4 VSDs and 256 GB cache, connected to virtualized array #1
· VSP G1000 with 8 VSDs and 512 GB cache, connected to virtualized array #2
· VSP with 2,048 x 146 GB 15K rpm SAS drives using RAID-10 (2D+2D)
· VSP G1000 with 96 FMDs in RAID-5 (7D+1P), 8 VSDs and 512 GB cache
The full performance results are still being published, but here are a few summary conclusions that can be drawn after measuring the performance of external storage virtualized with UVM on the VSP G1000:
The maximum IOPS the small-configuration VSP G1000 delivered from external storage was 538,892 IOPS of 8 KB random reads from the external VSP, at an average response time of 15.3 ms. This was only 2.2% less than the maximum internal IOPS measured on the same VSP during earlier scalability testing with 2,048 146 GB 15K SAS HDDs. On average, the 32 cores on the VSD boards of the small VSP G1000 were about 72% busy at the maximum IOPS rate, so VSP G1000 processing power was not the first constraint on performance. The first limit on IOPS was the external VSP's processors, which were 87% busy at maximum IOPS.
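As a sanity check on figures like the 2.2% gap above, the implied native rate can be back-derived from the virtualized measurement; a small sketch (the native IOPS value here is computed from the stated percentage, not a separately published number):

```python
virtualized_iops = 538_892   # measured through UVM (from the text)
gap = 0.022                  # "2.2% less than native"

# Back-derive the native rate implied by the stated gap.
implied_native_iops = virtualized_iops / (1 - gap)

def percent_of_native(virtualized, native):
    """Virtualized throughput as a percentage of native throughput."""
    return 100.0 * virtualized / native
```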
The maximum performance of a virtualized VSP was 7.5% higher than its native performance for sequential read workloads, and the same as its native throughput for sequential write workloads. However, 100% random non-HDP workloads reached only 92% to 98% of native IOPS rates. According to Performance Monitor data, the first constraint on performance during 100% random non-HDP testing was VSP processor busy, with added UVM latency accounting for the reduced IOPS.
When HDP was configured on the VSP G1000 using external pool volumes from the virtualized VSP, the performance obtained was up to 10% higher than native VSP HDP performance. It was possible to deliver “better than native” performance in this configuration, because the HDP overhead was shifted to VSP G1000.
Since customers often virtualize storage that starts out much slower than the external VSP systems used in this evaluation, it is easy to see why they tell us that virtualizing existing storage often improves performance.