Flash Storage​


New Hitachi VSP 5000 Series Accelerates Enterprise Storage and Optimizes Converged Infrastructure Workloads Part 2

By Sean Siegmund posted 12-27-2019 08:15

  

 

Part 2: What does Hitachi Accelerated Fabric mean for the future of converged infrastructure?

Please make sure you have checked out Part 1 of this blog.

Non-Volatile Memory Express (NVMe) is all over the storage market; vendors have adopted the technology into their products as fast as possible, and rightfully so. NVMe can accelerate I/O-intensive applications, but having NVMe in your storage system is only part of the equation. Our competitors have adopted end-to-end NVMe platforms, but as you will see, Hitachi now uses a newer, more efficient way to capitalize on NVMe and any future drive technologies. The Hitachi VSP 5000 series is an enterprise storage array built to handle applications with strict availability requirements and to perform at the pace of innovation. At 21 million IOPS, 70 μs latency, 149 GB/s of bandwidth, and 287 PB of capacity, we have shattered the glass ceiling in storage performance, enabling greater consolidation of workloads. But what is at the heart of the world’s fastest NVMe flash array?
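
To put those headline numbers in perspective, Little's Law (concurrency = throughput × latency) gives a rough sense of how much parallelism a workload needs to drive an array at its rated figures. The sketch below is purely illustrative arithmetic based on the numbers quoted above; it is not a Hitachi sizing tool.

    # Rough Little's Law estimate: outstanding I/Os needed to sustain the
    # quoted IOPS at the quoted per-I/O latency (illustrative only).
    iops = 21_000_000        # rated IOPS figure quoted above
    latency_s = 70e-6        # 70 microseconds, expressed in seconds

    outstanding_ios = iops * latency_s
    print(f"Concurrent I/Os in flight to sustain the rated figures: {outstanding_ios:.0f}")  # ~1470

In other words, only a few thousand outstanding I/Os across consolidated workloads are enough to exercise that level of performance, which is exactly the consolidation scenario the array targets.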

PCI Express (PCIe) transport is an emerging technology with the potential to change the data center as we know it today. As the first enterprise-class storage array to include PCIe switching, the Hitachi VSP 5000 series places PCIe transport at the heart of Hitachi Accelerated Fabric. Advancements in storage and processor technologies are usually released on PCIe first: flash, NVMe, GPUs, and FPGAs all use PCIe to plug into a server or storage system and deliver top-tier performance. To cluster these resources together, they must be able to communicate, and traditionally that communication has happened over Ethernet. A low-latency network such as InfiniBand (with RDMA) or RDMA over Converged Ethernet (RoCE) is excellent, but it introduces an I/O barrier that sufficed in the past and, given advancements in PCIe transport technology, may not be able to compete with PCIe switching in the future.

Forward-looking architectures embed PCIe transport in internal hardware components and small clustered systems, as Hitachi has done in the VSP 5000 array, making it the fastest storage in terms of latency and IOPS, the top two performance characteristics of any storage platform. There is also the potential to see PCIe switching used for communication across the data center, pending innovation in PCIe switching at larger scale. Compared with low-latency Ethernet, this alternative gains three significant advantages in connections between systems or components: it removes Ethernet protocol overhead, reduces cost, and lowers latency. Think of it this way: as 5G technology emerges and proves its importance to IoT, doesn't it make sense to keep reducing latency with PCIe switching, especially when bridging components at the edge to data in the data center? Hitachi has taken the first step by embracing PCIe switching internal to our array, and with the constant need to lower both cost and latency, one can infer that this will not be its only use case.
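
The argument above is essentially a latency-budget argument: every protocol translation and network hop adds time to each I/O. The sketch below is a toy model with made-up, order-of-magnitude per-stage overheads; they are assumptions for illustration, not measured values for Hitachi or any competitor, and the point is simply that removing layers shrinks the budget.

    # Toy latency-budget model. All per-stage values are illustrative
    # assumptions, NOT measured numbers for any specific product.
    ethernet_fabric_path = {
        "nvme_to_fabric_encapsulation_us": 5,
        "rnic_and_ethernet_switch_hop_us": 10,
        "fabric_to_nvme_decapsulation_us": 5,
        "media_and_controller_us": 80,
    }
    pcie_switched_path = {
        "pcie_switch_hop_us": 1,
        "media_and_controller_us": 80,
    }

    def total_us(path):
        """Sum the per-stage overheads for one I/O traversal."""
        return sum(path.values())

    print("Ethernet-based fabric:", total_us(ethernet_fabric_path), "us")
    print("PCIe-switched fabric: ", total_us(pcie_switched_path), "us")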

Let’s look at how our storage competitors utilize NVMe within their storage arrays and what this means for converged infrastructure. The chart below lists each vendor, including Hitachi Vantara, along with the interconnect technology it uses for end-to-end NVMe.

Vendor               | Controllers                              | Response Time                 | Interconnect Technology
Hitachi VSP 5000     | 2 to 12 HA pairs                         | 70 μs                         | Hitachi Accelerated Fabric (PCIe switching)
Dell PowerMax 8000   | 1 to 8 (1 to 2 on the 2000)              | 100 μs                        | InfiniBand dual redundant fabric (Ethernet)
Pure FlashArray //X  | Active/passive dual controller           | 150 μs                        | NVMe-oF with RDMA (Ethernet)
NetApp AFF A-Series  | 2 to 12 HA pairs                         | 100 μs (file) / 500 μs (SAN)  | NVMe/RoCE (Ethernet)
IBM FlashSystem 9100 | Active/active dual controller canisters  | 100 μs                        | NVMe-oF with RDMA (Ethernet) based on InfiniBand

 

Each of these interconnect technologies tries to deliver the lowest-latency Ethernet it can with the technology available. With PCIe switching, there is no need for Ethernet to achieve lower latency, and it is cheaper by comparison. By moving away from our previous SAS designs to Hitachi Accelerated Fabric, we have achieved 70 μs on the Hitachi VSP 5000 platform, and with the inevitable introduction of newer technology (data center PCIe switching or faster drives), we expect to reduce latency further and faster than our competitors. So why does this matter? Are we splitting hairs over latency, and will anyone notice the difference?
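
A quick back-of-the-envelope calculation shows why the gap matters at scale. Using the response times from the chart above, the sketch below compares how long a serialized chain of dependent I/Os, such as a database transaction that touches storage many times, spends waiting on the array. The transaction size is a hypothetical figure chosen only for illustration.

    # Illustrative arithmetic using the response times quoted in the chart.
    # A transaction issuing dependent (serialized) I/Os pays the per-I/O
    # latency each time, so small differences compound quickly.
    latencies_us = {
        "Hitachi VSP 5000": 70,
        "Dell PowerMax 8000": 100,
        "Pure FlashArray //X": 150,
    }
    ios_per_transaction = 50      # hypothetical number of dependent I/Os

    for array, lat in latencies_us.items():
        total_ms = lat * ios_per_transaction / 1000
        print(f"{array}: {total_ms:.1f} ms of storage wait per transaction")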

As I hinted before, we all know the benefits of latency reduction for applications in the data center. 5G will enable higher data speeds and reduced latency over wireless, allowing real-time experiences. IoT devices, autonomous vehicles, and factory robots are just some of the things that will benefit from this technology. Latency is being, and will continue to be, improved to help power technology beyond what we know today, and as you apply this to real-time scenarios, the latency challenge sits at the core of the evolution. Can you imagine the latency required for a specialized doctor to remotely operate on a patient in a rural town? The data this would generate has to go somewhere, and the architectures behind it will need to be robust enough to record it without becoming a bottleneck to the procedure itself.

Hitachi Accelerated Fabric is the innovation that powers the Hitachi storage array to new levels of performance and flexibility. To further extend the latency benefits of PCIe switching, FPGA technology is used to handle mixed workloads and media types. This enables converged infrastructures to consolidate more workloads and to optimize reads, deduplication, and encryption operations without impacting application performance.

NVMe is all the rage, but not every application needs it to run efficiently. In previous storage releases you have seen the benefits of Hitachi virtualization, dynamic pools, and dynamic tiering, which collectively lower costs and create data elasticity in Hitachi arrays, and this technology remains an essential part of our storage. Hitachi Accelerated Fabric and the VSP 5000 add a capability that our customers will benefit from for years to come, providing answers today and into the future. Although Hitachi Accelerated Fabric was designed to reduce latency when handling NVMe, it brings other benefits as well: it allows a seamless intermix of media drives, which helps all I/O workloads and brings peace of mind to you, the customer.

From a converged infrastructure standpoint, you can house multiple workloads within a single management framework, delivering I/O quality assurance to numerous business lines and simplifying data management as you eliminate data silos. And as you remove silos, free up floor space, and reduce the power footprint in your data center, you can also remove redundant data footprints: the VSP 5000 array uses machine learning models to deduplicate data in-line or at rest and applies the LZ4 compression algorithm, as part of the Adaptive Data Reduction feature in Hitachi SVOS.
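
For readers unfamiliar with LZ4, the snippet below shows the kind of lossless compression it performs, using the open-source python-lz4 package on arbitrary sample data. This is only a generic illustration of the algorithm, not how SVOS Adaptive Data Reduction is implemented inside the array.

    # Generic LZ4 illustration using the open-source python-lz4 package
    # (pip install lz4). Not the array's implementation, just the algorithm.
    import lz4.frame

    # Repetitive sample data compresses well, similar to the redundant
    # blocks that deduplication and compression target on an array.
    original = b"customer_record:region=EMEA;status=active;" * 2000

    compressed = lz4.frame.compress(original)
    restored = lz4.frame.decompress(compressed)

    assert restored == original            # compression is lossless
    ratio = len(original) / len(compressed)
    print(f"{len(original)} -> {len(compressed)} bytes ({ratio:.1f}:1)")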

If you have deployed any Cisco compute or networking, or Hitachi storage, HCI, or CI solutions, and would like to share your experiences, please join our community. We’d love to hear from you!

 

Sean Siegmund

Technical Advisor, Hitachi Vantara

sean.siegmund@hitachivantara.com

 


#Blog
#ThoughtLeadership
#FlashStorage