
Introduction to Virtual Storage Platform E1090 Architecture

By Sudipta Kumar Mohapatra posted 03-24-2022 14:26

  

Executive Summary

The Virtual Storage Platform (VSP) E1090 storage system is Hitachi’s newest midsized enterprise platform, designed to utilize NVMe to deliver industry-leading performance and availability. The VSP E1090 features a single, flash-optimized Storage Virtualization Operating System (SVOS) image running on 64 processor cores, sharing a global cache of 1 TiB. Based primarily on SVOS optimizations, the VSP E1090 offers higher performance with fewer hardware resources than competitors. In addition, the VSP E1090 was upgraded to advanced Cascade Lake CPUs, permitting read response times as low as 41 microseconds, and the new Compression Accelerator Module improves data reduction throughput by up to 2X. Improvements in reliability and serviceability allow the VSP E1090 to claim an industry-leading 99.9999% availability (on average, roughly 30 seconds of expected downtime per year). In this blog, we’ll take a brief look at the highlights of the VSP E1090 architecture.
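
As a quick sanity check on the availability figure, the following minimal sketch (plain Python, not Hitachi tooling) converts an availability percentage into expected downtime per year:

```python
# Convert an availability percentage into expected downtime per year.
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60  # about 31.6 million seconds

def annual_downtime_seconds(availability_percent: float) -> float:
    """Expected downtime per year for a given availability percentage."""
    return SECONDS_PER_YEAR * (1 - availability_percent / 100)

print(f"{annual_downtime_seconds(99.9999):.1f} s/yr")  # six nines -> ~31.6 s/yr
print(f"{annual_downtime_seconds(99.999):.0f} s/yr")   # five nines -> ~316 s/yr
```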

Engineered for Performance

The VSP E1090 features a new controller board powered by two 16-core Intel Cascade Lake CPUs operating at 2.1 GHz. The two CPUs function as a single 32-core MPU per controller. The upgraded controller board adds 14% more CPU power compared to the VSP E990. The extra processing power enables the VSP E1090 to leverage the NVMe protocol, which has a low-latency command set and multiple queues per device, for unprecedented performance in a midsized package. Because NVMe doesn’t allow for cascading connections, the VSP E1090 supports a maximum of 4 NVMe drive boxes connected to the controllers using PCI Express Gen3. This simple, streamlined configuration allows for low-latency, point-to-point connections between the E1090 controllers and the NVMe SSDs.
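
For reference, a minimal sketch (illustrative only; the constants simply restate the counts above) ties the per-controller CPU configuration back to the system-wide core count quoted in the executive summary:

```python
# Illustrative model of the VSP E1090 compute topology described above.
CONTROLLERS = 2          # dual-controller system
CPUS_PER_CONTROLLER = 2  # two Intel Cascade Lake CPUs per controller board
CORES_PER_CPU = 16       # 16 cores per CPU at 2.1 GHz

cores_per_controller = CPUS_PER_CONTROLLER * CORES_PER_CPU  # 32-core MPU per controller
cores_per_system = CONTROLLERS * cores_per_controller       # 64 cores running one SVOS image

print(cores_per_controller, cores_per_system)  # 32 64
```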


Figure 1. The VSP E1090 offers flexible configuration options with industry-leading performance and 99.9999% availability

The VSP E1090 Controllers

Figure 1 presents a block diagram of the VSP E1090 dual-controller system. Each interface board (CHB, DKBN) is connected to a controller using 8 x PCIe Gen 3 lanes, which means 16 GB/s of available bandwidth (8 GB/s send and 8 GB/s receive). The configuration in Figure 1 includes two pairs of NVMe disk adapters (DKBNs) that support 1-2 NVMe drive boxes (DBNs) and up to 48 NVMe SSDs. Each controller in this configuration has up to 96 GB/s of theoretical front-end bandwidth (if six CHBs per controller were configured) and 32 GB/s of back-end bandwidth (two DKBNs). An alternative configuration (not pictured) doubles the capability of the NVMe back end, with four DKBNs per controller supporting 3-4 DBNs and up to 96 NVMe SSDs. The latter configuration would have 64 GB/s of back-end bandwidth per controller and up to 64 GB/s of front-end bandwidth per controller (if 4 CHBs per controller were installed).
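
These bandwidth figures follow directly from the lane counts; a short sketch using the common approximation of roughly 1 GB/s per PCIe Gen3 lane per direction reproduces them:

```python
# Approximate PCIe Gen3 bandwidth math behind the figures above.
# Gen3 runs at 8 GT/s with 128b/130b encoding, i.e. roughly 1 GB/s per lane per direction.
GEN3_GBPS_PER_LANE = 1.0  # per direction, rounded as in the text

def board_bandwidth_gbs(lanes: int) -> float:
    """Full-duplex bandwidth of one interface board in GB/s (send + receive)."""
    return 2 * lanes * GEN3_GBPS_PER_LANE

chb_or_dkbn = board_bandwidth_gbs(8)    # 16 GB/s per CHB or DKBN (8 send + 8 receive)
front_end = 6 * board_bandwidth_gbs(8)  # six CHBs -> 96 GB/s per controller
back_end = 2 * board_bandwidth_gbs(8)   # two DKBNs -> 32 GB/s per controller

print(chb_or_dkbn, front_end, back_end)  # 16.0 96.0 32.0
```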

Like previous Hitachi enterprise products, all VSP E1090 processors run a single SVOS image and share a global cache. Cache is distributed across individual controllers for fast, efficient, and balanced memory access. Although the VSP E1090 hardware and microcode would permit a variety of cache configurations, the only configuration offered is the maximum of 1 TiB. Therefore, all eight DIMM slots per controller are populated with 64 GB DDR4-2400 DIMMs, for a total of 153.6 GB/s of theoretical memory bandwidth per controller.
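
The memory bandwidth figure works out as follows (a sketch based on the DIMM count and speed above; DDR4 transfers 8 bytes per DIMM per transfer):

```python
# Theoretical DDR4-2400 memory bandwidth for one VSP E1090 controller.
DIMMS_PER_CONTROLLER = 8     # all eight DIMM slots populated
TRANSFERS_PER_SEC = 2400e6   # DDR4-2400: 2,400 million transfers per second
BYTES_PER_TRANSFER = 8       # 64-bit DIMM data bus

bandwidth_gbs = DIMMS_PER_CONTROLLER * TRANSFERS_PER_SEC * BYTES_PER_TRANSFER / 1e9
print(bandwidth_gbs)  # 153.6 GB/s per controller
```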

ADR (Adaptive Data Reduction) performance was improved up to 2X by the addition of two Compression Accelerator Modules per controller, labeled “ACLF” in Figure 1. In GPSE testing, the compression accelerator improved ADR performance by as much as 2X while also boosting capacity savings. The Compression Accelerator Module allows the CPU to offload the work of compression to a Hitachi-designed ASIC (application-specific integrated circuit). The ASIC uses an efficient compression algorithm optimized for implementation in specialized hardware. The compression accelerator operates on data in cache using direct memory access (DMA); it does not require copy operations and can perform work with very low latency. As shown in Figure 2, the Compression Accelerator Module is connected to the controller using eight PCI Express Gen3 lanes. Within the accelerator module, a PCIe switch connects four lanes to each of the module’s two ASICs. The compression accelerators occupy unused space in the fan modules (two per controller), so each controller gets four compression ASICs.
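
The per-ASIC connectivity in Figure 2 works out as follows (a back-of-the-envelope sketch using the same rough 1 GB/s-per-lane Gen3 approximation; not vendor-published numbers):

```python
# Rough connectivity math for the Compression Accelerator Modules described above.
MODULES_PER_CONTROLLER = 2
ASICS_PER_MODULE = 2
LANES_TO_MODULE = 8       # PCIe Gen3 x8 link from the controller to each module
LANES_PER_ASIC = 4        # the module's internal PCIe switch gives each ASIC a x4 link
GEN3_GBPS_PER_LANE = 1.0  # approximate, per direction

asics_per_controller = MODULES_PER_CONTROLLER * ASICS_PER_MODULE  # 4 compression ASICs
per_module_gbs = LANES_TO_MODULE * GEN3_GBPS_PER_LANE             # ~8 GB/s each direction
per_asic_gbs = LANES_PER_ASIC * GEN3_GBPS_PER_LANE                # ~4 GB/s each direction

print(asics_per_controller, per_module_gbs, per_asic_gbs)  # 4 8.0 4.0
```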

Figure 2. Compression Accelerator Module Block Diagram

Hardware Components

While the VSP E1090 has a new and faster controller board, other basic hardware components are shared with the VSP 5000 or VSP G900. CHBs are shared with the VSP G900: up to six four-port 8/16/32 Gb FC or six two-port 10 Gb iSCSI CHBs per controller can be installed. (Protocol types must be installed symmetrically between controller 1 and controller 2.) For details on CHBs, see the VSP Gxx0 and Fxx0 Architecture and Concepts Guide. DKBN adapters for the all-NVMe back end are shared with the VSP 5000, as are the NVMe SSDs, which are available in five capacities (1.9 TB, 3.8 TB, 7.6 TB, 15 TB, and 30 TB). The NVMe drive box (DBN) is also shared with the VSP 5000. However, unlike the VSP 5000, which has strict rules about Parity Group configuration, the VSP E1090 DBN can be ordered in quantities as small as a single tray. The E1090 also includes a new option for a SAS back end, which shares the same architecture and components as the VSP G900.
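
For planning purposes, the maximum port counts and raw NVMe capacity implied by these component limits are easy to derive (a sketch; 30 TB is the nominal capacity of the largest SSD quoted above):

```python
# Maximum front-end ports (without the I/O expansion module) and raw NVMe capacity.
CONTROLLERS = 2
CHBS_PER_CONTROLLER = 6   # maximum without the I/O expansion module
FC_PORTS_PER_CHB = 4      # four-port 8/16/32 Gb FC CHB
ISCSI_PORTS_PER_CHB = 2   # two-port 10 Gb iSCSI CHB

max_fc_ports = CONTROLLERS * CHBS_PER_CONTROLLER * FC_PORTS_PER_CHB        # 48
max_iscsi_ports = CONTROLLERS * CHBS_PER_CONTROLLER * ISCSI_PORTS_PER_CHB  # 24

MAX_NVME_SSDS = 96
raw_capacity_tb = MAX_NVME_SSDS * 30  # 2,880 TB raw with 30 TB SSDs

print(max_fc_ports, max_iscsi_ports, raw_capacity_tb)  # 48 24 2880
```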

Front End Configuration

VSP E1090 FC ports run in universal (also called bi-directional) mode. A bi-directional port can simultaneously function as a target (for host I/O or replication) and an initiator (for external storage or replication), with each function having a queue depth of 1,024. The highest-performing VSP E1090 front-end configuration uses “100% straight” access, in which LUNs are always accessed on a CHB port connected to the controller that owns the LUN. Addressing a LUN on the non-owning controller (known as “front-end cross” I/O) adds a small amount of overhead to each command. However, our testing shows that front-end cross I/O does not have a significant performance impact under normal operating conditions (up to about 70% MP busy). Configuring specifically to avoid front-end cross I/O is not necessary unless the customer requires the highest possible levels of performance.
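
To illustrate what “100% straight” access means in practice, here is a simplified sketch (the port names and LUN ownership are hypothetical examples, not an actual host or multipath configuration) that prefers ports on the owning controller:

```python
# Simplified illustration of "straight" vs. "cross" front-end access.
# Port-to-controller mapping and LUN ownership below are hypothetical examples.
PORT_TO_CONTROLLER = {"CL1-A": 1, "CL1-B": 1, "CL2-A": 2, "CL2-B": 2}
LUN_OWNER = {"lun_001": 1, "lun_002": 2}  # controller that owns each LUN

def preferred_ports(lun: str) -> list[str]:
    """Return ports on the owning controller ("straight" access); other ports
    still work but add a small amount of cross-controller overhead per command."""
    owner = LUN_OWNER[lun]
    return [port for port, ctl in PORT_TO_CONTROLLER.items() if ctl == owner]

print(preferred_ports("lun_002"))  # ['CL2-A', 'CL2-B']
```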


Figure 3. VSP E1090 Universal Port Functionality

A front-end I/O expansion module (a component shared with the VSP F900) is also available for the VSP E1090. As shown in Figure 4, two CHB slots per controller can be used to connect to as many as four CHBs per controller in the expansion module. With the expansion module in place, a diskless VSP E1090 could present up to 80 FC ports or 40 iSCSI ports per system. However, note that the eight CHB slots in the expansion module must share the PCIe bandwidth of the four slots to which the expansion module is connected, which may limit throughput for large-block workloads. See the VSP Gxx0 and Fxx0 Architecture and Concepts Guide for more detail on the I/O expansion module.
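
The port counts and the 2:1 oversubscription trade-off work out as follows (a sketch; the eight-slot-per-controller figure is inferred from the configurations described around Figures 1 and 5, with a diskless system assumed):

```python
# Port counts and PCIe oversubscription with the I/O expansion module installed.
CONTROLLERS = 2
SLOTS_PER_CONTROLLER = 8  # multi-purpose CHB/DKBN slots; all usable for CHBs when diskless
EXPANSION_LINK_SLOTS = 2  # controller slots used to connect the expansion module
EXPANSION_CHBS = 4        # CHBs per controller housed in the expansion module
FC_PORTS_PER_CHB, ISCSI_PORTS_PER_CHB = 4, 2

chbs_per_controller = (SLOTS_PER_CONTROLLER - EXPANSION_LINK_SLOTS) + EXPANSION_CHBS  # 10
max_fc_ports = CONTROLLERS * chbs_per_controller * FC_PORTS_PER_CHB                   # 80
max_iscsi_ports = CONTROLLERS * chbs_per_controller * ISCSI_PORTS_PER_CHB             # 40
oversubscription = EXPANSION_CHBS / EXPANSION_LINK_SLOTS                              # 2.0:1

print(chbs_per_controller, max_fc_ports, max_iscsi_ports, oversubscription)  # 10 80 40 2.0
```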

Figure 4. The I/O Expansion Module Permits Installation of Up to Ten CHBs Per Controller

Back End Configuration

The VSP E1090 has an all-NVMe back end, which keeps configuration simple and straightforward. Either two or four DKBNs per controller can be installed. As presented in Figure 5, either CHBs or DKBNs can be installed into slots 1-E/F and 2-E/F in each controller. A configuration with two DKBNs per controller can support one or two NVMe drive trays and up to 48 NVMe SSDs. With four DKBNs per controller, three or four drive trays can be connected, accommodating up to 96 SSDs (see Figure 6). Each DKBN has two ports that are connected to two different DBNs using 4-lane PCIe Gen3 copper cables, as shown in Figure 6. Each cable connection has 8 GB/s of PCIe bandwidth (4 GB/s send and 4 GB/s receive). Each DBN (drive tray) with four standard connections has 16 GB/s send and 16 GB/s receive of PCIe bandwidth. Within each DBN (Figure 7) are two PCIe switches, each of which is connected to two DKBNs using 4-lane PCIe cables. As shown in Figure 7, each NVMe SSD is connected to both PCIe switches using the DBN backplane. In summary, each NVMe SSD can be accessed over a point-to-point PCIe connection by two different DKBNs on each controller, for a total of four redundant back-end paths per drive.
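
The back-end bandwidth and path-redundancy numbers above can be reproduced with a short sketch (same rough 1 GB/s-per-lane Gen3 approximation):

```python
# Back-end cable bandwidth and per-drive path redundancy as described above.
LANES_PER_CABLE = 4       # 4-lane PCIe Gen3 copper cable
GEN3_GBPS_PER_LANE = 1.0  # approximate, per direction
CABLES_PER_DBN = 4        # each drive tray has four standard connections

cable_gbs_each_way = LANES_PER_CABLE * GEN3_GBPS_PER_LANE  # ~4 GB/s send, ~4 GB/s receive
dbn_gbs_each_way = CABLES_PER_DBN * cable_gbs_each_way     # ~16 GB/s send, ~16 GB/s receive

SWITCHES_PER_DBN = 2   # every SSD reaches both switches via the DBN backplane
DKBNS_PER_SWITCH = 2   # each switch is cabled to two DKBNs
paths_per_drive = SWITCHES_PER_DBN * DKBNS_PER_SWITCH  # 4 redundant paths per SSD

print(cable_gbs_each_way, dbn_gbs_each_way, paths_per_drive)  # 4.0 16.0 4
```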

Figure 5. Multi-Purpose Slots Permit Installation of Two or Four DKBN Pairs

Figure 6. Connection Diagram of the Maximum Back End Configuration

Figure 7. DBN Block Diagram

In keeping with its positioning as a midsized enterprise array, the VSP E1090 offers flexible Parity Group configuration. Table 1 shows the supported Parity Group configurations, which can be placed on any combination of one to four drive trays.

Table 1. Supported Parity Group Configurations 

Finally, encrypted DKBNs (eDKBNs) are optionally available for the VSP E1090. The eDKBNs offload the work of encryption to Field Programmable Gate Arrays (FPGAs), as shown in Figure 8. The FPGAs allow FIPS 140 level 2 encryption with little or no performance impact. The eDKBNs are also recommended for customers requiring the maximum non-ADR sequential read throughput of 40 GB/s (which is only available in configurations having at least three drive trays). The eDKBNs also optimize PCIe block transfers, requiring fewer DMA operations and improving non-ADR sequential read throughput.
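
The three-tray requirement for the 40 GB/s figure is consistent with the per-tray PCIe bandwidth worked out earlier (a sketch, not a vendor sizing rule):

```python
# Why 40 GB/s of sequential read throughput needs at least three drive trays:
# each DBN provides roughly 16 GB/s of PCIe read bandwidth toward the controllers.
import math

DBN_READ_GBS = 16.0
TARGET_GBS = 40.0

min_trays = math.ceil(TARGET_GBS / DBN_READ_GBS)  # 2 trays -> 32 GB/s < 40; 3 trays -> 48 GB/s
print(min_trays)  # 3
```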

Figure 8. eDKBN Block Diagram

Performance and Resiliency Enhancements

Significant enhancements in the VSP E1090 include:

 

  • Upgraded controllers with 14% more processing power than VSP E990 and 53% more processing power than VSP F900.
  • Significantly improved ADR performance through Compression Accelerator Modules (ACLF). See: E1090 ADR Performance using NVMe SSDs and E1090 ADR Performance using SAS SSDs.
  • An 80% reduction in drive rebuild time compared to earlier midsized enterprise platforms.
  • Smaller access size for ADR metadata reduces overhead.
  • Support for NVMe allows extremely low latency with up to 5X higher cache miss IOPS per drive.

 

We’ve briefly reviewed the highlights of the VSP E1090 architecture, including improvements in performance, scalability, and resiliency. For additional information, please visit the GPSE Resource Library.


#FlashStorage
#HitachiVirtualStoragePlatformVSP
