Today’s applications demand storage that performs across several dimensions at once: IOPS, bandwidth, and latency, all vital storage performance metrics. Data is pouring into enterprise systems like a storm, flooding capacity and challenging IT infrastructure to deliver that data to applications. Well-utilized resources mean faster access to more and more data, ideally at a lower cost, while underutilized resources mean paying for infrastructure that sits idle. While all-flash arrays (AFAs) are optimized for performance, they must also provide the scale of the cloud along with the simplicity and shareability of network-attached storage (NAS). Storage area network (SAN) block protocols, by contrast, are not shareable across servers, making them unsuitable for shared storage use cases.
Modern workloads mix a wide variety of application file formats, access protocols, and data structures, including shared directory structures and metadata-heavy I/O. Technical computing workloads such as artificial intelligence (AI) and machine learning (ML), genomics, and financial analytics generate both sequential access to large files and random access to small files across large data sets. Addressing these workload patterns requires on-demand performance at massive scale, and no single traditional storage design delivers it. With so many options, choosing the best storage solution for a specific application workload or environment can be challenging: some solutions are optimized for scale, others for performance. The answer has traditionally been to deploy numerous storage systems tied together with advanced data management methods.
So, how can we feed data-starved applications while extracting maximum value from a single solution for all types of workloads to meet our business needs? It’s a highly complex question with a simple answer: speed!
In this blog, we’ll go over a few of the challenges impacting our ability to achieve the business and underlying technical objectives mentioned previously, as well as share our observations from testing Hitachi Content Software for File, which can enable you to achieve your objectives.
- Storage bottleneck: Applications crave more data but are held back by a lack of storage performance. As a result, response times skyrocket, starving the application of data and, in shared storage systems, impacting other business applications along the way. Adding computing power is ineffective because legacy storage can’t scale to tens or hundreds of petabytes (PB) while maintaining high performance. On top of that, each application stage has unique computing, storage, and networking needs.
- Metadata bottlenecks: In a scale-out NAS environment, NFS suffers from metadata and inter-node communication bottlenecks, because access to any file must first be resolved through metadata operations across the cluster.
- Latency issues: In high-performance computing applications that demand the most data at the fastest speed, poor storage performance is generally caused by high I/O latency.
- Poor write performance: Writing large, sequential files over an NFS-mounted file system can cause a severe decrease in file transfer rate to the NFS server.
- Cluster node bottlenecks: Scale-out NAS systems solved the capacity-scaling problem but introduced a more significant metadata performance problem. As a scale-out NAS system adds nodes, inter-node cluster communication grows steeply with node count, and any networking issue between these nodes can easily drag down system response time.
- Rogue clients and noisy neighbor issues: A common performance complaint involves hanging file locks by the network lock managers, which can cause applications to slow down or stop. A noisy neighbor is a cloud computing infrastructure co-tenant that monopolizes bandwidth, disk I/O, CPU, and other resources, negatively impacting the cloud performance of other users.
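To make the metadata-bottleneck point above concrete, here is a minimal Python micro-benchmark sketch contrasting a metadata-heavy pattern (many small files, each paying its own create/open/close) with a sequential pattern that streams the same bytes into one file. The file counts and sizes are illustrative assumptions; on most file systems, and especially over NFS, the first pattern spends far more of its time in metadata operations.

```python
import os
import tempfile
import time

def write_many_small(root: str, count: int = 500, size: int = 4096) -> float:
    """Metadata-heavy pattern: many small files, one create/open/close each."""
    payload = b"x" * size
    start = time.perf_counter()
    for i in range(count):
        with open(os.path.join(root, f"f{i}.dat"), "wb") as f:
            f.write(payload)
    return time.perf_counter() - start

def write_one_large(root: str, count: int = 500, size: int = 4096) -> float:
    """Sequential pattern: the same total bytes streamed into a single file."""
    payload = b"x" * size
    start = time.perf_counter()
    with open(os.path.join(root, "big.dat"), "wb") as f:
        for _ in range(count):
            f.write(payload)
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as d:
    t_small = write_many_small(d)
    t_large = write_one_large(d)
    print(f"many small files: {t_small:.4f}s, one large file: {t_large:.4f}s")
```

Running this against a local disk versus an NFS mount makes the gap between the two patterns much more visible, since every small-file create adds a metadata round trip to the server.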
Hitachi Content Software for File (HCSF) is the way.
Hitachi Content Software for File is a high-performance, software-defined, distributed parallel filesystem storage solution that provides customers with the following:
- Highest performance for different workload profiles – ideal for mixed small and large file workloads.
- Is an inherently shareable platform – all clients share the same filesystems, so any file written by any client is immediately available to any client reading the data. In technical terms, this means that Content Software for File is a strongly consistent, POSIX-compliant system.
- Distributed capabilities – the Hitachi Content Software for File system is formed as a cluster of multiple backends, each providing services concurrently.
- Has a scalable architecture – performance of the Hitachi Content Software for File system scales linearly with cluster size. Consequently, a cluster of size x delivers a certain level of performance, while doubling the cluster to size 2x delivers double the performance. This applies to both data and metadata.
- Provides strong security options – keeps data safe from threats and rogue actors with encryption and authentication.
- Has cloud backup capabilities – push backups straight to a private or public cloud for long-term retention.
- With a scalability architecture more analogous to object stores than to NAS systems, a single file system can support trillions of files and billions of directories, and directories can grow without experiencing any performance degradation.
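The linear-scaling claim above can be sketched as simple arithmetic. This is an illustrative model, not measured HCSF data; the per-backend figure is a made-up assumption used only to show the shape of the relationship.

```python
PER_NODE_GBPS = 10.0  # hypothetical per-backend throughput; not an HCSF spec

def cluster_throughput(nodes: int, per_node: float = PER_NODE_GBPS) -> float:
    """Linear scaling: aggregate throughput grows in proportion to cluster size."""
    return nodes * per_node

base = cluster_throughput(8)      # cluster of size x
doubled = cluster_throughput(16)  # cluster of size 2x
print(base, doubled)
```

The same proportionality applies to metadata operations per second, which is what distinguishes this design from scale-out NAS, where inter-node coordination erodes the gains from added nodes.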
So, how does Hitachi Content Software for File work?
- HCSF is a fully distributed parallel filesystem-based solution that uses x86 servers and common Ethernet network infrastructures to create a high-performance, shared storage pool.
- The HCSF software runs as a real-time operating system (RTOS) in user space and is deployed as containerized microservices.
- HCSF uses the performance-optimized Data Plane Development Kit (DPDK) for networking rather than standard kernel-based TCP/IP services. DPDK delivers operations with extremely low latency and high throughput. Low latency is achieved by bypassing the kernel and sending and receiving packets directly from the NIC. High throughput is achieved because multiple cores in the same host work in parallel without a common bottleneck.
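The benefit of DPDK’s poll-mode, burst-receive model can be illustrated with a toy calculation: with kernel TCP/IP, each packet typically pays a fixed per-call overhead (interrupt, syscall, context switch), while a poll-mode driver pulls packets from the NIC in bursts, amortizing that overhead across the burst. The cost figures below are illustrative assumptions, not DPDK measurements.

```python
import math

PER_CALL_OVERHEAD_US = 2.0   # assumed fixed cost per receive call (syscall/interrupt)
PER_PACKET_COST_US = 0.1     # assumed per-packet processing cost

def receive_cost_us(packets: int, burst_size: int) -> float:
    """Total cost: one fixed overhead per receive call plus per-packet work."""
    calls = math.ceil(packets / burst_size)
    return calls * PER_CALL_OVERHEAD_US + packets * PER_PACKET_COST_US

one_at_a_time = receive_cost_us(1_000_000, burst_size=1)
bursts_of_32 = receive_cost_us(1_000_000, burst_size=32)  # DPDK-style burst receive
print(one_at_a_time, bursts_of_32)
```

With these assumed numbers, burst receive cuts total cost by more than 10x, which is why kernel bypass plus batching is the standard recipe for low-latency, high-throughput networking stacks.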
So, how much can Hitachi Content Software for File deliver?
Let’s get right to the point on this topic. In our lab test environments, HCSF has delivered the following industry-leading results for different types of workloads:
These were achieved using the HCSF v4.0 release on an on-premises 8-node cluster.
HCSF v4.0 Testbed:
The test environment included 8x HCSF20220 servers, each with an AMD EPYC 7713P 64-core processor and 512GB of memory. Each backend/cluster node server has 20x 3.84TB KIOXIA CM6 Series PCIe 4.0 NVMe SSDs (KCM6XRUL3T84) and a 2x 200GbE, dual-port, ConnectX-6 VPI adapter card [MT28908 family]. These were connected to 16x load-generating clients, all on a 100GbE network using 2x Cisco Nexus N9K-C93600CD-GX Ethernet switches.

HCSF was configured in the Multiple Backend Containers (MBC) layout (FRONTEND, COMPUTE, and DRIVES) to use 58 cores, leaving six for running the OS and protocol services (SMB, NFS, S3) on the same backend/cluster node server. Various tuning parameters were applied across the BIOS, Mellanox NIC, OFED v5.6, and RHEL 8.6 as an operating system, for a total cluster bandwidth of 400GB/s.
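The quoted 400GB/s of total cluster bandwidth can be sanity-checked from the NIC configuration alone: 8 backends, each with a dual-port 200GbE adapter.

```python
NODES = 8
PORTS_PER_NODE = 2
PORT_SPEED_GBIT = 200
BITS_PER_BYTE = 8

total_gbit = NODES * PORTS_PER_NODE * PORT_SPEED_GBIT  # 3,200 Gbit/s aggregate
total_gbyte = total_gbit / BITS_PER_BYTE               # 400 GB/s
print(total_gbyte)
```

This ignores Ethernet framing and protocol overheads, so it is a ceiling rather than an achievable figure, which is what makes the measured results below notable.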
Multiple Backend Containers (MBC) and the impact on performance
The HCSF v4.0 code also brought significant improvements: the constraint of a single ‘default’ container, capped at 19 CPU cores, went away with the introduction of Multiple Backend Containers (MBC). MBC allows the use of more backend host CPU resources (as many as 58-63 of the available 64 CPUs) by enabling the deployment of core-specific DRIVES, COMPUTE, and optional FRONTEND containers. The Hitachi GPSE Center for Performance and Innovation used up to 58 CPU cores per HCSF backend host: 38 COMPUTE cores (2 COMPUTE containers, 19 COMPUTE cores per container) plus 20 DRIVES cores for a 1:1 ratio with the SSDs (2 DRIVES containers, 10 DRIVES cores per container) plus 0 FRONTEND cores (optional for backend hosts in v4.x), alongside 8-12 frontend cores per HCSF client for optimum performance.

The HCSF testbed delivered a peak sequential read throughput of 354 GB/s, an 89% penetration of the available 400 GB/s bandwidth, and 108 GB/s of throughput for sequential writes. Random reads delivered a peak of 15.4 million IOPS with a response time of 0.60ms, while random writes peaked at 3.4 million IOPS with a response time of 0.89ms.
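The 89% bandwidth-penetration figure follows directly from the measured numbers:

```python
peak_read_gbps = 354.0   # measured peak sequential-read throughput (GB/s)
available_gbps = 400.0   # total cluster bandwidth (GB/s)

penetration = peak_read_gbps / available_gbps * 100  # percent of line rate used
print(penetration)  # 88.5, reported as ~89%
```

Sustaining close to 90% of raw network line rate through a POSIX file system is the headline result here, since protocol and software overhead normally consumes a much larger share.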
- Can scale linearly for random IOPS and achieve line-rate sequential bandwidth regardless of the cluster size.
- Ultra-low latency can be achieved for the random IOPS.
Disclaimer: HCSF supports SMB, NFS, and S3 (object). However, the performance numbers were obtained with, and are restricted to, HCSF clients (as load generators) that mount the HCSF filesystem (wekafs) and generate load using Oracle Vdbench 50407.
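For context, a Vdbench file-system test of this kind is driven by a parameter file along these lines. The anchor path, file counts, and sizes below are illustrative placeholders, not the actual test definitions used in the benchmark:

```text
# fsd = file-system anchor, fwd = workload, rd = run definition (illustrative)
fsd=fsd1,anchor=/mnt/wekafs/bench,depth=2,width=10,files=1000,size=1m
fwd=fwd1,fsd=fsd1,operation=read,xfersize=1m,fileio=sequential,threads=64
rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=300,interval=1
```

Swapping `fileio=sequential` for `fileio=random` with a small `xfersize` is how the same harness produces the random-IOPS numbers reported above.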
To learn more about HCSF and its performance capabilities, visit the Hitachi Vantara website.
#HitachiNetworkAttachedStorageNAS #SoftwareDefinedStorage #DistributedParallelFileSystem #StorageSolution #AIStorage #HighPerformanceComputingSolutions #ParallelFileSystems #FileBasedStorage #DistributedFileSystem