Champions Corner

Introduction This blog provides insights into the compatibility of the Hitachi Virtual Storage Platform One Block 20 (VSP One Block 20) midrange storage systems with AWS EC2 instances running Red Hat Enterprise Linux (RHEL) 9.4. It also offers users step-by-step instructions on configuring the NVMe/TCP I/O timeout settings on a RHEL 9.4 AWS EC2 instance to mitigate potential I/O process hang issues arising from storage hardware failures. What is an I/O process hang issue? An I/O process hang occurs when a process in a Linux system enters the uninterruptible sleep state, commonly referred to as the ...
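The preview above mentions tuning the NVMe I/O timeout on RHEL. The full post is truncated here, so as a hedged illustration only: one common Linux mechanism is the `nvme_core` module's `io_timeout` parameter, typically persisted through a file under `/etc/modprobe.d/`. The sketch below renders such a configuration line; the 300-second value is an illustrative placeholder, not the post's recommendation.

```python
def nvme_io_timeout_conf(timeout_seconds: int) -> str:
    """Render a modprobe.d line that sets the Linux nvme_core I/O timeout.

    'io_timeout' is a real nvme_core module parameter; the value used
    here is illustrative, not a recommendation from the truncated post.
    """
    if timeout_seconds <= 0:
        raise ValueError("timeout must be positive")
    return f"options nvme_core io_timeout={timeout_seconds}"

# Hypothetical example: a 300-second timeout line for /etc/modprobe.d/
print(nvme_io_timeout_conf(300))  # options nvme_core io_timeout=300
```

A line like this takes effect after the initramfs is regenerated and the host rebooted; consult the full post for the value appropriate to VSP One Block 20 behind NVMe/TCP.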
Automating SAP HANA TDI Storage Provisioning with Ansible and Hitachi Vantara VSP One Block Storage In today’s fast-paced IT landscape, agility and reliability are critical, especially when it comes to enterprise applications like SAP HANA. Enterprises running SAP HANA in a Tailored Datacenter Integration (TDI) model demand flexibility, performance, and automation. With the Hitachi Vantara VSP One Block Storage modules for Red Hat Ansible, organizations can streamline their SAP HANA TDI storage provisioning process like never before. In this post, we’ll explore how to automate SAP HANA TDI storage provisioning using Hitachi Vantara VSP ...
Integrating advanced AI capabilities is paramount for organizations striving to stay ahead. Hitachi iQ with NVIDIA DGX™ H100, powered by Hitachi Content Software for File and Hitachi Content Platform (HCP), offers a groundbreaking solution that meets these stringent demands. Hitachi Content Software for File is now certified for NVIDIA DGX BasePOD™, ensuring optimal performance and reliability for AI and data analytics workloads. Challenges Generative AI models, especially large ones, are often held back by storage limitations that prevent them from unleashing the true power of GPUs. High-performance storage with ultra-low latency is a crucial ...
Introduction This blog shows how to implement Veritas InfoScale 8.0.2 on a virtual machine hosted on an ESXi server, with remote replication provided by the Global-active device (GAD) feature, which enables you to create and maintain synchronous remote copies of data volumes between primary and secondary storage systems. Block Diagram The following diagrams show the test environment of Veritas InfoScale 8.0.2 on a virtual machine with GAD. Figure 1: Test environment Figure 2: Detailed test environment Hardware Requirements The following table lists the hardware specifications used in the validation: For more ...
Introduction This blog shows the process of determining whether the virtual machine (VM) hardware version is compatible with various VMware products, including VMware ESXi, VMware Fusion, VMware Workstation, and so on. In addition, it discusses the potential issues users may encounter if the VM hardware version is incompatible. The primary focus area is VMware hardware version compatibility in relation to VMware ESXi product versions. What is a Type 1 Hypervisor? A Type 1 hypervisor, also known as a bare-metal hypervisor, is virtualization software installed directly on the computer hardware without requiring an underlying OS. It manages ...
Introduction Dynamic Drive Protection (DDP) provides an interleaved DDP group that can be expanded on a per-drive basis. By distributing data across a number of drives equal to or greater than the RAID width plus one, DDP achieves a shorter rebuild time. Creating DDP Configuration The available capacity of a DDP group is calculated as follows: Drive capacity × (Number of drives in a DDP group – Number of assigned spare drives) For the RAID level, select either 6D+2P or 14D+2P by considering the following: 14D+2P ...
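The DDP capacity formula above can be expressed as a short sketch. The drive size and counts below are hypothetical illustrations, not values from the post:

```python
def ddp_available_capacity(drive_capacity_tb: float,
                           drives_in_group: int,
                           spare_drives: int) -> float:
    """Available capacity of a DDP group, per the formula in the post:
    drive capacity x (drives in the DDP group - assigned spare drives)."""
    if drives_in_group <= spare_drives:
        raise ValueError("group must contain more drives than spares")
    return drive_capacity_tb * (drives_in_group - spare_drives)

# Hypothetical example: a 17-drive group of 3.84 TB drives with 1 spare
print(round(ddp_available_capacity(3.84, 17, 1), 2))  # 61.44
```

Note that this is raw available capacity before RAID parity overhead; the 6D+2P or 14D+2P choice mentioned above determines how much of it is usable for data.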
Introduction This blog shows how to implement Microsoft Hyper-V Failover Cluster over Fibre Channel (FC). All procedures outlined are conducted purely within a Lab testing environment. Hyper-V is a Microsoft hardware virtualization solution that enables the creation and operation of virtual machines (VMs). Each VM functions as a standalone system capable of running its own operating system and applications, providing enhanced flexibility, cost savings, and efficient use of hardware resources. Hyper-V operates each VM in its own isolated environment, enabling multiple VMs to run concurrently on the same hardware. This setup prevents crashes from impacting ...
Introduction This blog shows how to non-disruptively migrate 3 Data Centre (3DC) setup pairs from a Hitachi Virtual Storage Platform E590 (VSP E590) storage system, which uses a VSP G600 storage system as its remote storage system, to a VSP One Block 28 (VSP One B28) storage system with a VSP F900 as the remote storage system. Environment The migration environment consists of: · Source Primary: VSP E590 storage system with microcode 93-07-23-40/01 · Source Secondary: VSP E590 storage system with microcode 93-07-23-40/01 · Source Remote: VSP G600 storage system with microcode 83-05-51-40/00 ...
Datacenter SAN networks have evolved drastically over the years. Fibre Channel (FC) was introduced in 1994 specifically to manage SCSI protocol mapping and simplify the architecture of storage transport at the time. By the early 2000s, other transport options started to appear, such as iSCSI for traditional hard disks and FICON for mainframes, and by the end of the decade transport options such as FCoE, which unified FC and Ethernet traffic, were adopted. As faster flash storage started to appear in the storage world, many technology adopters were not getting the envisioned performance due to protocol bottlenecks. By 2013, Non-Volatile Memory Express ...
Reduce CO2 emissions, go green, save the planet: these are just some of the buzzwords we hear within the datacenter industry, but how many vendors can you say are actually tackling the problem today? Data center sustainability is the practice of making data centers more energy efficient and environmentally friendly. It's important because data centers consume a lot of energy, and this demand is growing. Strategies to tackle sustainability include the use of renewable energy, reducing waste, and utilizing energy-efficient technologies. Hitachi Vantara, as a premier storage vendor, tackles this issue head-on. With the release of our VSP One Block storage ...
Introduction This blog describes the FC-NVMe HPP path auto-discovery delay observed in VMware ESXi 8.0 Update 1 during link-up events. ESXi 8.0 Update 1 is a General Availability release. In situations involving FC-NVMe, a delay is observed in path recovery following a namespace path failure. Block Diagram The following diagram shows the FC-NVMe block diagram. Figure: FC-NVMe block diagram What is HPP? The High-Performance Plug-in (HPP) is multipathing software from VMware for storage devices used in ESXi hosts. The default multipathing package used by an ESXi host is the Native Multipathing Plug-in ...
Introduction Data Availability ensures reliable access to information, a critical requirement from any storage provider. This concept involves the infrastructure, systems, processes, and policies organizations use to keep data accessible and usable for authorized users. As data volume and complexity grow, organizations allocate more resources to maintain reliability. However, in an NVMe/TCP storage environment, resource scaling has practical limits. This blog shows a scenario where multiple NVMe namespaces are allocated to an ESXi host through multiple NVMe subsystems and controllers. Such configurations can lead to resource exhaustion during error ...
Introduction Data is a critical asset, and as data volumes increase, additional storage space becomes essential. By combining disks of varying capacities, such as 1TB, 3TB, and 5TB, you can create a storage pool with a total capacity of 9TB. On this pool, you can allocate multiple virtual chunks of different sizes for writing data. These chunks reserve their full capacity, even if no data resides on them. This concept is known as thick provisioning. Dynamic Provisioning Pool (DP-Pool), or thin provisioning, addresses this limitation by dynamically allocating storage capacity based on actual data usage. Thick Provisioning versus ...
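The thick-versus-thin contrast above can be sketched with a toy model. This is an illustration of the concept only, not Hitachi's DP-Pool implementation; the disk sizes are the example values from the excerpt and the volume sizes are hypothetical:

```python
class StoragePool:
    """Toy model contrasting thick and thin provisioning on a pool
    built from 1 TB, 3 TB, and 5 TB disks, as in the post."""

    def __init__(self, disk_sizes_tb):
        self.capacity = sum(disk_sizes_tb)  # total pool capacity
        self.reserved = 0.0                 # thick: capacity claimed up front
        self.consumed = 0.0                 # thin: only data actually written

    def thick_volume(self, size_tb):
        # Thick provisioning reserves the full size even if empty.
        self.reserved += size_tb

    def thin_volume(self, size_tb, written_tb):
        # Thin provisioning consumes only what has been written so far.
        self.consumed += written_tb

pool = StoragePool([1, 3, 5])
pool.thick_volume(4)      # hypothetical 4 TB thick volume, empty
pool.thin_volume(4, 0.5)  # hypothetical 4 TB thin volume, 0.5 TB written
print(pool.capacity, pool.reserved, pool.consumed)  # 9 4.0 0.5
```

The asymmetry in the last line is the point: the empty thick volume has already removed 4 TB from the 9 TB pool, while the equally sized thin volume has taken only the 0.5 TB actually in use.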
Introduction AWS aims to deliver 99.99% uptime in cloud environments, using commercially reasonable efforts to achieve the availability target within each monthly billing cycle. Even at this high level of service, the target still permits downtime of approximately 8.6 seconds per day, or just over one minute per week, potentially causing data loss or application disruptions. Ensuring data resilience and high availability is essential, particularly for critical applications. The Reserve Rebuild capabilities of Hitachi Virtual Storage Platform One Software-defined Storage Block (VSP One SDS Block) enhance fault tolerance during Elastic Block Store (EBS) failures. ...
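The downtime figures quoted above follow directly from the 99.99% target; a quick sketch of the arithmetic:

```python
def allowed_downtime_seconds(availability_pct: float, period_seconds: int) -> float:
    """Downtime budget implied by an availability target over a period."""
    return period_seconds * (1 - availability_pct / 100)

DAY = 86_400          # seconds in a day
WEEK = 7 * DAY        # seconds in a week

print(round(allowed_downtime_seconds(99.99, DAY), 2))   # 8.64
print(round(allowed_downtime_seconds(99.99, WEEK), 1))  # 60.5
```

That is roughly 8.6 seconds per day and just over a minute per week, matching the figures cited above.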
Introduction This blog describes the protection of backup data using the Write Once Read Many (WORM) feature on Hitachi Virtual Storage Platform One File (VSP One File) and Hitachi Data Protection Suite (HDPS). VSP One File supports WORM file systems, widely used to store critical data in an unalterable state for a specific duration. HDPS provides a WORM storage lock option for both deduplicated and non-deduplicated data in disk libraries, offering data security at the hardware level. In this scenario, we validate ...
This blog provides the key considerations and best practices for setting up global-active device effectively. Introduction Global-active device (GAD) is a data mirroring technology that enables high availability and disaster recovery for storage systems, allowing you to maintain synchronous copies of data at remote sites. To ensure optimal performance and resilience, it is crucial to follow best practices when configuring global-active device. What is Global-Active Device? Global-active device is a technology designed to ensure high availability ...
Introduction This blog introduces the essential APIs for managing Hitachi Virtual Storage Platform One Block 20 (VSP One B20) storage systems: Hitachi Ops Center API Configuration Manager REST API (CMREST), Hitachi Virtual Storage Platform One Block REST API (PFREST), and VSP One Block Administrator API. Users have access to a wide range of APIs for managing VSP One Block 20 storage, providing significant flexibility to customize their storage solutions. However, this increased flexibility can sometimes result in reduced clarity. This blog delves into the various APIs offered for VSP One Block 20 storage systems ...
0 comments
1 person likes this.
Author: @Malaya Acharya Introduction The Thin Image Advanced snapshot software enables you to rapidly create point-in-time copies of mission-critical information within the storage system or virtualized storage pool without impacting host service or performance levels. Compared to Thin Image, Thin Image Advanced significantly improves Split state performance and copy operation elapsed time. The enhancements result from a change in how data is written after a snapshot pair is split. Thin Image Advanced uses Data Reduction Shared (DRS) volumes with Adaptive Data Reduction (ADR) enabled. Because ADR stores data in 8 ...