Champions Corner

Introduction This blog shows how to implement a Microsoft Hyper-V Failover Cluster over Fibre Channel (FC). All procedures outlined are conducted purely within a lab testing environment. Hyper-V is a Microsoft hardware virtualization solution that enables the creation and operation of virtual machines (VMs). Each VM functions as a standalone system capable of running its own operating system and applications, providing enhanced flexibility, cost savings, and efficient use of hardware resources. Hyper-V runs each VM in its own isolated environment, enabling multiple VMs to run concurrently on the same hardware. This setup prevents crashes impacting ...
Introduction This blog shows how to non-disruptively migrate 3 Data Centre (3DC) setup pairs from a Hitachi Virtual Storage Platform E590 (VSP E590) storage system using a VSP G600 storage system as the remote storage system to a VSP One Block 28 (VSP One B28) storage system with a VSP F900 as the remote storage system. Environment The migration environment consists of:
- Source Primary: VSP E590 storage system with microcode 93-07-23-40/01
- Source Secondary: VSP E590 storage system with microcode 93-07-23-40/01
- Source Remote: VSP G600 storage system with microcode 83-05-51-40/00 ...
Datacenter SAN networks have evolved drastically over the years. Fibre Channel (FC) was introduced in 1994 specifically to manage SCSI protocol mapping and simplify the architecture of storage transport for the time. By the early 2000s, other transport options started to appear, such as iSCSI for traditional hard disks and FICON for mainframes, and by the end of the decade transports such as FCoE, which unified FC and Ethernet traffic, were adopted. As faster flash storage started to appear in the storage world, many technology adopters were not getting the envisioned performance due to protocol bottlenecks. By 2013, Non-Volatile Memory Express ...
Reduce CO2 emissions, go green, save the planet: these are just some of the buzzwords we hear within the datacenter industry, but how many vendors can you say are actually tackling the problem today? Data center sustainability is the practice of making data centers more energy efficient and environmentally friendly. It matters because data centers consume a lot of energy, and this demand is growing. Strategies to tackle sustainability include the use of renewable energy, reducing waste, and utilizing energy-efficient technologies. Hitachi Vantara, as a premier storage vendor, tackles this issue head on. With the release of our VSP One Block storage ...
Introduction This blog describes the FC-NVMe HPP path auto-discovery delay observed in VMware ESXi 8.0 Update 1 during link-up events. ESXi 8.0 Update 1 is a General Availability (GA) release. With FC-NVMe, a delay is observed in path recovery following a namespace path failure. Block Diagram The following figure shows the FC-NVMe block diagram. Figure: FC-NVMe block diagram What is HPP? The High-Performance Plug-in (HPP) is multipathing software from VMware for storage devices used in ESXi hosts. The default multipathing plug-in used by an ESXi host is the Native Multipathing Plug-in ...
Introduction Data Availability ensures reliable access to information, a critical requirement from any storage provider. This concept involves the infrastructure, systems, processes, and policies organizations use to keep data accessible and usable for authorized users. As data volume and complexity grow, organizations allocate more resources to maintain reliability. However, in an NVMe/TCP storage environment, resource scaling has practical limits. This blog shows a scenario where multiple NVMe namespaces are allocated to an ESXi host through multiple NVMe subsystems and controllers. Such configurations can lead to resource exhaustion during error ...
Introduction Data is a critical asset, and as data volumes increase, additional storage space becomes essential. By combining disks of varying capacities, such as 1TB, 3TB, and 5TB, you can create a storage pool with a total capacity of 9TB. On this pool, you can allocate multiple virtual chunks of different sizes for writing data. These chunks reserve their full capacity even if no data resides on them; this concept is known as thick provisioning. A Dynamic Provisioning Pool (DP-Pool), or thin provisioning, addresses this limitation by dynamically allocating storage capacity based on actual data usage. Thick Provisioning versus ...
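As a toy illustration of the difference described above, the following Python sketch contrasts up-front reservation (thick) with on-demand allocation (thin). All class and method names are hypothetical and are not part of any Hitachi product API.

```python
# Toy accounting model of thick vs. thin provisioning.
# Illustrative only; names are invented for this sketch.

class Pool:
    def __init__(self, disks_tb):
        self.capacity = sum(disks_tb)   # e.g. 1 + 3 + 5 = 9 TB
        self.used = 0.0

class ThickVolume:
    """Reserves its full size at creation time."""
    def __init__(self, pool, size_tb):
        if pool.used + size_tb > pool.capacity:
            raise ValueError("pool exhausted")
        pool.used += size_tb            # consumed even before any writes

class ThinVolume:
    """Consumes pool capacity only as data is written (DP-Pool style)."""
    def __init__(self, pool, size_tb):
        self.pool, self.size_tb, self.written = pool, size_tb, 0.0

    def write(self, tb):
        if self.written + tb > self.size_tb:
            raise ValueError("volume full")
        self.pool.used += tb            # allocated on demand
        self.written += tb

pool = Pool([1, 3, 5])                  # 9 TB pool from mixed disks
ThickVolume(pool, size_tb=2)            # 2 TB reserved immediately
thin = ThinVolume(pool, size_tb=6)      # 6 TB virtual, nothing consumed yet
thin.write(0.5)
print(pool.capacity, pool.used)         # capacity 9 TB; used 2 + 0.5 TB
```

The thin volume presents 6 TB to the host but draws from the pool only as writes arrive, which is exactly the over-allocation behavior a DP-Pool relies on.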
Introduction AWS aims to deliver 99.99% uptime in cloud environments, using commercially reasonable efforts to achieve the availability target within each monthly billing cycle. Despite this high level of service, downtime of approximately 8.6 seconds per day, or just over one minute per week, may still occur, potentially causing data loss or application disruptions. Ensuring data resilience and high availability is essential, particularly for critical applications. The Reserve Rebuild capabilities of Hitachi Virtual Storage Platform One Software-defined Storage Block (VSP One SDS Block) enhance fault tolerance during Elastic Block Store (EBS) failures. ...
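The downtime figures above follow directly from the availability percentage. A minimal Python sketch of the arithmetic (not an AWS tool; the function name is invented here):

```python
# Downtime budget implied by an availability SLA.
# Simple arithmetic sketch; not an official AWS calculation.

def downtime_seconds(availability, period_seconds):
    """Seconds of allowed downtime in a period at a given availability."""
    return (1.0 - availability) * period_seconds

DAY, WEEK = 86_400, 7 * 86_400
per_day = downtime_seconds(0.9999, DAY)    # ~8.64 s/day at 99.99%
per_week = downtime_seconds(0.9999, WEEK)  # ~60.5 s/week
print(f"{per_day:.2f} s/day, {per_week:.1f} s/week")
```

At "four nines," the unavailable fraction is 0.01% of the period, which is where the roughly 8.6 seconds per day comes from.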
Introduction This blog describes the protection of backup data using the Write Once Read Many (WORM) feature on Hitachi Virtual Storage Platform One File (VSP One File) and Hitachi Data Protection Suite (HDPS). VSP One File supports WORM file systems, widely used to store critical data in an unalterable state for a specific duration. HDPS provides a WORM storage lock option for both deduplicated and non-deduplicated data in disk libraries, offering data security at the hardware level. In this scenario, we validate ...
This blog provides the key considerations and best practices for setting up global-active device effectively. Introduction Global-active device (GAD) is a data mirroring technology that enables high availability and disaster recovery for storage systems, allowing you to maintain synchronous copies of data at remote sites. To ensure optimal performance and resilience, it is crucial to follow best practices when configuring global-active device. What is Global-Active Device? Global-active device is a technology designed to ensure high availability ...
Introduction This blog introduces the essential APIs for managing Hitachi Virtual Storage Platform One Block 20 (VSP One B20) storage systems:
- Hitachi Ops Center API Configuration Manager REST API (CMREST)
- Hitachi Virtual Storage Platform One Block REST API (PFREST)
- VSP One Block Administrator API
Users have access to a wide range of APIs for managing VSP One B20 storage, providing significant flexibility to customize their storage solutions. However, this increased flexibility can sometimes result in reduced clarity. This blog delves into the various APIs offered for VSP One B20 storage systems ...
Author: @Malaya Acharya Introduction The Thin Image Advanced snapshot software enables you to rapidly create point-in-time copies of mission-critical information within the storage system or virtualized storage pool without impacting host service or performance levels. Compared to Thin Image, Thin Image Advanced significantly improves Split-state performance and the elapsed time of copy operations. These enhancements result from a change in how data is written after a snapshot pair is split. Thin Image Advanced uses Data Reduction Shared (DRS) volumes with Adaptive Data Reduction (ADR) enabled. Because ADR stores data in 8 ...
Introduction Universal Replicator (UR) provides a solution for the recovery of processing operations when a data center is affected by a disaster situation. In a Universal Replicator implementation, a secondary storage system is located at a remote site from the primary storage system at the main data center, and the data on the primary volumes (P-VOLs) at the primary site is asynchronously copied to the secondary volumes (S-VOLs) at the remote site. Figure 1 shows an ...
In an era where businesses are defined by the services and capabilities provided to end customers, choosing a best-in-class storage system in conjunction with a premier compute system is key to business success. With multiple vendors, it becomes difficult to gauge which features and capabilities will actually meet the target ROI to not only sustain but grow the business. Cisco Validated Designs (CVDs) consist of systems and solutions that have been designed, tested, and documented to facilitate and improve customer deployments. That is where Hitachi and Cisco join forces to provide the Cisco and Hitachi Adaptive Solution (CHAS), which ...

ShadowImage Best Practices

This blog provides a brief introduction and a set of best practices for one of the core copy program products on the Hitachi Virtual Storage Platform (VSP): ShadowImage (SI). Introduction Hitachi ShadowImage is a local mirroring technology that you can use to create and maintain full copies of data volumes within a storage system. Using ShadowImage volume copies (for example, for backups, secondary host applications, data mining, and testing) allows you to continue working without stopping host application input/output (I/O) on the production volume. A typical configuration consists of a storage system, a host connected to the storage ...
Introduction This blog compares the performance of Hitachi Virtual Storage Platform One Block (VSP One Block) Administrator API and Hitachi Storage Advisor Embedded API across commonly used storage operations. The Hitachi Virtual Storage Platform E790 (VSP E790) storage system is a predecessor to the Hitachi Virtual Storage Platform One Block 28 (VSP One B28) storage system. We conducted a comparative analysis to measure the performance improvement in the newer VSP One B28 model. This analysis examined the key operations run on the VSP One Block Administrator API (VSP One B28) versus the Hitachi Storage Advisor Embedded API (VSP E790).   ...
Objective When a storage system is registered with the API Configuration Manager (CMREST) server, the default communication mode between the REST API server and the storage system is set to lanConnectionMode (out-of-band). However, the processing speed of the API Configuration Manager server is faster in fcConnectionMode (in-band). This blog shows how to change the communication mode of the API Configuration Manager (CMREST) server from lanConnectionMode to fcConnectionMode to achieve faster processing speeds. Environment
- A Hitachi Virtual Storage Platform 5000 series (VSP 5000 series) ...
Unleash the Power of Automation with VSP One Block Storage Modules for Red Hat Ansible Author: Liam Yu, Senior Product Solutions Marketing Manager, Integrated Systems at Hitachi Vantara Envision a world where creativity and collaboration reign supreme in the realm of artificial intelligence. This year, Red Hat Summit 2024 heralded a new era, one where the open-source ethos transforms generative AI. Imagine a landscape where proprietary models are relics of the past, and businesses flourish with a diverse array of AI models, as accessible and modifiable as a community garden. Red Hat foresees a future where AI isn’t just a tool ...
Introduction The data-at-rest encryption feature, called Encryption License Key, protects your sensitive data against breaches associated with storage media, such as loss or theft. The Encryption License Key feature provides controller-based encryption, along with the following benefits:
- Hardware-based Advanced Encryption Standard (AES) encryption using 256-bit keys in XTS mode of operation.
- Encryption can be applied to some or all supported internal drives.
- Each encrypted internal drive is protected with a unique Data Encryption Key (DEK).
- ...
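To make the "unique DEK per drive" idea concrete, here is a minimal standard-library Python sketch of per-drive key bookkeeping. It is purely illustrative: the class name and structure are invented for this example, and real controller-based encryption generates and protects keys in hardware, not in an in-memory dictionary.

```python
# Toy sketch: one unique 256-bit Data Encryption Key (DEK) per drive.
# Illustrative only; ToyKeyStore is a hypothetical name, not a product API.
import secrets

class ToyKeyStore:
    def __init__(self):
        self._deks = {}

    def dek_for(self, drive_id):
        """Return the drive's DEK, creating a fresh 256-bit key if absent."""
        if drive_id not in self._deks:
            self._deks[drive_id] = secrets.token_bytes(32)  # 32 bytes = 256 bits
        return self._deks[drive_id]

ks = ToyKeyStore()
k1, k2 = ks.dek_for("drive-0"), ks.dek_for("drive-1")
assert k1 != k2 and len(k1) == 32       # each drive gets its own 256-bit key
assert ks.dek_for("drive-0") == k1      # the key is stable per drive
```

Because every drive has its own DEK, destroying one drive's key renders only that drive's data unreadable, which is the property that makes per-drive crypto-erase possible.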