Champions Corner

Part of the BBVA financial services group, Garanti BBVA is Turkey’s second-largest private bank. Operating in a highly regulated market, Garanti BBVA puts huge emphasis on the reliability and availability of its services. From an IT perspective, this means that outages are not an option. Over several years, the bank has built an IT architecture designed to maximize availability and provide extremely high levels of resilience against disaster, but it was not always easy to manage. Today, the bank uses Hitachi Ops Center Administrator to unify the management of all its Virtual Storage Platform systems, including the VSP 5500, VSP 5100 and VSP E590 systems, and ...
Hitachi’s all-new enterprise storage product, the Virtual Storage Platform (VSP) 5600 series storage system, proves that it is best-in-class! Additionally, the VSP 5600 NVMe storage system broke the record for low latency, achieving 39 µs. Along with setting a record for low latency, the hardware design delivers eight nines of availability (99.999999%). In an FC/SCSI environment, the VSP 5600 SAS SSD storage system also achieved a leading rate of 33 million IOPS. When this latency is compared to competitive offerings, Hitachi reigns:
IBM FlashSystem 9500 = under 50 µs latency
Dell EMC PowerMax = under 100 µs latency
NetApp AFF A900 ...
Hitachi Vantara has been selected as one of BMW Group’s key strategic partners in enterprise cloud storage and data management for the next six years. Part of the EverFlex from Hitachi portfolio of XaaS offerings, the latest EverFlex Storage as a Service solutions will automate and simplify management of BMW Group’s IT infrastructure to help maintain the reliability of its mission-critical operations. New data management capabilities will help advance BMW Group’s hybrid cloud journey and its role as a leader in innovation and sustainability. To learn more: https://www.hitachivantara.com/en-us/news/in-the-press/2022/gl220426.html #EverFlex ...
Introduction
The recently upgraded Hitachi enterprise Virtual Storage Platform (VSP) 5000 series features industry-leading performance and availability. The VSP 5000 series scales up, scales out, and scales deep. It features a single, flash-optimized Storage Virtualization Operating System (SVOS) image running on as many as 240 processor cores, sharing a global cache of up to 6 TiB. The VSP 5000 controller blades are linked together by a highly reliable PCIe switched network, featuring interconnect engines with hardware-assisted Direct Memory Access (DMA). In addition, the VSP 5000 cache architecture has been streamlined, permitting read response times ...
Hitachi’s all-new enterprise storage product, the Virtual Storage Platform (VSP) 5600 storage system, outperformed the previous peak of 21M IOPS (which was achieved with the VSP 5500 storage system) and yet again logged best-in-class performance! Our lab tests confirm its superiority, with peak performance hitting a massive 33 million IOPS. This measurement was achieved using a 100% Random 8KB Read Cache Hit workload generated by the industry-standard benchmarking tool, Vdbench. When this performance is compared to competitive offerings, Hitachi reigns with more than 2x the throughput:
IBM FlashSystem 9500R = up to 16M IOPS (4K block size)
Dell EMC Power ...
Contents
- Executive Summary
- Engineered for Performance
- The VSP E1090 Controllers
- Hardware Components
- Front End Configuration
- Back End Configuration
- Performance and Resiliency Enhancements

Executive Summary
The Virtual Storage Platform (VSP) E1090 storage system is Hitachi’s newest midsized enterprise platform, designed to utilize NVMe to deliver industry-leading performance and availability. The VSP E1090 features a single, flash-optimized Storage Virtualization Operating System (SVOS) image running on 64 processor cores, sharing a global cache of 1 TiB. Based primarily on SVOS optimizations, the VSP E1090 ...
Nasuni integrates with HCP via the S3 API; here are quick steps to get HCP set up to integrate with Nasuni. For instructions on setting up Nasuni for HCP, please refer to the document How do I configure Nasuni for HCP?

System Administrator Configuration
1. Log into the System Management Console (SMC).
2. On the SMC => Tenants page, create a new tenant for Nasuni. Give the tenant enough Hard Quota and Namespace Quota for all the Nasuni filers you expect. Nasuni will create one namespace per volume.
3. On the SMC => Configuration => Protocol Optimization page, set the default to "Default new namespaces to optimize for cloud protocols only." Click Update ...
Introduction
Logstash Configuration
- Input Section
- Filter Section
- Output Section
Elasticsearch Indexes and Managing Indexes w/ Kibana
Visualizing Elasticsearch Data w/ Kibana
Monitoring HCP with ELK - Step by Step
- Step 1: Configure HCP for Monitoring
- Step 2: Logstash Configuration
- Step 3: Confirm Index Creation
- Step 4: Create Your Index Pattern
- Step 5: Import The Visualizations and the Dashboard
- Step 6: View The Dashboard
- Step 7: View and Edit Visualizations
- Step 8: Get to Know Kibana
Tips and Tricks
- Elasticsearch
- Kibana
- Logstash
Troubleshooting ...
Introduction
Installing ELK
- Step 1: Disable Firewall or Open Ports
- Step 2: Install Java
- Step 3: Install Elasticsearch
- Step 4: Install Kibana
- Step 5: Install Logstash
Conclusion

Introduction
This guide is the first in a series explaining how to use open source ELK to visualize the performance of a system. This post includes instructions to install the ELK software. The second guide in the series, Performance Monitoring w/ ELK - Part II: Monitoring HCP Access Logs, gives instructions to configure HCP and your newly installed ELK software to visually monitor HCP. Following the instructions in these 2 posts, ...
The AWS Java SDK does not natively support Active Directory authentication, but it is flexible enough that, with a little bit of coding, you can use your AD credentials with HCP over the HS3 gateway. Attached is a working code example that uses Active Directory credentials to interface with HCP using the AWS Java SDK. This is not intended to be a general S3 programming example (for that, see HCP S3 Code Sample), but is strictly intended to demonstrate how to use AD with HCP and the AWS Java SDK. It is intended for an audience that is already familiar with AWS Java SDK programming. In order for this to work, you will need to be on HCP version 8.0 or higher. ...
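If you just want a feel for the moving parts before opening the attachment, the sketch below shows the general shape of an AWS SDK for Java (v1) client pointed at an HCP tenant. It is not the attached sample: the credential mapping shown (access key = Base64 of the user name, secret key = hex MD5 of the password) follows HCP’s documented HS3 scheme, and whether an AD user name needs a domain suffix is an assumption here, so defer to the attachment and the HCP documentation. The endpoint, user name, and password are placeholders.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.Bucket;

public class HcpAdS3Sketch {

    // HCP derives S3-style credentials from the HCP account:
    // access key = Base64(user name), secret key = hex(MD5(password)).
    // Whether an AD user needs "user@domain" here depends on your HCP/AD setup
    // (assumption) -- check the attached sample and the HCP documentation.
    static String toAccessKey(String username) {
        return Base64.getEncoder().encodeToString(username.getBytes(StandardCharsets.UTF_8));
    }

    static String toSecretKey(String password) throws Exception {
        byte[] md5 = MessageDigest.getInstance("MD5")
                .digest(password.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : md5) {
            hex.append(String.format("%02x", b & 0xff));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Placeholder values -- replace with your AD account and tenant endpoint.
        String adUser = "jdoe@example.local";
        String adPassword = "changeit";
        String endpoint = "https://mytenant.hcp.example.com";

        BasicAWSCredentials creds =
                new BasicAWSCredentials(toAccessKey(adUser), toSecretKey(adPassword));

        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(creds))
                // A region string is required by the builder; "us-east-1" is commonly used with HCP.
                .withEndpointConfiguration(
                        new AwsClientBuilder.EndpointConfiguration(endpoint, "us-east-1"))
                // Path-style addressing keeps the example simple; HCP also supports
                // virtual-host-style bucket (namespace) addressing.
                .withPathStyleAccessEnabled(true)
                .build();

        // List the namespaces (buckets) visible to this user as a quick connectivity check.
        for (Bucket bucket : s3.listBuckets()) {
            System.out.println(bucket.getName());
        }
    }
}
```

Once the credentials object is built, the rest of the SDK usage is the same as in the general HCP S3 Code Sample; only the credential derivation changes.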
Hi, this is just a quick post to share a particularly helpful method for troubleshooting issues between a Java client application and the HCP S3 Gateway. Most Java-based software will allow you to inject Java system properties at launch time, either by editing a configuration file or a launch script. This post does not cover how to achieve that step; to answer that question, use the product documentation, Google, or ask the vendor’s support team. If you will be adding Java system properties by configuration, you want to add the following name-value pair (choose the correct value for your system type): Name: log4j.configuration, Value: file:///home/user/log4j.properties ...
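As a rough illustration of what that property ends up doing, the sketch below (mine, not part of the original post) sets the same log4j.configuration property programmatically and writes out a minimal log4j 1.x properties file that enables DEBUG logging for the AWS SDK and the Apache HTTP wire category, which is the usual way to see the raw requests and responses exchanged with the S3 gateway. The file location and logger levels are illustrative; in most real deployments you would simply pass -Dlog4j.configuration=... in the launch script as described above.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class WireLoggingSetup {

    // Minimal log4j 1.x configuration: console appender, WARN by default, and
    // DEBUG for the AWS SDK and HTTP wire categories so request/response traffic
    // to the S3 gateway becomes visible. Dial the levels back once you are done.
    private static final String LOG4J_PROPERTIES =
            "log4j.rootLogger=WARN, console\n"
          + "log4j.appender.console=org.apache.log4j.ConsoleAppender\n"
          + "log4j.appender.console.layout=org.apache.log4j.PatternLayout\n"
          + "log4j.appender.console.layout.ConversionPattern=%d [%t] %-5p %c - %m%n\n"
          + "log4j.logger.com.amazonaws=DEBUG\n"
          + "log4j.logger.org.apache.http.wire=DEBUG\n";

    public static void main(String[] args) throws IOException {
        // Write the configuration somewhere the client application can read it
        // (the post uses /home/user/log4j.properties; a temp file is used here).
        Path props = Files.createTempFile("log4j-debug", ".properties");
        Files.writeString(props, LOG4J_PROPERTIES, StandardCharsets.UTF_8);

        // Equivalent to adding -Dlog4j.configuration=file:///... to the launch
        // script. The property must be set before log4j initializes, which is
        // why the launch-time approach described above is usually the practical one.
        System.setProperty("log4j.configuration", props.toUri().toString());

        System.out.println("log4j.configuration=" + System.getProperty("log4j.configuration"));
    }
}
```

With wire logging enabled, a failed request typically shows the exact HTTP status line and error body returned by the HCP gateway, which narrows the problem down quickly.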
HCP chargeback reports contain valuable information that is useful for understanding HCP utilization and workloads. The problem is that the data can be overwhelming. Trying to understand this data in its tabular form is not humanly possible. What we need to understand this data is a visual representation, but building charts and graphs is time-consuming, isn’t it? Actually, no: you can visualize chargeback report data in under 5 minutes using the PivotChart features in Excel. Read on to find out how. In the HCP System Management Console, go to the Monitoring => Chargeback page. Select the range of dates you would like to report on and choose Hour or Day reporting ...
Headquartered in Austria, PÖTTINGER Landtechnik GmbH is a leading international manufacturer of grassland and arable farming machines as well as digital agricultural technology. PÖTTINGER is growing around the world, and along with a high number of digital assets came increasing demand on its storage. The company wanted to future-proof its data services, as well as bolster data security and resiliency. It was also essential that the solution was certified for SAP HANA. Modernizing its infrastructure with the Hitachi VSP E series, global-active device, and Hitachi Ops Center Administrator and Analyzer, PÖTTINGER was able to increase storage efficiency, simplify ...
Introduction
As cloud technologies continue to expand, it’s important that your business leverages them to ensure high availability for your products. Although having a physical storage system serve as a quorum at a third site increases redundancy, using global-active device cloud quorum gives you access to more sites, increasing the availability of your solution. Additionally, you no longer need to pay for maintenance and deployment of the third storage system, because all computing resources are hosted in the cloud and you are only charged on demand for hardware usage.

Global-Active Device Cloud Quorum
Global-active device cloud quorum is ...
Introduction
Are you still using a physical storage system located at a third physical site for your global-active device quorum? What if you could reduce the cost of global-active device and increase the availability of the quorum? As cloud technologies continue to expand, it’s important that your business leverages them to ensure high availability for your products. Although having a physical storage system serve as a quorum at a third site increases redundancy, using global-active device cloud quorum gives you access to more sites, increasing the availability of your solution. Additionally, you no longer need to pay for maintenance and deployment ...
As organizations move from evaluating containers and modernizing stateless applications to modernizing production-critical applications, new challenges arise, especially for stateful applications. Production stateful applications bring storage challenges including backup, availability, DR, monitoring, and mobility to take advantage of the newfound portability of containers. These challenges must also be addressed in the tools used by the Dev and Ops teams to maximize efficiency. While Hitachi offers compute platforms such as UCP to host stateless and stateful containers, Hitachi provides several key storage integrations to address production ...
Enterprise Kubernetes distributions like OpenShift are deployed as part of a customer’s application modernization journey, providing benefits in agility and speed of innovation. Because the unit of compute has become more efficient with containers, portability increases. With stateless containers, there are fewer dependencies affecting portability. With stateful containers, however, there is added complexity due to dependencies on persistent volumes: not just deploying persistent volumes, but also ensuring the data is protected, available, and movable. Hitachi Replication Plug-in for Containers (HRPC) addresses the challenges of a production-focused stateful ...
Deploying and running Kubernetes in production also necessitates monitoring and observability of the container platform and the services it hosts. There are a number of tools available to achieve this. Among the most common are Grafana, which provides a central dashboard, and Prometheus, which provides a time-series database for telemetry data. Hitachi Storage Plug-in for Prometheus enables the Kubernetes administrator to monitor the metrics of Kubernetes resources and Hitachi storage system resources within a single tool. Hitachi Storage Plug-in for Prometheus uses Prometheus to collect metrics and Grafana to visualize those metrics for easy evaluation ...
Introduction
Data in Place allows the current generation of VSP 5000 controllers to be upgraded to the next generation of controllers without having to do a data migration. In a traditional data migration scenario, the new-generation controllers have to be installed, new drive boxes also have to be installed, then the new controllers and drives have to be configured, and finally the data migration has to be done. These activities are identified as activities 1, 2, 3, and 4 in the data migration scenario (Figure 1, left side). In a Data in Place scenario (Figure 1, right side), the process is less expensive and much simpler because new drive boxes are not needed, no ...
As one of the first enterprise storage vendors to integrate with Cisco Intersight, Hitachi has enabled a multitude of storage management capabilities that can now be performed via Cisco Intersight, with the goal of saving administrators time and frustration. Within the Cisco Intersight management platform, admins can use the concepts of tasks and workflows to easily manage their hybrid IT environments. Tasks are essentially a library of functions that leverage API invoke calls and can be custom-built or provided by Cisco out of the box. These tasks can be combined to create workflows that enable quick and easy automation of infrastructure without being ...