
Hitachi Solutions for VMware


Introduction:

It’s time to modernize your IT infrastructure, and your team is collecting the information needed to select the right foundation for your strategic business needs. The ideal solution optimizes existing applications and makes efficient use of CapEx and OpEx. Not all of your IT systems will transfer to a cloud platform today, but leaving systems stuck in a traditional IT infrastructure will mean more headaches to come. Converged Infrastructure (CI) and Hyperconverged Infrastructure (HC) solutions have what you need to manage day-to-day IT operations while letting you tackle tomorrow's challenges efficiently. The decision at hand requires a balanced approach to support the many different workloads and applications living on your data center floor today.

With these needs in mind, architects at Hitachi Vantara have built a set of UCP solutions targeted at hyperconverged and converged deployments. These infrastructure solutions use the same core hardware components and leverage the same management and automation tools. With them, you can match workloads to the underlying hardware they require, keeping IT systems highly available while still meeting the performance demands of the business.

Converged Versus Hyperconverged

The popularity of both of these technologies rests on the fact that both make it easier for administrators to interact with the underlying hardware. The essential difference between CI and HC is how the storage and network components are deployed and utilized. Both use virtualization and the same compute CPUs and memory, and both use disks in hybrid (flash and non-flash) or all-flash form. Hyperconverged (HC) infrastructures, however, use local direct-attached storage (vSAN) along with software-defined storage and networking. You can manage all resources from a single administration pane, including the hypervisor (virtual) and hardware (physical) elements. Hyperconverged infrastructure solutions operate in a flat architecture, where the compute nodes handle the bulk of the work, including all storage capabilities.

Converged Infrastructure (CI) combines compute, storage, and network as a single resource. The elements are pre-defined by the hardware vendor and can also include automation and orchestration management software. The storage subsystem connects to a SAN and supplies storage to the compute infrastructure. When using Hitachi enterprise storage, memory DIMMs within the storage platform cache all I/O, and the storage array is capable of 32 Gb FC speeds per port.

The choice between these two technologies comes down to a few questions: are your compute resources budgeted for specific clustered workloads, or do you need infrastructure with dedicated components for the network and storage layers? Can the business benefit from a self-service model, or would today's mundane tasks benefit from the orchestration of a CI solution?

CI and HC solutions come with a number of trade-offs and benefits, and everyone should evaluate these infrastructure solutions on an organizational and divisional basis when tuning HC servers for a given workload. The workloads running within an HC environment should share the same characteristics so they can share HC resources properly; otherwise, competing workloads can be migrated to another HC cluster. HC allows end users to properly test and tune development tasks without significant interaction with HC infrastructure administrators, and lets IT departments and stakeholders deploy on demand with a tailored HC platform while keeping a common administration framework across all HC solutions implemented.

Conversely, CI solutions embrace many different workloads thanks to the features of an enterprise storage system and other hardware components. Within a virtualized environment, CI accelerates hardware deployment of new systems and increases performance and resource utilization. With a CI system, you can rest assured that the system is ready to function with no interoperability pitfalls. CI systems are a set of resources that can be provisioned across multiple divisions while using a qualified hardware platform common to IT resources.

Hitachi has a proven track record of deep expertise in striking the right balance within each of these platforms, underscoring the need for a partner with expertise in finding the right fit for your business rather than one specialized in a particular infrastructure. As part of your digital transformation, do you have a partner skilled in positioning the best overall infrastructure solution to meet your business needs?

Hitachi UCP HC

The Hitachi Unified Compute Platform (UCP) for HC is a set of compute nodes approved by VMware to run as a vSAN ReadyNode configuration. Hitachi uses the DS120, DS220, and DS225 models as the compute resource for the Hitachi UCP HC platform. Several configurations have been certified, ranging from the HY4 – HY8 to the AF4 – AF8 series. With these variations, Hitachi has made available a wide range of offerings to match the specific workload the platform needs to handle.

A vSphere cluster composed of vSAN ReadyNodes is closely tied to ESXi versioning and other software-related components from VMware. All of the Hitachi UCP HC configurations allow for SSD cache and your choice of a hybrid solution using SATA or an all-flash solution within the capacity tier. Within the UCP HC solution, NVIDIA GPUs are also available, allowing for improved performance of high-demand workloads.

In September 2018, the Hitachi UCP HC solution added an important emerging technology that is sure to be part of the future of all modern data centers. NVMe is a fast data-access protocol that improves the performance of flash and other non-volatile storage devices. It debuts within the Hitachi UCP HC platform and will be rolled out across the entire Hitachi storage product line starting in 2019. For more information about the UCP HC All-NVMe solution, please see the following links.

  • Announcement

            https://www.hitachivantara.com/en-us/news-resources/press-releases/2018/gl180926.html

UCP HC product management blog – Hitachi + VMware vSAN + Intel Optane NVMe = Turbocharged Hyperconverged Infrastructure
https://community.hitachivantara.com/people/DISINGH/blog/2018/09/26/hitachi-vmware-vsan-intel-optane-nvme-turbocharged-hyperconverged-infrastructure

The UCP HC solution grows as needed simply by adding nodes to the configuration, where competitors upgrade in blocks of nodes. With software-defined storage and networking, there is no need for extra backend storage or resources outside of the UCP HC compute cluster. Hitachi UCP for HC gives teams the simplified infrastructure they need to align with their business needs and sustain important SLAs. In a vSphere vSAN-based cluster, all of the VMware tools required for storage and network are included to maintain, expand, and support critical business applications. For more information and an overview of the Hitachi UCP HC platform, please see the following link:

https://www.hitachivantara.com/en-us/products/converged-systems/unified-compute-platform-hc-series.html

Hitachi UCP CI

The Hitachi UCP CI platform is a set of compute nodes, storage systems, and network switches combined to present a pre-tested, pre-configured converged infrastructure with VMware vSphere. The solution is delivered in a single-rack or multi-rack configuration. Hitachi UCP CI uses the DS120, DS220, DS225, and DS240 for compute and the VSP Gx00 or VSP Fx00 for storage. UCP CI offers a wide range of element configurations for flexibility and performance regardless of workload requirements. These components can be customized and budgeted to consolidate your current mixed workloads onto a single infrastructure platform. You can grow the platform when needed and optimize the configuration to use resources efficiently, handling the demands of the business today while being ready for tomorrow.

UCP CI integrates management infrastructure, storage systems, and physical switches for network and SAN. As a made-to-order infrastructure, there is a wide range of configuration choices to handle any virtualized workload.  Each compute node has Intel Xeon Skylake processors in Silver, Gold, and Platinum CPU configurations.

The UCP CI platform allows for the use of Hitachi VSP G (flash-optimized) and Hitachi VSP F (all-flash) disk storage systems. Depending on the single-rack or multi-rack design, one of the following storage systems can be selected:

  • VSP G200
  • VSP G/F350
  • VSP G/F370
  • VSP G/F400
  • VSP G/F600
  • VSP G/F700
  • VSP G/F900
  • VSP G/F1000
  • VSP G/F1500

UCP CI supports using the same node type across the solution, or an intermix of DS120, DS220, DS225, and DS240 nodes along with the many different Hitachi storage product offerings. The UCP CI infrastructure is an agile environment where Hitachi customers benefit from placing many different IT solutions within the same UCP CI. With our latest release of UCP CI, Hitachi engineering has developed a seamless integration of HC nodes into a standard UCP CI platform. The incorporation of these solutions means you can now use UCP CI-based nodes for your mixed-workload environments and run HC vSAN-based clusters side by side, all in one single Hitachi platform. For more information and an overview of the Hitachi UCP CI platform, please see the following link:

https://www.hitachivantara.com/en-us/products/converged-systems/unified-compute-platform-ci-series.html

Hitachi UCP Advisor

UCP Advisor management software from Hitachi supports both the UCP CI and UCP HC solutions. Hitachi UCP Advisor provides detailed information about the infrastructure components and lets you manage operations for connected devices. The UCP Advisor software is deeply integrated with VMware, leveraging vCenter Server and the VCSA. It offers exceptional value for both platforms and the administrators who maintain and deploy them.

UCP Advisor simplifies infrastructure operations. Seamless integration allows automated provisioning of UCP systems for both converged and hyperconverged infrastructure. It provides unified management, central oversight, and smart life-cycle management for firmware upgrades, element visibility, and troubleshooting.

Benefits of UCP CI and UCP HC

With the combination of these two solutions and their flexibility, you will see the Hitachi converged strategy as a foundation for many of the practical challenges facing our customers today. IT departments have to manage resources under ever-growing scrutiny, and new technologies continue to come to market at a faster pace. Today more than ever you need a strategic infrastructure solution that can cover all of your stakeholders' needs. While each department within your organization may have different needs, does it make sense for each to purchase vendor-defined hyperconverged and converged platforms that your company will need more specialized skills to support?

Infrastructure solutions like UCP CI and UCP HC are transformative architectures when compared to traditional infrastructures. However, this transformation does not need to be disruptive to every application in your domain. Start early to identify which IT systems have the best chance to succeed in a self-service hyperconverged model, or within a converged infrastructure where workloads can be accurately evaluated before graduating into a hyperconverged tier. You now have a platform from Hitachi that supports both technologies without wasteful use of assets or IT spend.

Now all parts of your IT systems can see some advancement with the available architectures, and the blending of UCP HC with UCP CI allows for a phased approach to your ever-changing landscape. As your partner in your transformation, Hitachi is the best choice to transform your IT platform and will work with you through all phases of your transformation efforts.

 

Conclusion

Hitachi Vantara has played an essential role in the IT systems of the past and will continue to do so in the future. With over 100 years of experience in the “Hitachi Way,” aren’t you interested in partnering with a company whose engineering spirit is alive and thriving in many of today's Fortune 500 companies? As early adopters of both technologies, the Hitachi roadmap is all about building value for continued customer satisfaction. Partner with Hitachi Vantara and get your infrastructure solution ready for tomorrow's challenges.

We recently rolled out some updates across our VMware ecosystem integration portfolio, and I thought I would use the opportunity to refresh our Hitachi customers and prospects on the integration possibilities that we provide. In addition to updating six of the existing integrations, we also announced three new integrations for VMware, two of them supported by Hitachi Data Instance Director (HDID), our backup and copy management software.

 

  • Hitachi Infrastructure Management Pack for VMware vRealize Operations v2.0 - New
  • Hitachi Storage Connector for VMware vRealize Orchestrator v1.5
  • Hitachi Storage Content Pack for VMware vRealize Log Insight v1.3
  • Hitachi Storage Provider for VMware vCenter (VASA), v3.5.3
  • Hitachi Infrastructure Adapter for Microsoft® Windows PowerShell v1.1
  • Hitachi Storage Plug-in for VMware vCenter v3.9
  • Hitachi Data Instance Director (HDID) connector for VMware vRealize Orchestrator (vRO) - New
  • Hitachi Data Instance Director adapter for VMware Site Recovery Manager (SRM) - New

 

To recap, the intent and focus of these integrations is to continue giving customers that single-pane-of-glass experience with the vCenter/vRealize/vCloud management stack while ensuring visibility, service access, and automation control over Hitachi data center infrastructure and software.

ecosystem integrations.png

The new Infrastructure Management Pack for VMware vRealize Operations extends our original storage management pack and introduces initial support for our compute/converged offering, Hitachi Unified Compute Platform (UCP), while continuing to support Hitachi Virtual Storage Platform (VSP) storage in generic compute-VMware environments. In this version, we included visibility into Hitachi compute resources, including alerts for blade-based compute systems. We leverage vRealize Log Insight to gather SNMP alerts and syslog, which are then selectively passed to vROPS. We also extended the storage monitoring aspects by delivering smarter, more intuitive dashboards for storage capacity management. It also addresses processor utilization in VMware environments, giving proactive awareness and guidance on addressing any allocation or utilization imbalances across the many storage processors (MPUs) in VSP enterprise storage systems.

 

From a vRealize Orchestrator perspective, which is becoming a key component of our customers' automation and self-service infrastructure delivery journey, we added 20 new vRO workflows to our storage connector based on customer feedback. These include enhanced end-to-end vSphere cluster datastore provisioning, reclaiming unused LUNs, and a storage UNMAP reclamation workflow to automate UNMAP/zero-page reclaim for VMFS5 datastores. (Customers using VMFS6 or VVol datastores get the advantage of vSphere's automated UNMAP.)
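The VMFS5-versus-VMFS6 distinction can be sketched as a simple filter. This is a hypothetical illustration only; the datastore records and field names are made up and do not reflect the vRO connector's actual data model:

```python
# Hypothetical sketch: deciding which datastores need an explicit UNMAP
# workflow run. VMFS5 datastores need a manual reclamation pass, while
# VMFS6 and VVol datastores reclaim space automatically in vSphere.

def needs_manual_unmap(datastore):
    """Return True for VMFS version 5 datastores."""
    return datastore.get("type") == "VMFS" and datastore.get("version") == 5

datastores = [
    {"name": "ds-prod-01", "type": "VMFS", "version": 5},
    {"name": "ds-prod-02", "type": "VMFS", "version": 6},
    {"name": "ds-vvol-01", "type": "VVol", "version": None},
]

to_reclaim = [d["name"] for d in datastores if needs_manual_unmap(d)]
print(to_reclaim)  # ['ds-prod-01']
```

In practice the workflow itself performs the reclaim; this sketch only shows the selection rule the paragraph describes.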

 

We also introduced additional vRO connector workflows focused on backup and copy data management services packaged as part of our HDID software.

vro-hdid2.png

These workflows are natively exposed in vCenter via right-click on VMs, and the real power of these workflows becomes apparent when they are exposed in vRealize Automation as custom actions on provisioned VMs. The workflows focus on enabling customers to take and use space-efficient storage backups for pinpoint recovery services or test/dev/analytics use cases. After requesting and getting a particular app service provisioned from vRealize Automation or vCenter, application owners and admins can take advantage of custom actions to take and access point-in-time backups and space-efficient clones of their VMs. They can selectively create new test/dev copies of that VM data from any time in the past for analytics purposes. Even better is the ability for a SQL admin, for example, to access and mount previous versions of their database into the same production VM instance (via the mount VMDK workflow) to do data comparisons or surgical data extraction when required.

 

mount VMDK's.png

 

The updated Hitachi Storage (VASA) Provider for VMware delivers customer-requested and security improvements, and has been extensively tested with vRealize Automation and vRealize Orchestrator to provide access to storage infrastructure policy selection (aka SPBM) for catalog services. The example below shows how our Hitachi Enterprise Cloud exposes access to infrastructure and backup requirements as part of a LAMP stack provisioning request. In a nutshell, we have effectively integrated the latest releases of vRA with the vRO SPBM plugin accessing the Hitachi VASA Provider for both VVol and VMFS customers. Pretty cool. I will get one of my colleagues to publish that effort back to GitHub shortly so others in the community can take advantage of it.

 

vra and SPBM.png

 

On the VMware Site Recovery Manager front, we are introducing an SRM adapter (SRA v3.0) based on HDID while continuing to maintain our v2.x SRA for environments without HDID. The HDID software adds tremendous usability and operational time improvements in large-scale multi-datacenter environments, or environments with generalist admins who prefer a tag, drag-and-drop approach to backup/DR configurations. Enabling datastore replication is as simple as placing a tag on the new datastore, and HDID automatically creates the pair relationship between datacenters. The SRA will support all mainstream SRM versions (6.1, 6.5, 8.1), and you can review the broader HDID benefits as extolled by Rich in his recent HDID blogs.

 

I will have to devote a separate blog to expanding on these and the other integrations we enhanced, including the vCenter plug-in, PowerShell adapter, and Log Insight content pack that I didn't get to in this blog.

 

We are constantly delivering on our strategy to provide a premier experience for customers in the VMware ecosystem, and we encourage our customer and partner ecosystem to take advantage of these integrations as you step along the journey to delivering self-service and simplified operations as part of a private/hybrid cloud environment.

 

Download Links

Hitachi Vantara Support Portal; VMware adapter section

or VMware Marketplace subspace for Hitachi Vantara solutions

 

Informational links

Hitachi Vantara website for VMware Ecosystem Integrations
Hitachi Data Instance Director


 

Check out the latest blog that CTO Paul Lewis shared as we gear up for Hitachi Solutions for VMware's #VMworld 2018.

 

I Call Dibs on MODERNIZE IT

Hitachi has delivered Hitachi Infrastructure Analytics Advisor (HIAA), software that delivers visualization, intelligence, and automation to optimize infrastructure health while quickly identifying and troubleshooting performance issues.

In my first post, I introduced a use case to determine and solve a performance bottleneck in a Hitachi Unified Compute Platform CI (UCP CI) environment.

(Reference: https://community.hds.com/community/products-and-solutions/vmware/blog/2017/10/11/end-to-end-infrastructure-performance-analysis-of-hitachi-unified-compute-platform-ci-simplified-with-hitachi-infrastructure-analytics-advisor-hiaa)

 

Hitachi Infrastructure Analytics Advisor – Hitachi Data Center Analytics SaaS edition (HIAA – HDCA SaaS edition)

Now the HIAA – HDCA SaaS edition is available to you. This is a new consumption model for HIAA: Hitachi provides a HIAA instance running in the public cloud, and the HIAA – HDCA SaaS edition provides the same user experience as the on-premises version of HIAA.

Picture1.jpg

1. Ready to Use

The HIAA – HDCA SaaS edition removes the installation process. Hitachi provides the configured HIAA server instance in the public cloud, so customers can start managing their systems from the HIAA GUI as soon as they obtain the access information. Customers do not have to prepare an HIAA server.

    • No physical or VM-based server required
    • No installation and configuration

 

2. Maintenance Free

Availability of the HIAA – HDCA SaaS edition is guaranteed: Hitachi monitors the HIAA instance and keeps it running. Thanks to this service, customers can focus on managing their own systems.

Hitachi manages:

    • Instance Management
    • Version Management

 

3.  Subscription model (SaaS)

In the HIAA – HDCA SaaS edition, the subscription price is determined by the amount of performance information the customer wants to upload. Thanks to this pricing model, customers can start using the solution at a lower cost than with a full upfront payment model.

 

Combination of HIAA – HDCA SaaS edition & UCP CI

The HIAA – HDCA SaaS edition is a better fit for UCP customers who prefer low-touch infrastructure management, because customers do not need to look after the HIAA – HDCA SaaS edition instance; Hitachi provides the management service for it.

 

The HIAA – HDCA SaaS edition provides a server-less model to customers, which means customers can achieve a higher density of user applications and VMs on their own infrastructure.

 

System Overview

This is the system overview. The basic components, a probe server and the HIAA server, are the same as in the on-premises version of HIAA, but the HIAA server moves to the public cloud. Generally, most customers protect their systems with a firewall.

 

Prerequisites

1. Network configuration

a.    The probe server communicates with the HIAA server via HTTPS; no other protocols are required. Please work with your network administrators to allow outbound connections on HTTPS port 443, for example by changing firewall settings or going through an HTTPS proxy.

 

b.    On the public cloud side, the Web Application Firewall (WAF) must allow connections from the user network. Hitachi has to allow-list the public IP address of the customer network, so please contact your Hitachi representative.
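Before onboarding, you can sanity-check the outbound HTTPS path from the probe server with a quick connectivity test. This is a minimal sketch; the hostname is a placeholder, so substitute the HDCA endpoint supplied by Hitachi (and note it does not account for an HTTPS proxy):

```python
import socket

# Minimal outbound-HTTPS reachability check for the probe server host.
# The hostname below is illustrative, not a real HDCA endpoint.

def can_reach(host, port=443, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # DNS failure, refusal, or timeout
        return False

# Example (requires network access):
# can_reach("hdca.example.com")
```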

 

Picture2.jpg

 

2.    Access information from Hitachi

Hitachi provides the HIAA – HDCA SaaS edition access information to customers.

    • HIAA server information for uploading performance data
    • HIAA GUI access information for logging in to the console

 

Example Configuration

I am going to introduce an example configuration in which the HIAA – HDCA SaaS edition manages UCP CI in an on-premises data center.

 

1.    Configuration in on-premises data center

The installation and configuration steps on the on-premises side are the same. (Reference: https://community.hds.com/community/products-and-solutions/vmware/blog/2017/12/29/part2-end-to-end-infrastructure-performance-analysis-of-hitachi-unified-compute-platform-ci-simplified-with-hitachi-infrastructure-analytics-advisor-hiaa)

 

The only difference is switching the data upload server to the HDCA server, which is one of the components of the HIAA – HDCA SaaS edition. The on-premises probe server has to connect to HDCA: once you log in to the probe server, input the HDCA server information.

 

Picture3.jpg

After a few hours, configuration and performance data will appear on the GUI screen.

 

2.    HIAA – HDCA SaaS edition configuration

As I mentioned before, customers can start using the HIAA – HDCA SaaS edition immediately. Here, I recommend making a “Consumer Group”; this is a nice feature for identifying pods later.

 

Picture4.png

Use Case

1.    Generally, administrators use vCenter to observe system health. If an unexpected workload arises, the administrator will notice that a noisy-neighbor issue has happened. In this situation, the administrator will check system performance in the vCenter performance monitor, for example CPU, datastore, and so on.

Picture5.png

 

2.    On the HIAA GUI dashboard, there is an alert on the storage system.

Picture6.png

 

3.    Drilling down into the storage system, there are alerts on Cache and Parity Groups. The administrator has to find the root cause of the high cache write pending (CWP) rate.

Picture7.png

 

4.    At this point, HIAA provides a bottleneck analytics feature. The base point is the origin from which administrators start to investigate. Here we start from Cache because we need to check why CWP is so high.

Picture8.png

 

5.    HIAA shows the suspected sources of the higher workload, that is, which volumes carry a particularly high workload.

Picture9.png

 

6.    As another approach, HIAA can show how busy the Parity Groups are. In this use case, Parity Group utilization exceeds 80%, which shows that Parity Groups 04-03 and 04-04 are busy at this time. To keep up with the user's workload, the administrator decides to install more HDDs to enhance drive performance.

Picture10.png
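The 80% utilization guideline from this step can be expressed as a simple filter. An illustrative sketch, where the sample utilization figures are made up rather than taken from the screenshots:

```python
# Illustrative sketch of the step-6 rule: flag parity groups whose peak
# utilization exceeds 80% as candidates for adding drives.
BUSY_THRESHOLD = 80.0  # percent, per the guideline in the text

def busy_parity_groups(utilization):
    """utilization: {parity_group_id: peak utilization in percent}"""
    return sorted(pg for pg, pct in utilization.items() if pct > BUSY_THRESHOLD)

samples = {"04-01": 35.2, "04-02": 61.8, "04-03": 92.5, "04-04": 88.1}
print(busy_parity_groups(samples))  # ['04-03', '04-04']
```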

 

7.    Which HDP pool should receive more HDDs? HIAA can show where the Parity Group should be installed.

Picture11.png

This shows that the overloaded Parity Group (04-03) belongs to Pool14.

 

8.    After installing the HDDs, Auto Rebalance starts working to distribute the workload evenly. Auto Rebalance continues working for a while; after that, system performance is boosted and users experience enhanced performance.

Picture12.png

Conclusions

In this article, I introduced the HIAA – HDCA SaaS edition, which runs in the public cloud. Hitachi prepares it for customers, so customers can start using it instantly. This brings lower OPEX and CAPEX for performance management, while the HIAA – HDCA SaaS edition provides the same user experience as the on-premises model.

hitachi_vmware.png

We recently rolled out some software upgrades for our VMware ecosystem integrations and certifications, which are generally available now or will be within the next 45 days. The specific product updates this month are the Storage (VASA) Provider, the vRealize Operations (vROPS) Management Pack, and the Storage Plugin for vCenter. This is in addition to updates to our flagship UCP Advisor software, which provides provisioning and lifecycle management for infrastructure and resources from within vCenter. As always, the intent is to continue to empower IT roles who leverage the VMware vCenter/vRealize management stack for operations and automation with natively integrated access to services, capabilities, and data from Hitachi infrastructure. I've outlined some of the high-level advancements in the respective integrations below.

 

Note, all Hitachi integrations are now posted on VMware marketplace for customers/partners to download.

 

Storage (VASA) Provider v3.5.0/v3.5.1

As a refresher, the Hitachi VASA Provider software integration enables storage-capability-aware, policy-based management for VMFS/VVols while also enabling a VVol deployment.

 

One of the major new feature additions in the 3.5 release is automating QoS or similar actions when storage policies are changed on a VM or datastore, in order to bring it into compliance with the new policy. We focused our initial policy compliance efforts on our Hitachi Active Flash and data tiering (HDT) pooling technology, which are used quite frequently in VMware environments. An HDT pool consists of multiple tiers from both internal and external storage (an example might be FMD and SSD within a Hitachi VSP, with a third tier being a virtualized external third-party flash array from Pure, EMC, etc.). This pooling technology automatically moves or pins data blocks between tiers based on data access rules, but there are cases where finer-grained control for application owners and VM admins is beneficial, whether related to expected changes in application usage behavior or to cost controls.

 

With this release of the VASA Provider, when an administrator applies a new policy to a VM/VMDK or datastore, the Hitachi VASA Provider will initiate storage changes to bring that object into compliance. One example of this is tiered data placement within our pool. If certain VMs/VMDKs or datastores are set with the "Tier 3 Latency and Tier 3 IOPS" policy capability within vCenter, the system will automatically move those blocks to that lower tier, freeing up higher tiers for net-new applications or that high-performing database that keeps growing in size. Similarly, an application that now requires "Tier 1 Latency and Tier 1 IOPS" simply has that policy applied by the VM admin (or app owner), and the Hitachi VASA software will invoke actions to pin and promote all of its blocks to the highest-performing tier. We have made this capability available for both VMFS (taking advantage of custom tiering policies) and VVol datastores. VMware VVols obviously allow finer-grained control at the VM/VMDK level given their object-based implementation. This was the additional motivation to make a level of Hitachi infrastructure resource control accessible to application owners (not just VM admins) through API/vRO/vRA catalog services.
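The tier-policy behavior described here can be sketched as a small decision function. This is purely illustrative and not the VASA Provider's actual API: the capability strings mirror the "Tier N Latency and Tier N IOPS" examples in the text, but the parsing and action names are assumptions:

```python
import re

# Hypothetical sketch: map an SPBM-style policy capability string to a
# target pool tier and a compliance action. Not a real Hitachi interface.

def target_tier(policy):
    """Extract the single tier number named in a policy capability string."""
    tiers = {int(n) for n in re.findall(r"Tier\s*(\d+)", policy)}
    if len(tiers) != 1:
        raise ValueError("policy must name a single consistent tier")
    return tiers.pop()

def compliance_action(policy):
    tier = target_tier(policy)
    # Tier 1 is the highest-performing tier: pin/promote blocks to it.
    # Lower tiers: demote blocks, freeing tier-1 capacity.
    return "promote" if tier == 1 else "demote"

print(compliance_action("Tier 1 Latency and Tier 1 IOPS"))  # promote
print(compliance_action("Tier 3 Latency and Tier 3 IOPS"))  # demote
```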

 

hdt+tiers-combo.png

This release also includes support for vSphere 6.7, environments with multiple SSO domains, and configurations without an external service processor (SVP). It is also worth noting that the latest additions to the Hitachi VSP all-flash storage platform, powered by SVOS RF, now officially support up to an 8X increase in the number of vSphere Virtual Volumes (VVols). So whether you are using Hitachi storage, UCP Converged, or a Hitachi Enterprise Cloud environment, download the free virtual appliance and take advantage of it.

 

vROPS Management Pack v1.8

In this updated release of the Hitachi Storage Management Pack, we have introduced a new storage troubleshooting dashboard to identify and remediate potential risks and issues quickly within vROPS. This dashboard walks operations teams through 12 key questions and metrics that, based on past support experience, help get to quick resolution of potential VM-storage root-cause issues. It covers key health areas such as cache write pending levels, I/O port utilization, latency, and storage processor busy metrics within the dashboard for a fully correlated view of the selected timeline.

 

troubleshooting.png

 

We have also made improvements to the capacity savings dashboard so customers can easily visualize the space savings from deduplication and compression deployed in their all-flash Hitachi storage and UCP converged systems. Administrators can now visually see answers to questions about deduplication ratio, data reduction savings, and capacity trend assuming current space-efficiency rates continue.
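The figures this dashboard visualizes reduce to simple arithmetic. A sketch with made-up numbers (the TiB values are illustrative, not measurements):

```python
# Data reduction ratio and percent saved, as commonly derived from
# logical (written) versus physical (consumed) capacity.

def data_reduction(logical_tib, physical_tib):
    """Return (reduction ratio, percent of capacity saved)."""
    ratio = logical_tib / physical_tib
    saved_pct = (1 - physical_tib / logical_tib) * 100
    return ratio, saved_pct

ratio, saved = data_reduction(logical_tib=100.0, physical_tib=40.0)
print(f"{ratio:.1f}:1 reduction, {saved:.0f}% saved")  # 2.5:1 reduction, 60% saved
```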

 

capacity+savings.png

 

This release also supports vROPS 6.7 and VMware's vRealize Suite Lifecycle Manager (vRSLCM) for automated management pack updates. This allows customers that leverage vRSLCM to automatically receive notification of partner management pack updates and have those updates downloaded directly to their system from VMware's (VSX) marketplace.

 

Storage Plugin v3.8

While UCP Advisor is morphing into the flagship management integration for all Hitachi infrastructure, we continue to evolve the storage plugin for vCenter. This release includes substantially improved datastore provisioning times on our recently announced next-generation VSP storage platforms using the latest API integration. It also fully supports provisioning against deduplication- and compression-based storage pools.

 

Other related Hitachi-VMware updates

From a VMware certification point of view:

  • Site Recovery Manager (SRM): Hitachi SRA v2.3.1 is now certified with VMware's SRM 8.1 release.
  • VMware vSphere Metro Storage Cluster (vMSC): Hitachi VSP platforms are certified to support vMSC configurations with iSCSI connectivity on vSphere 6.5, with vSphere 6.7 certification in process. Check the KB article for updates.

 

Stay tuned as we continue to evolve these and other VMware ecosystem integrations.

 

Referenced Links

  • Hitachi VSP All Flash Storage Platform: a flash-powered storage platform offering a 100% data availability solution for VMware
  • Hitachi Unified Compute Platform (UCP) CI: a factory-built and tested package of compute, storage, and networking that, when combined with the VMware vSphere virtualization platform, creates the best foundation for apps, cloud, and business.
  • Hitachi UCP HC: an all-in-one hyperconverged system that combines the strength of Hitachi UCP with VMware vSphere, vCenter, and vSAN for simplicity and IT agility.
  • Hitachi UCP Advisor: automated management and orchestration software that provides single-pane-of-glass visibility across compute, network, and storage. Tight integration with VMware vCenter means organizations don’t have to learn new systems, buy additional software, or undertake manual, time-consuming integration.
  • Hitachi Enterprise Cloud (HEC): a pre-engineered solution for VMware vRealize that provides a public cloud experience with private cloud security and enterprise-class service levels.

Software is often improved through version upgrades, and of course Hitachi provides version upgrades for its software.

So far, I have used the combined solution of HIAA and UCP CI.

UCPandHIAA.jpg

I decided to install HIAA 3.2 instead of 3.1. HIAA provides scripts for hassle-free upgrades. Let me give you an overview of upgrading from version 3.1 to 3.2 in my case.

(Please follow the procedure provided in the User Guide when you upgrade your own system.)

Picture1.jpg

Upgrade installers are provided for both Windows and Linux environments. I am going to walk through the workflow for a Linux host.

 

0. Overview of upgrading process

There are four steps in the upgrade process.

process.jpg

1. Copying the files

First, obtain the software media. Then copy the files to the host. I copied the files using scp, but you can use any method you like.

 

Type these commands on the laptop:

#scp -r /mount-point/ANALYTICS root@HDCA_host:/root/

#scp -r /mount-point/DCAPROBE root@probe_host:/root/

 

Drawing1.jpg

 

2. Stop Service and backup configuration

Don't forget to back up your environment's settings. Please refer to the user guide for how to stop the HIAA/HDCA/Probe server services.

To back up the HIAA server, the backupsystem command is available; it copies the configuration files. You also need to copy some other files on the HDCA server and Probe server. The user guides list which files should be copied.

 

3. Run scripts

Here we go. It is time to run the upgrade scripts.

3.1 HIAA/HDCA server

Before running the upgrade, you must save the certificate file. Please follow the detailed instructions in the user guide.

Then, run the upgrade scripts.

# cd /root/ANALYTICS    (the destination directory where you put the files in step 1)

# ./analytics_install.sh VUP

Wait until the messages come up on screen.

 

3.2 Probe server

Before running the upgrade, you must save the certificate file. Please follow the detailed instructions in the user guide.

Then, run the upgrade scripts.

# cd /root/DCAPROBE    (the destination directory where you put the files in step 1)

# ./dcaprobe_install.sh VUP

Wait until the messages come up on screen.

 

4. Post upgrade

After finishing the whole upgrade process, you can access the HIAA website. But before you do, remember to clear your browser cache.

 

Those are all the steps I performed. If you perform the upgrade, please read carefully and follow the steps in the user guide.

For your reference, here are some links.

Upgrade your Infrastructure Analytics Advisor environment - Hitachi Vantara Knowledge

Infrastructure Analytics Advisor 3.2.0 Documentation Library - Hitachi Vantara Knowledge

 

Thank you for your time. See you in the next article.

Hi, it’s time for Part 2. I will show you what I did in the setup procedure, along with a use case.

 

(Part1: https://community.hitachivantara.com/community/products-and-solutions/vmware/blog/2017/10/11/end-to-end-infrastructure-performance-analysis-of-hitachi-unified-compute-platform-ci-simplified-with-hitachi-infrastructure-analytics-advisor-hiaa)

 

Environment Preparation and HIAA installation

I set up the configuration in our lab. I am going to describe the workflow for HIAA with UCP CI in particular.

 

Picture1.jpg

Fig1: Logical Configuration

 

UCP CI Preparation

 

  • Create a LDEV as Command Device on VSP G600

The HIAA Probe communicates with the VSP G600 command device to retrieve storage information, including configuration and performance statistics. This requires a Fibre Channel connection between the VSP G600 and the server running the HIAA Probe server VM.

 

  • Install Brocade Network Advisor

The HIAA Probe cannot communicate with Brocade SAN switches directly. If you would like to check Brocade switch performance, you need Brocade Network Advisor (BNA) to retrieve the information.

 

HIAA Installation & Initial setup

There are two options to install HIAA. One is deploying the OVA virtual machine image; the other is running the installer on a host.

 

My choice was the first one (OVA deployment). I created three VMs as shown in Fig1: VM1 is the HDCA/HIAA server, VM2 is the Probe server, and VM3 is a Windows VM, which is the host OS for installing BNA.

 

Detailed HIAA installation instructions are shown below.

https://knowledge.hds.com/Documents/Management_Software/Infrastructure_Analytics_Advisor/3.1.0/Install_Infrastructure_Analytics_Advisor/Installing_HIAA_in_an_OVA

 

After installing HIAA, I added target probes. A probe is a module that retrieves information from target machines. Probes for Hitachi Storage, BNA, and vCenter are available.

https://knowledge.hds.com/Documents/Management_Software/Infrastructure_Analytics_Advisor/3.1.0/Add_probes_to_Analytics_probe_server

 

If you don’t see any information in HIAA but there are no errors, I recommend waiting an hour. It may take time, because retrieved information is handed over across three servers (Probe – HDCA – HIAA).

 

Grouping of components, “Consumer”

When components are registered into HIAA, HIAA recognizes them only as individual resources. “Consumer” is a great feature for grouping resources. I made a Consumer that groups the UCP CI resources, which gives administrators a clear view of the UCP CI converged system.

 

Screen Shot 2017-12-28 at 3.42.10 PM.jpg

Fig2: Inventory List (not yet grouped)

 

Screen Shot 2017-12-28 at 3.41.54 PM.jpg

Fig3: Create Consumer screen

 

Use Case

I will show you an example performance analytics use case for UCP CI with HIAA.

 

1. Finding a performance problem in vCenter

In most cases, the administrator of a virtualized environment manages the infrastructure with vCenter. Let's say the administrator finds excessively high storage latency (fc2) reported in vCenter. (Fig4)

4.jpg

Fig4. Storage view in vCenter (Latency)

 

2. Problem analysis using HIAA

From this step, we hand over to HIAA. Log in to HIAA and check the Dashboard.

5.jpg

Fig5. HIAA Dashboard

 

A "Critical" alert comes up in the Dashboard. Next, jump into the E2E view.

6.jpg

Fig6. E2E View

 

This E2E view shows the relationships from VM to LDEV. In this screen, HIAA shows that the Storage Processor and LDEV are busy, while the VM and host server are fine. These VMs use the LDEV and MP in the G600 that are marked as Critical.

Let's go back to vCenter and open the VM Monitor tab, where we can see the performance of the disks mounted on each VM. Performance of the disk on VM fc-02 has actually dropped. Meanwhile, another VM, fc-18, has started write I/O to its disk. I would like to improve performance while keeping the I/O of both fc-02 and fc-18.

7.jpg

Fig7. Disk of VM fc-02 performance (from VM performance view)

 

8.jpg

Fig8. Disk of VM fc-18 performance (from VM performance view)

 

Let’s drill down to find bottleneck.

 

3. Bottleneck investigation

First, I checked storage performance. In the screen below (Fig9), only MPU-10 is busy. This issue must be caused by uneven assignment of MP units: the workload of MPU-10 is above the critical line (red line), while the other MPUs are barely working. In this example, MPU-10 is the primary bottleneck.

9.jpg

Fig9. Sparkline view

 

4. Performance improvement

Next step: what can we do to solve the overload on MPU-10? The primary option is offloading workload to the other MPUs. To do that, I have to change the MPU assignment configuration, which distributes the workload across the other MPUs.

Then we need to identify which LDEVs should be moved. Candidate LDEVs are shown below (Fig10); when you click the busy MP, HIAA shows the LDEVs associated with that MPU.

10.jpg

Fig10. Relationship of LDEVs to the MPU
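Conceptually, the rebalancing step amounts to spreading the busy MPU's LDEVs across all MP units. Here is a minimal round-robin sketch with hypothetical LDEV and MPU names; the real change is made through the storage configuration tools, not code like this.

```python
# Hypothetical sketch: spread LDEVs currently owned by one busy MPU
# across all MP units round-robin. Names are illustrative only.
from itertools import cycle

def rebalance(busy_ldevs, all_mpus):
    """Map each LDEV from the overloaded MPU to an MPU in rotation."""
    targets = cycle(all_mpus)
    return {ldev: next(targets) for ldev in busy_ldevs}

plan = rebalance(["00:10", "00:11", "00:12", "00:13"],
                 ["MPU-10", "MPU-11", "MPU-20", "MPU-21"])
print(plan)
```

With four LDEVs and four MP units, each LDEV lands on a different unit, which is the even distribution shown in Fig11.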

 

I distributed the LDEV assignments across all MP units. Then all MPUs started working evenly. (Fig11)

11.jpg

Fig11. After resolving MPU-10 overload

 

Finally, the MP overload was resolved.

12.jpg

Fig12. All resources are fine

 

Conclusion

I introduced the value of combining HIAA and UCP CI. In the use case section, I showed one example of resolving a performance issue in a UCP CI environment.

I hope you enjoy the HIAA and UCP CI solution. Thank you for taking the time to read.

 

Please refer to the further information below:

HIAA:

https://www.hitachivantara.com/en-us/pdf/solution-profile/hitachi-solution-profile-it-analytics.pdf

 

Videos on YouTube:

"Detecting Performance Bottlenecks using E2E view in Hitachi Infrastructure Analytics Advisor"

https://youtu.be/LkDoO3MA1x4

 

"Dynamic Threshold Storage Resource Monitoring With Performance Analytics, Using HIAA"

https://youtu.be/9WlpUx8inNA

 

"Using HIAA to Analyze a Performance Bottleneck in Shared Infrastructure"

https://youtu.be/fGFj7lLiYX4

 

"Detecting Performance Bottlenecks Using Sparkline View"

https://youtu.be/VTezCGUniR8

 

UCP CI:

https://www.hitachivantara.com/en-us/pdf/datasheet/hitachi-datasheet-unified-compute-platform-ci.pdf

Hitachi Vantara has launched the new converged infrastructure Hitachi Unified Compute Platform CI (UCP CI). Today, I would like to introduce a performance analysis solution with UCP CI.

 

Hitachi Infrastructure Analytics Advisor (HIAA) delivers visualization, intelligence and automation to optimize infrastructure health while quickly identifying and troubleshooting performance issues. UCP CI is an optimized and scalable converged infrastructure platform. In this series of posts, we will cover use cases of what can be done with HIAA and UCP CI together.

 

Fig1 shows an example of an end-to-end (E2E) map, which shows the topology from a specific running VM, through the connected switch, to the storage LUN it uses.

 

Picture1.jpg

Fig1: HIAA E2E View

 

In this series of posts, we will cover:

  • Introducing the combined solution of HIAA & UCP CI
  • Installation & Configuration
  • Introducing use cases

 

Hitachi Infrastructure Analytics Advisor (HIAA)

Hitachi Infrastructure Analytics Advisor (HIAA) includes the tools to properly monitor and analyze performance statistics from the application through its entire data path to the shared storage resources. Generally, converged infrastructure like UCP CI provides easy management to customers, but it also conceals the details of the infrastructure, which can make troubleshooting difficult.

 

HIAA's features address these pain points.

 

Some of the key features include:

  • Monitoring of switches, operating systems, and hypervisors
  • E2E topology mapping
  • Performance comparison and related change tracking
  • Bottleneck identification and root cause analysis

 

More information:

https://www.hitachivantara.com/en-us/pdf/solution-profile/hitachi-solution-profile-it-analytics.pdf

 

Also, the HIAA team has posted great videos on YouTube. Check them out!

 

Detecting Performance Bottlenecks using E2E view in Hitachi Infrastructure Analytics Advisor

https://youtu.be/LkDoO3MA1x4

 

Dynamic Threshold Storage Resource Monitoring With Performance Analytics, Using HIAA

https://youtu.be/9WlpUx8inNA

 

Using HIAA to Analyze a Performance Bottleneck in Shared Infrastructure

https://youtu.be/fGFj7lLiYX4

 

Detecting Performance Bottlenecks Using Sparkline View

https://youtu.be/VTezCGUniR8

 

Analyze Configuration Changes in Your Infrastructure to Solve Performance Problems

https://youtu.be/NzMhSeLdOQ8

 

(Updated on 10/19) HIAA v3.2 is now available. HIAA v3.2 supports integration with the Hitachi Storage Management Pack for VMware vRealize Operations (vROps) v1.7. Thanks to this integration, vROps retrieves storage performance, capacity, and related health metrics from HIAA. Note that Hitachi Tuning Manager is no longer supported and the management pack is available from the VMware marketplace.

 

 

Hitachi Unified Computing Platform CI Series (UCP CI)

Hitachi Vantara launched UCP CI in September 2017. This is Hitachi's new converged infrastructure series. The UCP CI architecture consists of Intel-based rackmount servers, Hitachi storage, and switches.

 

UCP CI Components Overview:

  • Hitachi Advanced Server DS120
  • Brocade G620 SAN Switch
  • Hitachi Virtual Storage Platform (VSP) hybrid and all-flash arrays (G/F1500, G/Fx00)

 

More information:

https://www.hitachivantara.com/en-us/pdf/datasheet/hitachi-datasheet-unified-compute-platform-ci.pdf

 

Combination of HIAA & UCP CI

The combination of HIAA with UCP CI provides many benefits to customers running a UCP CI virtualized environment.

UCP Advisor is the management software sold with UCP CI that simplifies configuration and management of the UCP CI converged infrastructure.

 

HIAA provides an additional value for customers with the ability to monitor, analyze and troubleshoot system performance issues by showing an end-to-end topology and system-wide relationship of hardware and software components.

 

In addition, HIAA can show detailed performance statistic information of the entire UCP CI stack ranging from storage, SAN and hypervisor (VMware).

 

The dashboards and charts are extremely helpful for absorbing large amounts of performance related information in an organized and simplified manner.

Picture2.jpg

Configuration

This is an overview of the configuration built in our Solution Lab.

 

UCP CI

  • Hitachi Advanced Server DS120
  • Brocade G620 (via Brocade Network Advisor(BNA))
  • Hitachi VSP G600 (SVOS 83-04-23-40/01)
  • VMware vSphere 6.x, vCenter 6.x

 

Extra Software

  • Hitachi Performance Analytics 3.0 (HIAA 3.1 and HDCA 8.1)
  • Brocade Network Advisor (BNA) 14.0.1 (To observe SAN Switch performance, BNA is required.)

 

Picture4.jpg

Fig2: Configuration Overview

 

Free Trial License available

We can provide a free version of HIAA for customer trials. There is no functional limitation, but it expires 90 days after installation. If you are interested in the trial license, please contact the HIAA PM D-List or the author (Koji Watanabe).

 

Also, you can obtain a 120-day trial version of BNA from the Brocade website.

 

What's coming up next...

Today, I have introduced the value of combining HIAA and UCP CI. Together, these two products provide low-touch infrastructure and easy performance analysis.

 

I will show you "Performance troubleshooting using HIAA" in the second post of this series. Stay tuned!

(Part2: End-to-End infrastructure performance analysis of Hitachi Unified Compute Platform CI simplified with Hitachi Infrastructure Analytics Advisor (HIAA) )

Last week I was out in Las Vegas at VMworld 2017, an incredible event for both VMware and for us at Hitachi! At a high level, VMware clearly demonstrated that not only is Private Cloud accelerating, but Hybrid Cloud is now a reality, and the future rests on cross-cloud services tied to network and security virtualization.

 

Beyond the hype (after all, this is Las Vegas...) it's clear that both Private and Public Cloud are maturing quite quickly and that enterprise clients are looking to accelerate from strategy to execution. While some initial thinking around the cloud centered on cost savings, it's clear today that the real gains come from the agility associated with Private/Hybrid Cloud. Being able to "run any application, in any cloud, on any device" gives enterprises the opportunity to build and run their applications across a wide variety of infrastructure, platform, and consumption models, driving increased flexibility and more rapid innovation. Most importantly, it gives enterprises the flexibility to develop applications on a variable cost basis, with the option to bring them back in-house should business requirements change.

 

For more thoughts on the VMworld show and my personal reflections on the future please visit "The Clouds are Clearing...VMworld 2017 Reflections and Predictions"

 

I'd also encourage you to read my colleague Bob Madaio's thoughts "A (mostly) Grown-up Take on VMworld"

 

So what does it mean to Hitachi? Well, the maturation of Private and Hybrid Cloud is exciting because it enables us, at Hitachi, to enhance the depth of the relationships with our clients. Specifically, as it relates to VMware and cloud adoption we leveraged the show to demonstrate 3 key offerings:

 

  1. To Accelerate Private Cloud - Hitachi's NEW Unified Compute Platform (UCP) offerings powered by VMware Cloud Foundation and allowing customers to simply deploy their private clouds on VMware Cloud Foundation in either a Hyperconverged or Rack-Scale footprint
  2. To Accelerate Hybrid Cloud Adoption - Hitachi's Data Services vision powered by Hitachi Content Intelligence and Pentaho Analytics, offering centralized governance, analytics, and compliance across multiple clouds. If you are interested in better understanding our perspective on compliance and governance of data today, tomorrow, and into the future, I'd encourage you to read our CTO Hu Yoshida's blog "New Data Sources and Usage requires New Data Governance"
  3. To Drive a Lower Cost and Lower Risk to End-User Computing - Hitachi's Content Platform allowing for "Smart Home Directories for VDI" lowering the operational cost and risk of virtual desktop infrastructure

 

The vision of cloud agility is finally coming to life and Hitachi is excited to be at the forefront of solutions that accelerate deployment.

With the recent announcement of our VMware Cloud Foundation (VCF) powered UCP RS system to deliver a hybrid cloud reality (check Dinesh's blog here for details), one of the interesting questions from early prospects is how others are managing a hybrid private environment that consists of a traditional VMFS environment (and lately VVol) as they bring VMware vSAN based architectures into their environments. The outcome they want is a pool of resources, accessible to the various lines of business or application teams, that provides different characteristics while giving those consumers some level of intuitive control over where their assets will run, to ensure they can meet their intended SLAs.

 

Given the topic of Hitachi UCP RS and its VCF foundation, Amazon services come to mind.

ucp rs and vcf.png

Here are some Amazon EBS storage options to give a perspective on why this will be important in your VMware powered private hybrid cloud designs. Each separate EBS volume can be configured as EBS General Purpose (SSD), Provisioned IOPS (SSD), Throughput Optimized (HDD), or Cold (HDD) as needed. Amazon has stated that some of the best price/performance balanced workloads on EC2 do take advantage of different volume types on a single EC2 instance. For example, they mention seeing Cassandra use General Purpose (SSD) volumes for data but Throughput Optimized (HDD) volumes for logs, or Hadoop using General Purpose (SSD) volumes for both data and logs. This level of differentiation is a first step in providing tiers of service to consumers of cloud resources.

               Source: AWS Storage Options
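The Cassandra example above boils down to a simple workload-to-volume-type mapping. The volume type names (gp2, io1, st1, sc1) are the real EBS options, but the selection function itself is just an illustrative sketch of the decision, not an AWS rule:

```python
# Map workload characteristics to EBS volume types, following the
# Cassandra/Hadoop examples above. The mapping itself is illustrative.
EBS_TYPES = {
    "general_purpose_ssd": "gp2",
    "provisioned_iops_ssd": "io1",
    "throughput_optimized_hdd": "st1",
    "cold_hdd": "sc1",
}

def pick_volume_type(workload):
    if workload == "cassandra_data":
        return EBS_TYPES["general_purpose_ssd"]       # random-I/O data files
    if workload == "cassandra_logs":
        return EBS_TYPES["throughput_optimized_hdd"]  # sequential log writes
    return EBS_TYPES["general_purpose_ssd"]           # sensible default

print(pick_volume_type("cassandra_data"))   # gp2
print(pick_volume_type("cassandra_logs"))   # st1
```

The same instance mounts both volume types at once, which is exactly the per-disk tiering that SPBM policies bring to the on-premises side, as described below.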

 

But again, performance is just one layer. There are many characteristics when it comes to SLAs. Take the "availability" characteristic. As you may know, because an EBS volume is created in a particular availability zone, the volume becomes unavailable if that availability zone itself becomes unavailable. Resources aren't replicated across regions unless you do so specifically. Again, that might be an important characteristic for an app service being rolled out. (To be fair to AWS, they recommend creating snapshots, as snapshots of volumes are available across all of the availability zones within a region.)

 

This is an area that I've put some cycles into with the team when we defined the requirements around the latest release of our Hitachi VASA Provider (VP) version 3.4, to operationally enhance the right consumption of resources for vSAN, VMFS and/or VVol. Based on the VVol/SPBM program, we took advantage of some of the storage container concepts and the latest tagging capabilities in vSphere 6.x to provide a better experience. With the latest Hitachi VP software, VMFS datastores (whether adding datastore resources to an existing VCF based vSAN deployment or a separate traditional VMFS environment) are automatically tagged in vCenter with their specific SLA, including cost characteristics. Click to enlarge the GIF below to get a perspective of how the new VP WebUI (and API) provides the facility to assign capabilities to infrastructure resources, including automated vCenter tagging of VMFS datastores, while allowing vSAN datastore(s) to be similarly tagged with appropriate category capabilities. The end result is a much more intuitive description of the resource capabilities available across vSAN, VMFS and VVol.

 

WebUI and tags.gif

With this automated tagging of capabilities on existing and new datastores, vSphere policies can now be much richer and more descriptive to consumers. Click to enlarge the animated GIF below as it rolls through a typical vSphere policy, in this case a policy describing "Tier 1 Performance and DR Availability" with rulesets for VMFS, VVol and vSAN within the same policy. In my lab environment, this policy, with its Tier 1 performance, Tier 2 availability, and lowest cost capability, found matching storage on all three entities, allowing the consumer to pick one of the choices.

 

Policies with tags.gif

 

The VMFS datastore highlighted below was configured to provide the highest level of availability and performance (a GAD multi-datacenter active-active replicated LDEV using accelerated flash on F1500 with data-at-rest encryption), and the VP software automatically tagged the corresponding datastore with the following capabilities: Tier 1 availability and performance, encryption, and cost between 750 and 1000 units. This datastore would be a match when app owners or admins selected the "Tier 1 Performance, Encrypted and Active-Active availability" policy, which in my lab environment ruled out vSAN and VVol as potential targets.
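To illustrate how tag-based matching works conceptually, here is a sketch modeled loosely on that lab example. The tag names and values are hypothetical; the real mechanism is vCenter's SPBM engine, not code like this:

```python
# Conceptual sketch of policy matching against capability tags,
# modeled loosely on the lab example. All names are hypothetical.
datastores = {
    "vmfs-gad-01": {"performance": "tier1", "availability": "tier1",
                    "encryption": True, "cost": 900},
    "vsan-ds":     {"performance": "tier1", "availability": "tier2",
                    "encryption": False, "cost": 500},
}

policy = {"performance": "tier1", "availability": "tier1", "encryption": True}

def matches(capabilities, policy):
    """A datastore is compatible when every policy rule is satisfied."""
    return all(capabilities.get(k) == v for k, v in policy.items())

candidates = [name for name, caps in datastores.items() if matches(caps, policy)]
print(candidates)  # only the GAD-replicated, encrypted VMFS datastore qualifies
```

This is the essence of what the policy screen shows: each datastore either satisfies every ruleset in the policy or is filtered out of the compatible list.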

 

 

Take the Apache Cassandra application example from Amazon, which I wanted to deploy on the VCF powered UCP RS system. During provisioning, I assigned the appropriate, application-owner-understandable policy for each of the disks: the high performance, lower capacity data disks for the Cassandra VM landed on the vSAN datastore, while the log disk, 10x the size, landed on the iSCSI VMFS datastore. I didn't consume unnecessary storage from my all-flash vSAN, as the VMFS datastore (and VVol datastore) was a suitable match for the characteristics of the log data in this example. There is so much more that can be exploited when you consider that these capabilities can easily be extended and expressed for other infrastructure resources.

 

 

In summary, when it comes to provisioning resources, whether from vSphere Client or vRealize Automation with its SPBM awareness, these richer policies are selectable to ensure appropriate resources are chosen at the VM level, or indeed the VMDK level. Taking a leaf out of Amazon's book with EC2, this is the type of resource variability and ease of consumption needed to run a sustainable cloud environment meeting diverse needs across many application services as you update and modernize your infrastructure.

 

Check out the live demonstration of VCF powered UCP RS and Hitachi VASA (VP) Software at #VMworld 2017

Traditional-vs-Contemporary-Banking-Image-HighRes-a5.jpeg

I've recently moved from Horizontal Platforms to Vertical Solutions and I feel it might be a good time to revisit one of my old posts (Can we please stop telling Digital Enterprises to “act like a startup”?) and look at how this applies to one of my core customer segments: Retail banking.  Specifically, let's look at how their business differs from the Fintech startups and how to apply the three Digital Innovation practices (Infrastructure Modernization, Digital Workplace and Business Insight):

 

Practice 1: Infrastructure Modernization - "How can I run my apps more efficiently and deliver innovation faster?"

Retail banking needs to provide a full portfolio of services to its customers and not all of these are profitable.  By comparison, Fintech startups can choose to offer just the profitable services (e.g. Payments).  In order to stay in business these banks are forced to think about their applications in two categories:

  • Run the Bank (Core Banking systems and Mode 1) - These systems are important to the bank's reputation and their customer's experience but the services that run on them are not highly differentiated or very profitable.  These apps are often scale-up, fragile and changes are tightly controlled.  Core IT looks for opportunities to reduce costs by improving efficiency while still ensuring service levels are maintained.
  • Change the Bank (Digital Banking and Mode 2) - This is where LOBs focus their incremental investments.  These systems are focused on delivering new digital experiences to customers that will help the bank compete with the Fintech startups.  The focus here is on innovation and speed of time to market and the apps that run here are designed using modern scale-out web-ready methodologies.  These systems are resilient, auto-scaling and secure by design as they need to be able to face off to an unpredictable set of end user devices, third party providers and external threats.

There are good reasons to ensure strong isolation between these two parts of the bank.  The legacy systems are just not designed for the unpredictable workloads and volume of read/query activity associated with digital banking.  Mode 1 workloads are typically protected by perimeter security, whereas Mode 2 workloads face off to a variety of end user devices, third party systems and external threats; the digital systems will therefore implement micro-segmentation and a variety of techniques to guard against DDoS, for example.

 

But there is another element that is often missed when rethinking the platform to support Bimodal Banking: the Data Integration layer.  Both of these sides of the bank still need to fit into a joined-up multi-channel strategy and provide a seamless experience to the customer.  Both sides of the bank will form part of the customer 360 / KYC picture that the bank needs to implement.  Furthermore, the data in Mode 1 systems is often fragmented, and these systems need to be insulated from unpredictable workloads and threats, so Mode 2 systems will typically implement a separate caching layer or operational data store.  The data bridge between Mode 1 and Mode 2 systems is therefore a key success criterion that will determine how rapidly the bank can deliver new experiences to their customer base.  We therefore see this as a key part of the Digital Innovation Platform.
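The caching-layer idea behind that data bridge can be sketched as a simple read-through cache: Mode 2 reads are served from an operational data store, so the Mode 1 system of record is touched only on a cache miss. Everything here (class name, the stubbed core-banking call) is a hypothetical illustration of the pattern, not any bank's implementation:

```python
# Minimal read-through cache sketch illustrating the "data bridge" idea:
# Mode 2 reads hit an operational data store first, insulating the
# Mode 1 system of record from query load. All names are hypothetical.
class DataBridge:
    def __init__(self, core_banking_lookup):
        self._lookup = core_banking_lookup  # expensive Mode 1 call
        self._cache = {}                    # operational data store stand-in
        self.core_calls = 0

    def get_balance(self, account_id):
        if account_id not in self._cache:
            self.core_calls += 1
            self._cache[account_id] = self._lookup(account_id)
        return self._cache[account_id]

bridge = DataBridge(lambda acct: 100.0)  # stubbed core-banking response
bridge.get_balance("A1")
bridge.get_balance("A1")
print(bridge.core_calls)  # the second read never touched the core system
```

A production data bridge adds invalidation, change data capture from the core systems, and security controls at the boundary, but the insulating principle is the same.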

 

...In the next part of this blog I will look at the next two practices: Digital Workplace and Business Insight

OK, time for part 2. I'm back online after a few days of zip-lining and mountain biking through redwood trees and train tracks in northern California. As I was contemplating part 2, the biking time reminded me that the end game of infrastructure automation software is not too different. You want to spend the best quality bike time on the downhill, adrenaline-inducing, sweeping single track through the trees rather than the mundane paved path to the mountain. In other words: let infrastructure work for you with automation, rather than tediously working the infrastructure, to get better ROI from your quality time.

IMG_20170630_103003958.jpg

 

In Part 1 of this series, I started to peel back some of the well-known UCP Advisor features that our customers are using when deploying our infrastructure automation software, while sharing some of the updates we made in the most recent UCP Advisor v1.2 release. In this blog, I want to touch on aspects of networking management, day 0 and day 90 administration, and some cool integrated data protection features.

 

So on to networking. I covered automating all the aspects around deploying storage datastores and compute ESXi hosts in the previous post, and I wanted to complete the third leg: the important networking management aspect. From an IP networking perspective, I believe the two key aspects are VLAN management and topology views. When you update the VLANs on your distributed virtual switches, UCP Advisor provides an automated facility to synchronize VLANs to the top-of-rack and/or spine switches that make up your networking fabric. It also provides connectivity information so you can quickly determine the physical infrastructure connectivity topology between ESXi hosts and the IP infrastructure. You can visualize some of this by clicking on the animated GIF below. Of course, firmware upgrade management, which I'll chat about in part 3, is included for the networking switches.

 

network2.gif
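Conceptually, the VLAN synchronization step is a set difference between the VLANs defined on the distributed virtual switch and those configured on the physical switch. This sketch uses hypothetical VLAN IDs; UCP Advisor performs the actual sync through the switch management interfaces:

```python
# Conceptual VLAN synchronization: find VLANs present on the DVS but
# missing from a top-of-rack switch (and vice versa). IDs illustrative.
def vlan_sync_plan(dvs_vlans, tor_vlans):
    to_add = sorted(set(dvs_vlans) - set(tor_vlans))
    to_remove = sorted(set(tor_vlans) - set(dvs_vlans))
    return to_add, to_remove

add, remove = vlan_sync_plan(dvs_vlans={100, 200, 300}, tor_vlans={100, 400})
print("add to ToR:", add)        # [200, 300]
print("stale on ToR:", remove)   # [400]
```

Automating this diff-and-push is what removes the usual ticket round-trip between the virtualization and network teams.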

 

Circling back to day 0 operations from an administration perspective, most environments do or will end up with multiple appliances, whether it's 30 satellite offices each with local needs, or a datacenter with multiple UCP appliance pods for application, security and/or multi-tenancy requirements. UCP Advisor has a distributed model to manage multiple appliances from a single vCenter, including enhanced linked mode configurations (vSphere 6.5 is newly supported in the 1.2 release). Each appliance or logical configuration has a dedicated control VM (a small Win2k16-based CVM), which allows scalability to be limited only by the maximum number of ESXi hosts vCenter can manage, 1000 at last check. Each appliance or logical system can be quickly on-boarded using a CSV configuration that describes the appliance, and new infrastructure elements (e.g., adding a new chassis of compute on day 89) can be on-boarded using the UI. The administration tab also covers aspects like setting the schedule for automated backup of infrastructure configuration components, specifically the IP network and FC device configurations.

 

admin.gif
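CSV-based onboarding amounts to reading one row per infrastructure element. The column layout below is invented for illustration; the actual UCP Advisor CSV schema is documented in its user guide:

```python
# Hypothetical CSV onboarding sketch; the real UCP Advisor CSV schema
# differs (see the product documentation). Columns are invented here.
import csv
import io

CSV_TEXT = """type,name,mgmt_ip
compute,esxi-01,10.0.0.11
switch,tor-a,10.0.0.21
storage,vsp-g600,10.0.0.31
"""

def load_elements(text):
    """Parse one onboarding row per infrastructure element."""
    return list(csv.DictReader(io.StringIO(text)))

elements = load_elements(CSV_TEXT)
print(len(elements), "elements to onboard")
print(elements[0]["name"])  # esxi-01
```

The appeal of the CSV approach is repeatability: the same file describes a satellite-office appliance today and seeds the next identical site tomorrow.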

 

Speaking of data protection, UCP Advisor also provides integrated VM and datastore level operational backup and recovery capabilities when HDID software and its V2I component are recognized as being deployed. This is accessible through the data management services tab. With data protection moving to a snap-and-replicate model versus traditional backup, to meet both scalability and fast self-service recovery, I think this is an important inclusion. The ability to have every newly deployed VM automatically protected, and to do full or granular recovery of VM data at the drop of a hat, is key when your users need it, especially if it's a multi-TB VM and time is money. The GIF visual shows you some aspects of this, and there are more details on the VMware protection options in a previous blog I wrote a while back. For vSAN based UCP HC, HDID offers VADP based backup as well.

 

v2i.gif

In part 3, I'll freewheel home and close out by covering automated firmware management, physical workflow capabilities for bare metal support or custom infrastructure needs, and some of the vRO and PowerShell integrations that are available to further automate your cloud deployment with HDS UCP and UCP Advisor. Feel free to drop a comment or question on any aspect, or on what you would like me to cover in more detail.

We recently rolled out the latest release of UCP Advisor, v1.2, our flagship infrastructure automation software for converged, hyperconverged and standalone storage. In a previous blog, I included a longish voice-over video which rolled through the various features, but I thought I would take the opportunity to peel back the features in shorter bites while also referencing the latest value features introduced in version 1.2.

 

An essential element in converged automation is simplifying the operations and deployment of ESXi hosts, datastores, and virtual-to-physical VLAN synchronization actions. These entities are what UCP Advisor calls virtual/logical resources. <Click animated GIF for visual>

vw.gif

Take the all-important case of datastore management, which traditionally involves multiple admin groups and many days to complete service tickets. UCP Advisor provides an intuitive interface and workflows for VMFS/NFS datastore creation, hiding all the creation complexity in a single-click operation: it validates FC zoning across multiple SAN switches, checks that the WWPNs of the ESXi host(s) are in the active zone and storage host groups, performs storage LUN creation/masking, and finally attaches the datastore to the ESXi cluster. Provisioning now completes in under a minute. With the v1.2 release, we provide full end-to-end workflow support for iSCSI and NFS datastores as well.
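The zoning validation step in that workflow boils down to set membership checks. Here is a minimal sketch of the idea (my own illustration, not UCP Advisor's actual implementation): confirm every host WWPN appears in the active zone set before LUN masking and datastore attachment proceed.

```python
def missing_from_zone(host_wwpns, active_zone_wwpns):
    """Return host WWPNs absent from the active zone set.

    An empty result means zoning validation passes and LUN
    creation/masking and datastore attachment can proceed.
    """
    def norm(wwpn):
        # Compare case-insensitively with colons stripped.
        return wwpn.lower().replace(":", "")

    zone = {norm(w) for w in active_zone_wwpns}
    return [w for w in host_wwpns if norm(w) not in zone]

host = ["50:06:0E:80:12:34:56:78", "50:06:0E:80:12:34:56:79"]
zone = ["50060e8012345678"]
print(missing_from_zone(host, zone))  # the second WWPN is not zoned
```

In the single-click workflow, a non-empty result like this is what would surface as a validation failure instead of a half-provisioned datastore.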

 

But we have taken this a step further and also generate unique vCenter tags for the storage capabilities of the just-created VMFS datastore(s) using the associated HDS VASA Provider software (v3.4). The characteristics of that datastore (performance, availability, cost, encryption, and so on) are tagged and available for vSphere administrators to exploit in the vCenter policy-based management framework for provisioning operations, whether from vCenter or higher-level cloud automation. The vCenter tags also let admins quickly find all related objects, for example all datastores that match Tier 1 IOPS performance and provide data-at-rest encryption. Pretty cool SPBM for VMFS. <Click animated GIF for full visual>

ds-prov.gif
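The "find everything matching Tier 1 IOPS plus encryption" query above is essentially an intersection over capability tags. A toy sketch of that lookup, with hypothetical tag values of my own invention:

```python
def find_datastores(tags_by_datastore, required_tags):
    """Return datastores carrying every required capability tag."""
    required = set(required_tags)
    return sorted(ds for ds, tags in tags_by_datastore.items()
                  if required <= set(tags))

# Hypothetical capability tags for illustration only.
catalog = {
    "ds-gold-01": {"Tier 1 IOPS", "Encryption"},
    "ds-gold-02": {"Tier 1 IOPS"},
    "ds-silver-01": {"Tier 2 IOPS", "Encryption"},
}
print(find_datastores(catalog, ["Tier 1 IOPS", "Encryption"]))
```

In practice the tags live in vCenter and the matching is done by the policy-based management framework; the sketch just shows why tagging capabilities at creation time makes that query trivial.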

As referenced earlier, UCP Advisor supports vSAN-based hyperconverged systems like UCP HC (updated support in v1.2 for vSAN 6.6), converged infrastructure like UCP 2000 that uses compute and external storage, and a mode called Logical UCP which can manage flexible configurations including standalone storage. For vSAN-based UCP HC, UCP Advisor provides visibility into the health and capacity of each vSAN compute node's SSDs/HDDs that form the cache and capacity tiers of vSAN datastores, as well as visibility into unallocated devices. It also provides access to compute inventory, topology, and operations such as boot order, power, and LID operations, and most importantly firmware management, which I'll cover in a subsequent blog in this series. <Click on animated GIF for full visual>

 

vsan.gif

 

Speaking of ESXi compute nodes, UCP Advisor can also deploy new or unallocated ESXi compute nodes into ESXi clusters running on UCP 2000. It will surface unallocated compute nodes in the UCP 2000 configuration (which are SAN-boot ESXi nodes), check and update the node firmware to match the cluster, verify that the WWPNs of the new host are correctly configured in active SAN zones, and, after deployment, ensure all existing VMFS and NFS datastores in the cluster are available and presented to the new node(s). Again, this dramatically reduces the time-to-use for new compute resources added to the environment, providing the turnaround times now expected in the age of public cloud computing.

<Click on animated GIF for full visual>

 

deploy server.gif
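The firmware pre-check in that deployment flow can be sketched as a simple comparison of a node's component versions against the cluster baseline. This is my own minimal illustration, not the product's actual logic; the component names and versions are made up:

```python
def firmware_actions(node_fw, cluster_baseline):
    """Compare a node's component firmware against the cluster baseline.

    Returns the components (and target versions) that must be updated
    before the node can join the cluster.
    """
    return {component: wanted
            for component, wanted in cluster_baseline.items()
            if node_fw.get(component) != wanted}

# Hypothetical versions for illustration.
baseline = {"bios": "3.2.1", "cna": "11.4.0"}
new_node = {"bios": "3.1.0", "cna": "11.4.0"}
print(firmware_actions(new_node, baseline))  # {'bios': '3.2.1'}
```

An empty result means the node already matches the cluster and deployment can continue straight to the zoning and datastore-presentation steps.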

 

In part 2 of this series, I will cover aspects of networking management, onboarding administration, topology views, and integrated data protection, with more in part 3.

I recently published a short video blog, Let's hear it - Introduction to UCP Advisor, which introduced new converged and hyperconverged infrastructure automation and delivery software from HDS. There was some great feedback but, as expected, folks asked for more technical details and an opportunity to see the product in action. With that, here is a 20+ minute video I put together which walks through the product, including compute, storage, network, data protection, and advanced infrastructure management capabilities. As mentioned in the previous blog, the intent is to put infrastructure tasks within easy reach of administrators' fingertips so they can accelerate and manage the delivery of VM-based application services on that dynamic infrastructure.

 

 

Reminder: You can view the video on YouTube by selecting the icon at the bottom; if it starts looking blurry, ensure the quality setting is set to 720p.

 

<Updated video based on version 1.2, released in June 2017; here is a link to the 1.2-related blog>

Traditional agent-based backup and recovery solutions can dramatically impact the security, performance, and total cost of ownership of virtualized environments. As organizations expand their use of virtualization and hyperconverged infrastructure such as VMware vSAN, they need to closely examine whether their data protection strategy supports efficient, fast, secure backups that won't tax storage, network, budget, or computing resources. As data grows, the need for more frequent data protection and a variety of other challenges have forced administrators to look for alternatives to traditional backups.

 

Backup Challenges

Initially, most backup administrators chose to back up virtual machines by deploying backup agents to each individual virtual machine. Ultimately, however, this approach proved to be inefficient at best. As virtual machines proliferated, managing large numbers of backup agents became challenging. Never mind the fact that, at the time, many backup products were licensed on a per-agent basis. Resource contention also became a huge issue since running multiple, parallel virtual machine backups can exert a significant load on a host server and the underlying storage. Traditional backup and recovery strategies are not adequate to deliver the kind of granular recovery demanded by today’s businesses. Point solutions only further complicate matters, by not safeguarding against local or site failures, while increasing licensing, training and management costs.

 

Business benefit and Solution Value Propositions

Hitachi Data Instance Director (HDID) is the solution to protect Hitachi Unified Compute Platform HC V240 (UCP HC V240) in a hyperconverged infrastructure. The solution focuses on the VMware vStorage API for Data Protection (VMware VADP) backup option for software-defined storage. Data Instance Director protects a VMware vSphere environment as a 4-node chassis solution, with options for replicating data outside the chassis.

Hitachi Data Instance Director provides business-defined data protection so you can modernize, simplify and unify your operational recovery, disaster recovery, and long-term retention operations. HDID provides storage-based protection of the VMware vSphere environment.

 

Data Instance Director with VMware vStorage API for Data Protection provides the following:

 

  • Agentless backup using the VMware native API
  • Incremental backup that provides backup window reduction
  • Easy to implement and maintain for a virtualization environment
  • Easy to replicate backup data to other destinations or outside of chassis
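The backup-window reduction in the list above comes from copying only the blocks that changed since the last backup, as reported by changed-block tracking. A conceptual sketch (my own toy model of the idea, not HDID's or VADP's actual API):

```python
def incremental_bytes(changed_extents):
    """Sum the bytes an incremental pass must copy, given the
    changed (offset, length) extents reported by changed-block
    tracking since the previous backup."""
    return sum(length for _offset, length in changed_extents)

# Toy extents: only 3 MiB of a virtual disk changed since last backup,
# so the incremental pass copies 3 MiB instead of the whole disk.
extents = [(0, 1 << 20), (512 << 20, 2 << 20)]
print(incremental_bytes(extents) // (1 << 20), "MiB to copy")
```

Full backups must read every allocated block; an incremental pass driven by changed extents only touches what is listed, which is why backup windows shrink so sharply.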

 

Logical Design

The figure shows the high-level infrastructure for this solution.

 

Below are the use cases and results:

 

Use Case 1 — Measure the backup window and storage usage for a VMware VADP backup using Hitachi Data Instance Director on a VMware vSAN datastore.

Objective: Deploy the database VMDKs of eight virtual machines evenly across two VMware ESXi hosts with VMware vSAN datastores. The workload runs for 36 hours during the backup test. Take measurements with the quiesce option both enabled and disabled, covering an initial full backup and a later incremental backup.

Test results:

  • Initial full backup — backup time: 52 min; storage used: 1920 GB
  • Incremental backup, quiesce ON — backup time: 4 min 15 sec; storage used: 35.02 GB
  • Incremental backup, quiesce OFF — backup time: 2 min 25 sec; storage used: 34.9 GB

Use Case 2 — Create a cloned virtual machine from the Hitachi Data Instance Director backup.

Objective: Restore a virtual machine from a Hitachi Data Instance Director backup and measure the duration of the restore operation.

Test results:

  • Restore from HDID backup — restore time: 22 min 15 sec; storage used: 213 GB
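A quick back-of-the-envelope pass over the Use Case 1 numbers makes the incremental savings concrete:

```python
# Figures reported in Use Case 1 above.
full_gb, full_min = 1920.0, 52.0
incr_gb, incr_min = 35.02, 4.25  # quiesce ON: 4 min 15 sec

reduction = 1 - incr_gb / full_gb
print(f"incremental moved {reduction:.1%} less data")   # ~98.2% less
print(f"full backup rate: {full_gb / full_min:.1f} GB/min")
```

In other words, the CBT-driven incremental copied roughly 2% of the data the full backup did, which is why its window dropped from 52 minutes to a few minutes.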

 

Conclusion

With Hitachi Data Instance Director, you can achieve broader data protection options in a VMware virtualized environment. With VMware VADP changed block tracking (CBT), the backup window for incremental backups was relatively short and optimized.

 

  • Eliminate multi-hour backups without affecting performance
  • Simplify complex workflows and reduce operational and capital costs with automated copy data management
  • Consolidate your data protection and storage tasks
  • Get one-stop data protection and management

 

Sincere thanks to Jose Perez, Jeff Chen, Hossein Heidarian, and Michael Nakamura for their vital contributions in making this tech note possible.

 

Please Click Here to get the tech note, Protect Hitachi Unified Compute Platform HC with VMware vSphere and Hitachi Data Instance Director.