
Hitachi Solutions for VMware


Authors: Chetan Gabhane, Amol Bhoite

 

IT teams struggle, as always, to meet their backup and recovery objectives: minimize downtime and data loss, ensure that data is always recoverable, and overcome similar pervasive data protection challenges. Often, the inability to adequately protect data or services delays or prevents the deployment of critical applications or services. Businesses consistently report that data protection and the increased use of server virtualization remain high-priority IT initiatives.

In the world of digital transformation and distributed environments, data must be tracked and protected to meet availability requirements as well as security and compliance standards. However, data protection needs have changed with the modern digital business: faster and more reliable restoration is now a top priority for every business.

 

The answer to these problems is the Hitachi solution for data protection, which uses Hitachi Data Instance Director (HDID) to protect a hyperconverged infrastructure built on VMware vSAN.

 

HDID provides business-defined data protection so you can modernize, simplify, and unify your operational recovery, disaster recovery, and long-term retention operations. Data Instance Director provides storage-based protection of the VMware vSphere environment, with a modern, holistic approach to data protection, recovery, and retention. It has a unique workflow-based policy engine, presented in a whiteboard-style user interface, that helps map copy-data management processes to your business priorities. It includes a wide range of fully integrated storage-based and host-based incremental-forever data capture capabilities that can be combined into complex workflows to automate and simplify copy-data management.

 

Overview

Hitachi Data Instance Director protects hyper-converged infrastructure. This solution uses VMware vStorage API for Data Protection (VMware VADP), a backup option for software-defined storage.

 

In this solution, the workload VMs are created on a VMware vSAN datastore, while the master and repository virtual machines for Data Instance Director are created in a separate VMware vCenter environment.

As part of Data Instance Director backup activity, you back up the workload virtual machines to the repository virtual machine using VMware VADP, the vStorage API that backs up and restores vSphere virtual machines.
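VADP-based incremental backups rely on vSphere Changed Block Tracking (CBT): after an initial full copy, only the blocks that changed since the last backup are captured. The toy model below illustrates that incremental-forever pattern; it is a conceptual sketch, not HDID's implementation, and the class and function names are invented for the example.

```python
# Toy model of incremental-forever backup using changed-block tracking,
# the mechanism VADP exposes for vSphere VMs. Illustrative only.

class ChangeTrackedDisk:
    """A virtual disk that records which blocks changed since the last backup."""
    def __init__(self, blocks):
        self.blocks = dict(enumerate(blocks))  # block index -> data
        self.changed = set(self.blocks)        # before any backup, all blocks count as changed

    def write(self, index, data):
        self.blocks[index] = data
        self.changed.add(index)

def backup(disk, repository):
    """Copy only changed blocks into the repository, then reset tracking."""
    delta = {i: disk.blocks[i] for i in disk.changed}
    repository.update(delta)
    disk.changed.clear()
    return len(delta)  # number of blocks transferred

repo = {}
disk = ChangeTrackedDisk(["a", "b", "c", "d"])
first = backup(disk, repo)    # first run transfers all 4 blocks (full backup)
disk.write(2, "c2")
second = backup(disk, repo)   # next run transfers only the 1 changed block
```

The point of the sketch is the cost difference: the second backup moves one block instead of four, which is why incremental-forever capture scales well for large VM estates.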

 


 

Solution components

  • HDID for the hyperconverged solution using the VMware vStorage API (VADP backup)
  • VMware vSphere with VMware vSAN and other VMware products

Solution Architecture

Below is the storage architecture used for the testing.

Below is the HDID UI, including the node information for this test scenario.

Below is the HDID workflow for the policy.

Test Results

The tests below exercise the behavior of the HDID master and HDID repository, which are installed and configured on Windows Server 2012 VMs, under vSphere HA, vMotion, and FT. The following are test scenarios for HDID on a hyperconverged infrastructure using VMware vSAN.

 

Test scenarios: HDID Master and HDID Repository appliance (HA, vMotion, FT)

Test case 1: HDID Master VM HA test (vSphere HA and vMotion enabled)

1. Start an incremental backup of the workload VMs from the HDID GUI. Result: the backup starts.
2. Immediately shut down the host running the HDID Master VM. Result: the backup of the workload VMs continues, as the HDID VM is migrated to another host by VMware HA.

Test case 2: HDID Master VM migration (vMotion) (vSphere HA and vMotion enabled)

1. Start an incremental backup of the workload VMs from the HDID GUI. Result: the backup starts.
2. Migrate the HDID Master VM from one host to another. Result: the backup of the workload VMs continues as the HDID Master VM is migrated to another host using vMotion.

Test case 3: HDID Repository VM migration (vMotion) (vSphere HA and vMotion enabled)

1. Start an incremental backup of the workload VMs from the HDID GUI. Result: the backup starts.
2. Migrate the HDID Repository VM from one host to another. Result: the backup continues as the HDID Repository VM is migrated to another host using vMotion.

Test case 4: HDID Repository VM HA test (vSphere HA and vMotion enabled)

1. Start an incremental backup of the workload VMs from the HDID GUI. Result: the backup starts.
2. Immediately shut down the host running the HDID Repository VM. Results:
   1. The backup stalls at the point where the host is powered off, although the HDID Repository VM is migrated to another host and powered on automatically.
   2. After approximately 15 minutes, the backup fails due to resynchronization of the vSAN cluster to the HDID Repository, which is expected behavior.
   3. A new backup starts at the scheduled time, as defined by the HDID policy.

Test case 5: HDID Master VM with FT enabled (vSphere HA, vMotion, and FT enabled for the HDID Master VM)

1. Enable Fault Tolerance for the HDID Master VM (FT creates a secondary HDID Master VM and powers it on when the primary powers off).
2. Start an incremental backup of the workload VMs from the HDID GUI. Result: the backup starts.
3. Immediately power off the host running the HDID Master VM. Result: the HDID Master VM is migrated to another host, and the backup of the workload VMs continues.

Test case 6: HDID Repository VM with FT enabled (vSphere HA, vMotion, and FT enabled for the HDID Repository VM)

1. Enable Fault Tolerance for the HDID Repository VM (FT creates a secondary HDID Repository VM and powers it on when the primary powers off).
2. Start an incremental backup of the workload VMs from the HDID GUI. Result: the backup starts.
3. Immediately power off the host running the HDID Repository VM. Result: the HDID Repository VM is migrated to another host, and the backup of the workload VMs continues.

 

Note: HDID backup and restore throughput measured between 840 Mbps and 876 Mbps, that is, more than 100 MB/s.
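The "more than 100 MB/s" claim follows directly from the measured figures, since there are 8 bits per byte:

```python
# Convert the measured throughput from megabits per second to
# megabytes per second (8 bits per byte).

def mbps_to_mb_per_s(mbps: float) -> float:
    return mbps / 8

low = mbps_to_mb_per_s(840)   # 105.0 MB/s
high = mbps_to_mb_per_s(876)  # 109.5 MB/s
```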

 

The “HDID with vSAN” certification completed successfully: the solution carries the VMware vSAN Ready logo and is published on the VMware VSX marketplace. The links below are provided for reference:

 

Publication links:

Click here to get HDID on Hitachi UCP HC with VMware vSAN Data Protection Reference Architecture link

This solution has been certified as a VMware vSAN ReadyNode with the Hitachi Advanced Server DS120.

Click here to get VMware KB article link

Click here to get VMware VSX Marketplace link

Ready to leverage the power of data within your organization? Hitachi Vantara and Cisco offer the technology leadership you need to manage, share and access data — quickly, flexibly and securely.

 

By Sean Siegmund - Applications and Converged Technical Advisor and Tim Darnell - Product Owner - UCP CI

 

Nearly every company recognizes the need to manage and leverage the enormous volume of data that’s available today. Real-time insights about customers, competitors, trading partners, industry trends and other strategic topics have the power to shape the success of the entire business.

 

However — as Nick Winkworth discussed in his blog, “Unlocking Your Data” — most companies find themselves with an outdated IT infrastructure and a set of disparate legacy systems that aren’t ready to handle the size and scale of today’s data management challenge.

 

To address this problem, longtime partners Hitachi Vantara and Cisco have joined forces to launch an innovative solution called Cisco Validated Design (CVD). Companies can leverage CVD to quickly and cost-effectively achieve the speed, scale, flexibility, and security they need to capitalize on their data assets.

 

Thousands of customers have already established their IT landscapes by using Cisco’s computing and networking solutions, along with Hitachi’s enterprise-class block storage technologies. Now, with Cisco Validated Design, customers can take advantage of the Cisco and Hitachi Adaptive Solutions for Converged Infrastructure (Adaptive Solutions for CI) — which enables them to deploy industry-leading Cisco and Hitachi technologies as a private-cloud converged infrastructure.

 

Adaptive Solutions for CI is a robust and scalable architecture, leveraging the technology strengths of both Cisco and Hitachi Vantara. It includes the following components, each of which represents a best-in-class solution from an established industry leader:

 

 

By bringing these leading technologies together in a converged infrastructure, Cisco Validated Design offers three key advantages: speed, flexibility, and security.

 

Speed: Accelerate Your Data Results

 

Automation, artificial intelligence, machine learning, and other advanced technologies have not only increased the availability of real-time data, but also increased its strategic value. Companies can now sense and respond to changes before they disrupt the business. But that requires uninterrupted, 24/7 access to real-time data.

 

While most people think of speed in terms of network or SAN connections, real speed needs to occur across the scope of the technology implementation — and throughout its entire lifecycle, beginning with initial deployment. With Cisco Validated Design, both Hitachi and Cisco have invested heavily in validating designs and technologies that support speed. Customers benefit from best practices, performance tuning, and interoperability within the CVD platform.

 

As the CVD solution comes online, storage and network connections are designed to enable 16 to 32 Gb FC line rates, with anywhere from 1 to 100 Gigabit FCoE speeds. The number of network connections is also reduced by using Cisco UCS Fabric Extenders (FEX) or I/O Modules (IOMs), which multiplex and forward all traffic from the servers in a blade server chassis to a pair of Cisco UCS Fabric Interconnects.

 

Setting up the network also becomes a breeze, as the use of the Cisco UCS Virtual Interface Card (VIC) 1400 Series extends the network fabric directly to both servers and virtual machines so that a single connectivity mechanism can be used to connect both sets of workloads (FC and FCoE), with the same level of visibility and control.

 

And, with Hitachi storage, all systems scale to meet the demands of IT organizations’ ever-increasing workloads. Depending on their specific needs, customers can select from mid-range storage capacity (600k IOPS to 2.4M IOPS) or enterprise-class storage (4.8M IOPS).
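A rough sizing sketch based on the IOPS ranges quoted above: given a required workload IOPS figure, pick the smallest tier that covers it with headroom to spare. The tier labels and the 70% utilization ceiling are assumptions for this illustration, not Hitachi sizing guidance.

```python
# Illustrative storage-tier selection based on the IOPS ranges in the text.
# Tier names and the headroom factor are assumptions for the example.

TIERS = [
    ("midrange-entry", 600_000),
    ("midrange-high", 2_400_000),
    ("enterprise", 4_800_000),
]

def pick_tier(required_iops: int, headroom: float = 0.7):
    """Return the smallest tier whose rated IOPS covers the requirement
    while staying under the headroom ceiling, or None if nothing fits."""
    for name, rated in TIERS:
        if required_iops <= rated * headroom:
            return name
    return None

tier = pick_tier(1_000_000)  # a 1M-IOPS workload needs the higher mid-range tier
```

Real sizing would also weigh latency, capacity, and growth, but the shape of the decision (requirement versus rated ceiling with headroom) is the same.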

Flexibility: Become More Agile and Responsive

 

If the modern business world has taught us anything, it’s that both technology and data are moving targets. The increasing adoption of artificial intelligence and other innovations means that IT infrastructures must be built with flexibility and agility in mind. Cisco and Hitachi have perfected their hardware to address this need.

 

The Cisco Unified Computing System™ (Cisco UCS) is a next-generation data center platform that integrates computing, networking, storage access, and virtualization resources into a cohesive system designed not only to reduce total cost of ownership but also to increase business agility.

From a system management perspective, the Cisco UCS Fabric Interconnects (FIs) provide a single point of connectivity and management for the entire Cisco UCS system. Typically deployed as an active-active pair, the fabric interconnects integrate all components into a single, highly available management domain controlled by Cisco UCS Manager. Available in Gen 3 and Gen 4 configurations, the fabric interconnects, Cisco Nexus 9000 series switches, and the Cisco MDS 9000 switch family are tailored to port-count needs and storage FC connection rates.

 

Also, the Hitachi Virtual Storage Platform G1x00 (VSP G1x00) and the all-flash Hitachi Virtual Storage Platform F1500 (VSP F1500) unified storage systems provide high performance, high availability, and reliability for always-on, enterprise-class data centers. Based on Hitachi's industry-leading storage technology, the Hitachi Virtual Storage Platform G350, G370, G700, G900 — along with the all-flash Hitachi Virtual Storage Platform F350, F370, F700, and F900 — include a range of versatile, high-performance storage systems that deliver flash-accelerated scalability, simplified management, and advanced data protection.

 

As with all Hitachi solutions, upgrades are easy to implement as conditions change. It is cost-effective and straightforward to increase storage capacity and add new user licenses.

 

Security: Protect Your Critical Data Assets

 

No one understands the strategic importance of your data more than Hitachi Vantara. Hitachi storage systems are designed with reliable data protection and security in mind. Not only is every storage system in CVD completely secure, but it is backed by Hitachi’s 100 percent data availability guarantee. If a Hitachi storage system ever fails to deliver availability, it is replaced free of charge.

 

With a mix of different drive types (HDD, SSD, FMD) and parity group options available, Hitachi storage systems can be configured to meet each customer’s unique needs, and these solutions always follow strict security protocols for even the most stringent industries, including banking and health care.

 

Hitachi Storage Virtualization Operating System (SVOS) RF is the latest version of SVOS. Flash performance is optimized with a patented flash-aware I/O stack, which accelerates data access. Adaptive inline data reduction increases storage efficiency while enabling a balance of data efficiency and application performance. Industry-leading storage virtualization allows SVOS RF to use third-party all-flash and hybrid arrays as storage capacity, consolidating resources for a higher return on investment and providing a high-speed front-end to slower, less predictable arrays.
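Inline data reduction generally combines deduplication (store each unique block once) with compression of the unique blocks. The toy pipeline below shows why that saves capacity; it is a conceptual sketch only, not the SVOS RF algorithm, and the function name is invented for the example.

```python
import hashlib
import zlib

# Toy inline data-reduction pipeline: deduplicate identical blocks by
# content hash, then compress the unique blocks. Conceptual sketch only.

def reduce_inline(blocks):
    store = {}    # content hash -> compressed unique block
    layout = []   # logical order of blocks, by hash, for later reads
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = zlib.compress(block)
        layout.append(digest)
    return store, layout

# Three logical blocks, one of which is a duplicate.
blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096]
store, layout = reduce_inline(blocks)
# Only 2 unique (compressed) blocks are stored for 3 logical blocks,
# and the original data is fully recoverable from store + layout.
```

The trade-off the text calls "adaptive" is real: hashing and compressing on the write path costs CPU, so production arrays balance reduction ratio against application performance.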

 

SVOS RF adds to an already impressive set of storage-efficiency features on flash-enabled systems, including Hitachi Dynamic Provisioning and Hitachi Dynamic Tiering.

Start Increasing Your Data Performance Today

 

Cisco Validated Design represents an advanced solution that is within reach of every company, no matter how outdated the current technology environment. With this solution, Hitachi and Cisco have carefully designed a converged infrastructure platform, which partners two technology companies rated as leaders in their respective domains. This best-of-breed converged infrastructure solution makes it simple and straightforward for organizations to modernize their IT landscape and begin leveraging the full power of their data assets.

Visit the Cisco and Hitachi Adaptive Solutions for CI design guide to learn more about the advanced technologies behind Cisco Validated Design.

There are already a number of interesting contributions available in this community looking at various aspects of this topic. This blog is intended to give an overview of these different aspects and to assist with navigation to more detailed content.

 

Managing complexity and total cost of ownership in the data center is a challenge for any large enterprise. Besides hardware consolidation (using server and storage virtualization), the standardization of methodologies and tools is one way to achieve this, thanks to simplified license and version management.

 

Data protection and the efficient use of capacity are common requirements of workloads across the data center. Storage-based replication is a well-understood method to provide HA and DR for many mission-critical applications, and it helps reduce the need to implement unique, solution-specific methodologies for different workloads. The VSP product family offers snapshot, cloning, and replication capabilities to support these requirements.

 

The storage functionality on its own is a good first step; however, a common tool that manages that functionality in the context of the application adds even more value, as the solution profile for HDID describes:

 

Data protection is complicated. Each application and data source has its own unique method for making backup copies. Do you use separate solutions for physical and virtual machines, and for Oracle and SAP HANA platforms, and for remote replication?

 

HDID is well integrated into the Hitachi Vantara product portfolio and allows customers to benefit from a common tool to manage Backup/Recovery for many common applications.

The functionality of HDID goes well beyond basic backup/recovery, also helping to address requirements like data governance. However, this kind of “Enterprise Copy Data Management” is beyond the scope of this blog and is covered in depth by Rich Vining.

 

Now let's have a closer look at VMware integration.

 

For several years, Hitachi Vantara has provided a set of adapters that simplify the integration of VSP storage functionality into the VMware ecosystem.

 

One of these adapters is responsible for integration with VMware Site Recovery Manager (SRM), which manages replication between two sites and allows failover of VMs and datastores between sites, all managed from vCenter. SRM can use its own software replication, but it can also take advantage of array-based replication if the datastores are hosted on capable hardware such as the Hitachi arrays.

 

Managing the replication setup and policies manually requires significant knowledge.

With the recent release of HDID 6.7, new functionality was introduced for VMware and integrated into the VMware Storage Replication Adapter (SRA), so HDID can now be used to manage storage replication configuration and policies.


You can get more details about the HDID integration with VMware Site Recovery Manager and vRealize Orchestrator as well as other updates for the VMware adapters in Paul's blog.

 

Here are some details on the second major integration topic.

 

vRealize Orchestrator is a VMware tool that allows tasks to be automated using workflows. Hitachi Vantara has produced workflows that automate some HDID tasks such as backing up or restoring a VM. These workflows can be invoked from vSphere allowing a user to invoke HDID tasks straight from the VMware interface.
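Conceptually, such a workflow translates a vSphere-side action ("back up this VM") into a task request handed to the backup service. The sketch below illustrates only that hand-off pattern: the field names and structure are hypothetical, not the real HDID API or the shipped vRO workflows.

```python
# Sketch of the hand-off a vRO workflow performs: turn a vSphere action
# into a backup-task request. All field names here are hypothetical,
# for illustration only; consult the HDID and vRO documentation for
# the actual interface.

def build_backup_request(vm_name: str, policy: str) -> dict:
    """Assemble the request a workflow would submit to the backup service."""
    return {
        "operation": "backup",
        "source": {"type": "vm", "name": vm_name},
        "policy": policy,
    }

req = build_backup_request("app-server-01", "daily-incremental")
```

The value of the integration is exactly this indirection: the vSphere admin triggers the action from the interface they already use, while the policy details stay under the backup admin's control.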

 

Here is a short illustration of how the combination of HDID, VMware, and VSP storage supports a division of work between the “Backup Admin” and the “vSphere Admin” when protecting VMs.


A short introduction to how “Backup Admins” can create HDID policies for VMware, using either the HDID GUI or API, can be found in John Wesley’s blog.

 

Here, let me introduce Hitachi Unified Compute Platform CI (UCP CI). UCP CI v2.0 has been available since August 2018. This version supports new hardware such as the Hitachi Advanced Server DS220. For virtualization solutions, UCP CI v2.0 supports the DS120, DS220, and DS240 servers and all Hitachi VSP storage series.

 

Version 2.1, released in December 2018, updates the UCP CI system and adds support for the Hitachi Advanced Server DS225 with a GPU solution. The UCP CI architecture consists of Intel-based rackmount servers, Hitachi storage, and switches. (Product Overview)


Hitachi provides two solutions here:

  • HDID provides a policy-based replication solution integrated with VMware.
  • UCP CI provides a pre-validated configuration and a turnkey solution.

Together, they help owners reduce configuration and operating costs.

 

 

Today, I introduced the replication solution using HDID and the turnkey solution with UCP CI. If you are interested in them, please check them out on the Hitachi Vantara website.

 

Thank you for taking time to read this article.

Introduction:

It’s time to modernize your IT infrastructure, and your team is looking to collect the information needed to select the right foundation for your strategic business needs. The ideal solution optimizes existing applications and uses CapEx and OpEx efficiently. Not all of your IT systems will transfer to a cloud platform today, but leaving systems stuck in a traditional IT infrastructure will mean more headaches to come. Converged Infrastructure (CI) and Hyperconverged Infrastructure (HC) solutions have what you need to manage day-to-day IT operations and let you tackle tomorrow’s challenges efficiently. The decision on your hands requires a balanced approach that supports the many different workloads and applications living on your data center floor today.

With these needs in mind, architects at Hitachi Vantara have built a set of UCP solutions targeted at hyperconverged and converged deployments. These infrastructure solutions use the same core hardware components and leverage the same management and automation tools. With them, workloads can be placed on the underlying hardware they require, keeping IT systems highly available while still meeting the performance demands of the business.

Converged Versus Hyperconverged

The popularity of both of these technologies rests with the fact that both make it easier for administrators to interact with the underlying hardware. The essential difference between CI and HC is how the storage and network components are deployed and used. Both use virtualization and the same compute CPUs and memory, and both use disks in a hybrid (flash and non-flash) or all-flash form. Hyperconverged (HC) infrastructures, however, use local direct-attached storage (vSAN) along with software-defined storage and networking. You can manage all resources from a single administration pane, including the hypervisor (virtual) and hardware (physical) elements. Hyperconverged infrastructure solutions operate in a flat architecture, where the compute nodes handle the bulk of the work, including all storage capabilities.

Converged Infrastructure (CI) combines compute, storage and network as a single resource. The elements are pre-defined by the hardware vendor and can also include automation and orchestration management software. The storage sub-system connects to a SAN and supplies storage to the compute infrastructure. When using Hitachi Enterprise storage, memory DIMMs within the storage platform are caching all I/O, and the storage array is capable of 32 Gb FC speeds per port.

The choice between these two technologies comes down to a few questions: are your compute resources budgeted for specific clustered workloads, or do you need infrastructure with dedicated components for the network and storage layers? Can the business benefit from a self-service model, or would today’s mundane tasks benefit from the orchestration of a CI solution?

CI and HC solutions involve a number of trade-offs and benefits, and they should be evaluated on an organizational and divisional basis when tuning HC servers for a given workload. The workloads running within an HC environment should have similar characteristics so they share the HC resources properly; otherwise, competing workloads can be migrated to another HC cluster. HC allows end users to test and tune development tasks without significant interaction with HC infrastructure administrators, and it lets IT departments and stakeholders deploy on demand with a tailored HC platform while enabling a common administration framework across all implemented HC solutions.

Conversely, CI solutions embrace many different workloads due to the features of an Enterprise Storage system and other hardware components.  Within a virtualized environment, CI accelerates hardware deployment of new systems and increases performance and resource utilization. With a CI system, you can rest assured that the system is ready to function with no interoperability pitfalls.  CI systems are a set of resources which can be provisioned across multiple divisions while using a qualified hardware platform common for IT resources.

Hitachi has a proven track record and deep expertise in striking the right balance within each of these platforms, underscoring the need for a partner skilled in finding the right fit for your business rather than one specialized in any particular infrastructure. As part of your digital transformation, do you have a partner skilled in positioning the best overall infrastructure solution to meet your business needs?

Hitachi UCP HC

The Hitachi Unified Compute Platform (UCP) for HC is a set of compute nodes approved by VMware to run as vSAN ReadyNode configurations. Hitachi uses the DS120, DS220, and DS225 models as the compute resource for the Hitachi UCP HC platform. Several configurations have been certified, ranging from the HY4–HY8 to the AF4–AF8 series. With these variations, Hitachi makes available a wide range of offerings to match the specific workload the platform needs to handle.

A vSphere cluster composed of vSAN ReadyNodes is closely tied to ESXi versions and other VMware software components. All Hitachi UCP HC configurations allow for an SSD cache and your choice of a hybrid (SATA) or all-flash capacity tier. Within the UCP HC solution, NVIDIA GPUs are also available, allowing improved performance for high-demand workloads.
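When sizing the capacity tier of a vSAN cluster, raw disk does not equal usable space: with RAID-1 mirroring and failures-to-tolerate (FTT) of 1, each object is stored twice, and planning guidance typically reserves slack space on top of that. The sketch below is a simplified estimate under those assumptions; it ignores metadata overhead, deduplication, and per-VM storage policies.

```python
# Rough vSAN usable-capacity estimate for a cluster using RAID-1 mirroring.
# Simplified sketch: assumes one uniform storage policy, ignores metadata
# overhead and space efficiency features. Mirroring with FTT failures to
# tolerate stores (FTT + 1) copies of each object; a 30% slack-space
# reserve is a common planning assumption.

def usable_capacity_tb(nodes: int, capacity_tb_per_node: float,
                       ftt: int = 1, slack: float = 0.30) -> float:
    raw = nodes * capacity_tb_per_node
    copies = ftt + 1
    return raw * (1 - slack) / copies

# Example: 4 nodes x 10 TB raw with FTT=1 -> about 14 TB usable for VM data.
estimate = usable_capacity_tb(4, 10.0)
```

The same formula makes the cost of higher protection visible: raising FTT to 2 with mirroring drops usable capacity to a third of the post-slack raw pool.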

In September 2018, the Hitachi UCP HC solution added an important emerging technology that is sure to be part of the future of all modern data centers. NVMe is a fast data-access protocol that improves the performance of flash and other non-volatile storage devices. It debuts in the Hitachi UCP HC platform and will be rolled out across the Hitachi storage product line starting in 2019. For more information about the UCP HC All-NVMe solution, please see the following links.

  • Announcement

            https://www.hitachivantara.com/en-us/news-resources/press-releases/2018/gl180926.html

UCP HC product management blog – Hitachi + VMware vSAN + Intel Optane NVMe = Turbocharged Hyperconverged Infrastructure
https://community.hitachivantara.com/people/DISINGH/blog/2018/09/26/hitachi-vmware-vsan-intel-optane-nvme-turbocharged-hyperconverged-infrastructure

The UCP HC solution grows as needed simply by adding nodes to the configuration, whereas competitors upgrade in blocks of nodes. With software-defined storage and networking, there is no need for extra backend storage or resources outside the UCP HC compute cluster. Hitachi UCP for HC gives teams the simplified infrastructure they need to align with their business needs and sustain important SLAs. In a vSphere vSAN-based cluster, all of the VMware tools required for storage and networking are included to maintain, expand, and support critical business applications. For more information and an overview of the Hitachi UCP HC platform, please see the following link:

https://www.hitachivantara.com/en-us/products/converged-systems/unified-compute-platform-hc-series.html

Hitachi UCP CI

The Hitachi UCP CI platform is a set of compute nodes, storage systems and network switches combined to present a pre-tested, pre-configured converged infrastructure with VMware vSphere. The solution is delivered as a single rack or in a multi-rack configuration. Hitachi UCP CI utilizes DS120, DS220, DS225, DS240 for compute and VSP Gx00 or VSP Fx00 for storage. UCP CI offers a wide range of element configurations for flexibility and performance regardless of workload requirements. These components can be customized and budgeted to consolidate your current mixed workloads on to a single infrastructure platform. You can grow the platform when needed and optimize the configuration to efficiently use resources to handle the demands of the business today and be ready for tomorrow.

UCP CI integrates management infrastructure, storage systems, and physical switches for network and SAN. As a made-to-order infrastructure, there is a wide range of configuration choices to handle any virtualized workload.  Each compute node has Intel Xeon Skylake processors in Silver, Gold, and Platinum CPU configurations.

The UCP CI platform allows the use of Hitachi VSP G (flash-optimized) and Hitachi VSP F (all-flash) storage systems. Based on single-rack or multi-rack designs, one of the following storage systems can be selected:

  • VSP G200
  • VSP G/F350
  • VSP G/F370
  • VSP G/F400
  • VSP G/F600
  • VSP G/F700
  • VSP G/F900
  • VSP G/F1000
  • VSP G/F1500

UCP CI supports using the same node type across the solution, or an intermix of DS120, DS220, DS225 and DS240 nodes along with the many different Hitachi Storage Product offerings. The UCP CI infrastructure is an agile environment where Hitachi customers benefit from placing many different IT solutions within the same UCP CI. With our latest release of UCP CI, Hitachi Engineering has developed a seamless integration of HC nodes into a standard UCP CI platform. The incorporation of these solutions means you can now utilize UCP CI-based nodes for your mixed workload environments and run HC vSAN-based clusters side-by-side – all in one single Hitachi platform. For more information and overview of the Hitachi UCP CI platform, please see the following link:

https://www.hitachivantara.com/en-us/products/converged-systems/unified-compute-platform-ci-series.html

Hitachi UCP Advisor

UCP Advisor management software from Hitachi supports both the UCP CI and UCP HC solutions. The Hitachi UCP Advisor provides detailed information about the infrastructure components and allows you to manage operations for connected devices. The UCP Advisor software is deeply integrated with VMware leveraging vCenter server and VCSA. It offers exceptional value for both platforms and the administrators who maintain and deploy them.

UCP Advisor simplifies infrastructure operations. Seamless integration allows automated provisioning of UCP systems for both converged and hyperconverged infrastructure. It provides unified management, central oversight, and smart life-cycle management for firmware upgrades, element visibility, and troubleshooting.

Benefits of UCP CI and UCP HC

With the combination of these two solutions and their flexibility, you will see the Hitachi converged strategy as a foundation for many of the practical challenges facing our customers today. IT departments have to manage resources under ever-growing scrutiny, and new technologies continue to come to market at a faster pace. Today more than ever, you need a strategic infrastructure solution that can cover all of your stakeholders’ needs. While each department within your organization may have different needs, does it make sense for each department to purchase vendor-defined hyperconverged and converged platforms that your company will need more specialized skills to support?

Infrastructure solutions like UCP CI and UCP HC are transformative architectures compared to traditional infrastructures. However, this transformation does not need to be disruptive to every application in your domain. Start early to identify which IT systems have the best chance to succeed in a self-service hyperconverged model, and which belong in a converged infrastructure where workloads can be accurately evaluated before graduating to a hyperconverged tier. You now have a platform from Hitachi that supports both technologies without wasteful loss of assets or IT spend.

Now all parts of the IT landscape can see some advancement with the available architectures, and the blending of UCP HC with UCP CI allows a phased approach to your ever-changing landscape. As your partner in transformation, Hitachi is the best choice to transform your IT platform and will stand with you through all phases of your transformation efforts.

 

Conclusion

Hitachi Vantara has played an essential role in the IT systems of the past and will continue to do so in the future. With over 100 years of experience in the “Hitachi Way,” aren’t you interested in partnering with a company whose engineering spirit is alive and thriving in many of today’s Fortune 500 companies? As an early adopter of both technologies, Hitachi builds its roadmap around continued customer satisfaction. Partner with Hitachi Vantara and get your infrastructure solution ready for tomorrow’s challenges.


 

Check out the latest blog that CTO Paul Lewis shared as we gear up for Hitachi Solutions for VMware's #VMworld 2018.

 

I Call Dibs on MODERNIZE IT

Hitachi Infrastructure Analytics Advisor (HIAA) is software that delivers visualization, intelligence and automation to optimize infrastructure health while quickly identifying and troubleshooting performance issues.

In my first post, I introduced a use case to identify and resolve a performance bottleneck in a Hitachi Unified Compute Platform CI (UCP CI) environment.

(Reference: https://community.hds.com/community/products-and-solutions/vmware/blog/2017/10/11/end-to-end-infrastructure-performance-analysis-of-hitachi-unified-compute-platform-ci-simplified-with-hitachi-infrastructure-analytics-advisor-hiaa)

 

Hitachi Infrastructure Analytics Advisor – Hitachi Data Center Analytics SaaS edition (HIAA – HDCA SaaS edition)

HIAA – HDCA SaaS edition is now available to you. This is a new consumption model for HIAA: Hitachi provides a HIAA instance running on the public cloud, while delivering the same user experience as the on-premises version of HIAA.

Picture1.jpg

1. Ready to Use

HIAA – HDCA SaaS edition removes the installation process. Hitachi provides a preconfigured HIAA server instance on the public cloud, so customers can start managing their systems from the HIAA GUI as soon as they obtain their access information. There is no HIAA server to prepare.

    • No physical or VM-based server required
    • No installation or configuration

 

2. Maintenance Free

The availability of HIAA – HDCA SaaS edition is guaranteed: Hitachi monitors the HIAA instance and keeps it alive. Thanks to this service, customers can focus on managing their own systems.

Hitachi manages:

    • Instance Management
    • Version Management

 

3.  Subscription model (SaaS)

In HIAA – HDCA SaaS edition, the subscription price is determined by the amount of performance information the customer wants to upload. Thanks to this pricing model, customers can start using the solution at a lower cost than with a full upfront payment model.

 

Combination of HIAA – HDCA SaaS edition & UCP CI

HIAA – HDCA SaaS edition is the better fit for UCP customers who prefer low-touch infrastructure management, because customers do not need to look after the HIAA – HDCA SaaS edition instance; Hitachi provides the management service for it.

 

HIAA – HDCA SaaS edition gives customers a server-less model, which means they can run their user applications and VMs at a higher density.

 

System Overview

This is the system overview. The basic components, the Probe server and the HIAA server, are the same as in the on-premises version of HIAA, but the HIAA server moves to the public cloud. In most cases, customers protect their systems with a firewall.

 

Prerequisites

1. Network configuration

a.    The Probe server communicates with the HIAA server over HTTPS; no other protocols are required. Work with your network administrators to allow outbound connections on HTTPS port 443, for example by changing the firewall settings or going through an HTTPS proxy.

 

b.    On the public cloud side, the Web Application Firewall (WAF) must allow connections from your network. Hitachi needs to allowlist the public IP addresses of your network, so please contact your Hitachi representative.
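Once the firewall or proxy change is in place, the outbound path can be pre-checked from the Probe server before contacting support. This is a minimal sketch; `hiaa.example.com` is a placeholder for the endpoint hostname Hitachi provides with your access information.

```shell
#!/bin/sh
# Pre-check outbound HTTPS (port 443) from the probe host to the SaaS
# endpoint. HIAA_HOST is a placeholder; substitute the hostname that
# Hitachi provides with your access information.
HIAA_HOST="${HIAA_HOST:-hiaa.example.com}"

check_https() {
    # Succeeds (exit 0) if a TCP connection to port 443 opens within 5 seconds.
    if command -v nc >/dev/null 2>&1; then
        nc -z -w 5 "$1" 443
    else
        # Fall back to curl; any response at all means the port is reachable.
        curl -s -o /dev/null --connect-timeout 5 "https://$1/"
    fi
}

if check_https "$HIAA_HOST"; then
    echo "outbound 443 to $HIAA_HOST: OK"
else
    echo "outbound 443 to $HIAA_HOST: blocked (check firewall/proxy/WAF)"
fi
```

If the check reports blocked, revisit item (a) with your network administrators before assuming a problem on the Hitachi side.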

 

Picture2.jpg

 

2.    Access information from Hitachi

Hitachi provides customers with the following HIAA – HDCA SaaS edition access information:

    • HIAA server information for uploading performance data
    • HIAA GUI access information for logging in to the console

 

Example Configuration

I am going to introduce an example configuration in which HIAA – HDCA SaaS edition manages a UCP CI system in an on-premises data center.

 

1.    Configuration in on-premises data center

The installation and configuration processes on the on-premises side are the same as before. (Reference: https://community.hds.com/community/products-and-solutions/vmware/blog/2017/12/29/part2-end-to-end-infrastructure-performance-analysis-of-hitachi-unified-compute-platform-ci-simplified-with-hitachi-infrastructure-analytics-advisor-hiaa)

 

The only difference is to switch the data upload server to the HDCA server, which is one of the components of HIAA – HDCA SaaS edition. The on-premises Probe server has to connect to HDCA: log in to the Probe server, then enter the HDCA server information.

 

Picture3.jpg

After a few hours, configuration and performance data will appear in the GUI.

 

2.    HIAA – HDCA SaaS edition configuration

As I mentioned before, customers can start using HIAA – HDCA SaaS edition immediately. Here, I recommend creating a “Consumer Group”; this is a handy feature for identifying pods later.

 

Picture4.png

Use Case

1.    Generally, administrators use vCenter to observe system health. If an unexpected workload arises, the administrator will suspect that a noisy-neighbor issue has occurred. In this situation, the administrator checks system performance in the vCenter performance monitor, for example CPU, datastore, and so on.

Picture5.png

 

2.    On the HIAA GUI dashboard, there is an alert on the storage system.

Picture6.png

 

3.    Drilling down into the storage system, there are alerts on Cache and Parity Groups. The administrator has to find the root cause of the high cache write pending (CWP) rate.

Picture7.png

 

4.    At this point, HIAA provides its bottleneck-analytics feature. The Base Point is the origin from which administrators start their investigation. We are going to start from Cache because we need to check why CWP is so high.

Picture8.png

 

5.    HIAA shows the suspected sources of the high workload: which volumes are generating the heaviest I/O.

Picture9.png

 

6.    As another approach, HIAA can show how busy the Parity Groups are. In this use case, Parity Group utilization exceeds 80%, and Parity Groups 04-03 and 04-04 are busy at this time, so to keep up with the user's workload the administrator decides to install more HDDs to enhance drive performance.

Picture10.png

 

7.    Which HDP pool should receive the additional HDDs? HIAA can show where they should be installed.

Picture11.png

This shows that the overloaded Parity Group (04-03) belongs to Pool 14.

 

8.    After the HDDs are installed, Auto Rebalance starts working to distribute the workload evenly. It continues for a while, and afterwards system performance is boosted and users experience enhanced performance.

Picture12.png

Conclusions

In this article, I introduced HIAA – HDCA SaaS edition, which runs on the public cloud. Hitachi prepares it for customers, so they can start using it instantly. This lowers both the OPEX and CAPEX of performance management, while HIAA – HDCA SaaS edition provides the same user experience as the on-premises model.

Software is often improved through version upgrades, and of course Hitachi provides software updates.

So far, I have used the combined solution of HIAA and UCP CI.

UCPandHIAA.jpg

I decided to install HIAA 3.2 in place of 3.1. HIAA provides scripts for no-hassle upgrading. Let me give you an overview of upgrading from version 3.1 to 3.2 in my case.

(Please follow the procedure which provided in the User Guide when you perform to upgrade your system.)

Picture1.jpg

Upgrade installers are provided for both Windows and Linux environments. I am going to walk through the workflow for a Linux host.

 

0. Overview of the upgrade process

There are four steps in the upgrade process.

process.jpg

1. Copying the files

You need to obtain the software media and copy the files to the host. I copied the files using scp, but you can copy them any way you like.

 

Run these commands from the laptop:

#scp -r /mount-point/ANALYTICS root@HDCA_host:/root/

#scp -r /mount-point/DCAPROBE root@probe_host:/root/

 

Drawing1.jpg

 

2. Stop services and back up the configuration

Don't forget to back up the settings of your environment. Please refer to the user guide for how to stop the HIAA/HDCA/Probe server services.

To back up the HIAA server, the backupsystem command is available; it copies the configuration files. You also need to copy some other files on the HDCA server and Probe server; the user guide lists which files should be copied.
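As a rough sketch, this step might look like the following on the HIAA/HDCA host. The service-stop and backupsystem invocations are left as commented placeholders because their exact names and options depend on your release; take them from the user guide.

```shell
#!/bin/sh
# Step 2 sketch: stage a backup directory before upgrading.
# BACKUP_ROOT and the commented-out commands are placeholders.
BACKUP_ROOT="${BACKUP_ROOT:-/tmp}"
BACKUP_DIR="$BACKUP_ROOT/hiaa-backup-$(date +%Y%m%d)"
mkdir -p "$BACKUP_DIR"

# 1) Stop the HIAA/HDCA/Probe services (commands per the user guide):
#    <service-stop-command>

# 2) Copy the HIAA configuration with the documented backupsystem command:
#    backupsystem <options> "$BACKUP_DIR"    # exact syntax: see the user guide

# 3) Copy the extra HDCA/Probe files the user guide lists into $BACKUP_DIR.

echo "backup staged in $BACKUP_DIR"
```

Keeping everything under one dated directory makes it easy to roll back if the upgrade scripts fail partway through.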

 

3. Run scripts

Here we go. It is time to run the upgrade scripts.

3.1 HIAA/HDCA server

Before running the upgrade, you must save the certificate file. Please follow the detailed instructions in the user guide.

Then, run the upgrade scripts.

# cd /root/ANALYTICS    (the destination directory where you copied the files in step 1)

# ./analytics_install.sh VUP

Wait until the completion message appears on screen.

 

3.2 Probe server

Before running the upgrade, you must save the certificate file. Please follow the detailed instructions in the user guide.

Then, run the upgrade scripts.

# cd /root/DCAPROBE    (the destination directory where you copied the files in step 1)

# ./dcaprobe_install.sh VUP

Wait until the completion message appears on screen.

 

4. Post upgrade

After all the upgrade steps are finished, you can access the HIAA website, but remember to clear your browser cache first.
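Before logging back in, you can confirm from a terminal that the web interface is answering again. A minimal sketch; `hiaa.example.com` is a placeholder for your own HIAA server address.

```shell
#!/bin/sh
# Post-upgrade check: confirm the HIAA web UI responds over HTTPS.
# HIAA_URL is a placeholder; use your own server address.
HIAA_URL="${HIAA_URL:-https://hiaa.example.com}"

if command -v curl >/dev/null 2>&1; then
    # -k tolerates a self-signed certificate; %{http_code} is 000 on failure.
    status=$(curl -k -s -o /dev/null -w '%{http_code}' --connect-timeout 5 "$HIAA_URL") || true
else
    status="curl-not-installed"
fi
echo "HIAA web UI check: $status"
```

Any HTTP status code (200, 302, and so on) means the server is up; 000 means it is still unreachable.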

 

These are all the steps that I performed. If you perform an upgrade, please read carefully and follow the steps in the user guide.

For your reference, here are some links.

Upgrade your Infrastructure Analytics Advisor environment - Hitachi Vantara Knowledge

Infrastructure Analytics Advisor 3.2.0 Documentation Library - Hitachi Vantara Knowledge

 

Thank you for your time. See you in the next article.

Hi, it’s time for Part 2. I will show you the setup procedure and a use case.

 

(Part1: https://community.hitachivantara.com/community/products-and-solutions/vmware/blog/2017/10/11/end-to-end-infrastructure-performance-analysis-of-hitachi-unified-compute-platform-ci-simplified-with-hitachi-infrastructure-analytics-advisor-hiaa)

 

Environment Preparation and HIAA installation

I set up the configuration in our lab. In this post, I will walk through the workflow for HIAA with UCP CI in particular.

 

Picture1.jpg

Fig1: Logical Configuration

 

UCP CI Preparation

 

  • Create an LDEV as a command device on the VSP G600

The HIAA Probe communicates with the VSP G600 command device to retrieve storage information, including configuration and performance statistics. This requires a Fibre Channel connection between the VSP G600 and the server running the HIAA Probe server VM.

 

  • Install Brocade Network Advisor

The HIAA Probe cannot communicate with a Brocade SAN switch directly. If you would like to check Brocade switch performance, you need Brocade Network Advisor (BNA) to retrieve the information.

 

HIAA Installation & Initial setup

There are two options for installing HIAA: deploying the OVA virtual machine image, or running the installer on a host.

 

I chose the first option (OVA deployment). I created three VMs as shown in Fig1: VM1 for the HDCA/HIAA server, VM2 for the Probe server, and VM3 running Windows as the host OS for BNA.

 

Detailed HIAA installation instructions are available here:

https://knowledge.hds.com/Documents/Management_Software/Infrastructure_Analytics_Advisor/3.1.0/Install_Infrastructure_Analytics_Advisor/Installing_HIAA_in_an_OVA

 

After installing HIAA, I added the target probes. A probe is a module that retrieves information from target machines; probes for Hitachi storage, BNA and vCenter are available.

https://knowledge.hds.com/Documents/Management_Software/Infrastructure_Analytics_Advisor/3.1.0/Add_probes_to_Analytics_probe_server

 

If you don’t see any information in HIAA and there is no error, I recommend waiting an hour. It may take time for data to appear, because the retrieved information is handed over across three servers (Probe – HDCA – HIAA).

 

Grouping of components, “Consumer”

When components are registered in HIAA, they are recognized as just individual resources. “Consumer” is a great feature for grouping resources: I made a Consumer that groups the UCP CI resources, which gives the administrator a clear view of the UCP CI converged system.

 

Screen Shot 2017-12-28 at 3.42.10 PM.jpg

Fig2: Inventory List (not yet grouped)

 

Screen Shot 2017-12-28 at 3.41.54 PM.jpg

Fig3: Create Consumer screen

 

Use Case

I will show you an example performance-analytics use case for UCP CI with HIAA.

 

1. Finding a performance problem in vCenter

In most cases, the administrator of a virtualized environment manages the infrastructure with vCenter. Let's say the administrator finds excessively high latency reported for storage (fc2) in vCenter. (Fig4)

4.jpg

Fig4. Storage view in vCenter (Latency)

 

2. Problem Analytics using HIAA

From this step we hand over to HIAA. Log in to HIAA and check the Dashboard.

5.jpg

Fig5. HIAA Dashboard

 

A "Critical" alert comes up in the Dashboard. Then, jump into the E2E view.

6.jpg

Fig6. E2E View

 

This E2E view shows the relationship of VMs to LDEVs. In this screen, HIAA shows that a storage processor and an LDEV are busy, while the VMs and host server are fine. These VMs use the LDEV and MP in the G600 that are marked as Critical.

Let's go back to vCenter and open the VM Monitor tab, where we can see the performance of the disks mounted on each VM. Disk performance actually dropped for VM fc-02, while another VM, fc-18, started write I/O to its disk. I would like to improve performance, but I also want to keep the I/O of both fc-02 and fc-18.

7.jpg

Fig7. Disk of VM fc-02 performance (from VM performance view)

 

8.jpg

Fig8. Disk of VM fc-18 performance (from VM performance view)

 

Let’s drill down to find the bottleneck.

 

3. Bottleneck investigation

First, I checked storage performance. From the screen below (Fig9), only MPU-10 is busy; this issue must have been caused by uneven MP unit assignment. The workload of MPU-10 is above the critical line (red line), while the other MPUs are idle. In this example, MPU-10 is the primary bottleneck.

9.jpg

Fig9. Sparkline view

 

4. Performance improvement

Next, what can we do to solve the overload on MPU-10? The primary option is offloading the workload to the other MPUs.

I have to change the MP unit assignment configuration. This operation distributes the workloads to the other MPUs.

Then we need to identify which LDEVs should be moved. Candidate LDEVs are shown below (Fig10): click the busy MP, and HIAA shows the LDEVs related to that MPU.

10.jpg

Fig10. Relationship LDEV with MPU

 

I distributed the assignments across all MP units, and then all MPUs started working evenly. (Fig11)

11.jpg

Fig11. After resolving MPU-10 overload

 

Finally, MP overload was resolved.

12.jpg

Fig12. All resources are fine

 

Conclusion

I introduced the value of combining HIAA and UCP CI. In the use case section, I showed you one example of resolving a performance issue in a UCP CI environment.

I hope you enjoy the HIAA and UCP CI solution. Thank you for taking the time to read.

 

For further information, please see the links below.

HIAA:

https://www.hitachivantara.com/en-us/pdf/solution-profile/hitachi-solution-profile-it-analytics.pdf

 

Videos on YouTube:

"Detecting Performance Bottlenecks using E2E view in Hitachi Infrastructure Analytics Advisor"

https://youtu.be/LkDoO3MA1x4

 

"Dynamic Threshold Storage Resource Monitoring With Performance Analytics, Using HIAA"

https://youtu.be/9WlpUx8inNA

 

"Using HIAA to Analyze a Performance Bottleneck in Shared Infrastructure"

https://youtu.be/fGFj7lLiYX4

 

"Detecting Performance Bottlenecks Using Sparkline View"

https://youtu.be/VTezCGUniR8

 

UCP CI:

https://www.hitachivantara.com/en-us/pdf/datasheet/hitachi-datasheet-unified-compute-platform-ci.pdf

Hitachi Vantara has launched its new converged infrastructure, Hitachi Unified Compute Platform CI (UCP CI). Today, I would like to introduce a performance analysis solution for UCP CI.

 

Hitachi Infrastructure Analytics Advisor (HIAA) delivers visualization, intelligence and automation to optimize infrastructure health while quickly identifying and troubleshooting performance issues. UCP CI is an optimized and scalable converged infrastructure platform. In this series of posts, we will cover use cases of what can be done with HIAA and UCP CI together.

 

Fig1 shows an example of an end-to-end (E2E) map, which shows the topology from a specific running VM, through the connected switch, to the storage LUN it uses.

 

Picture1.jpg

Fig1: HIAA E2E View

 

In this series of posts, we will cover:

  • Introducing the combined solution of HIAA & UCP CI
  • Installation & Configuration
  • Introducing use cases

 

Hitachi Infrastructure Analytics Advisor (HIAA)

Hitachi Infrastructure Analytics Advisor (HIAA) includes the tools to properly monitor and analyze performance statistics from the application through its entire data path to the shared storage resources. Converged infrastructure like UCP CI generally gives customers easy management, but it also conceals the details of the infrastructure, which can make troubleshooting difficult.

 

The features of HIAA address these pain points.

 

Some of the key features include:

  • Monitoring of switches, OS and hypervisors
  • E2E topology mapping
  • Performance comparison and related changes
  • Bottleneck identification and root cause analysis

 

More information:

https://www.hitachivantara.com/en-us/pdf/solution-profile/hitachi-solution-profile-it-analytics.pdf

 

Also, the HIAA team has posted great videos on YouTube. Check them out!

 

Detecting Performance Bottlenecks using E2E view in Hitachi Infrastructure Analytics Advisor

https://youtu.be/LkDoO3MA1x4

 

Dynamic Threshold Storage Resource Monitoring With Performance Analytics, Using HIAA

https://youtu.be/9WlpUx8inNA

 

Using HIAA to Analyze a Performance Bottleneck in Shared Infrastructure

https://youtu.be/fGFj7lLiYX4

 

Detecting Performance Bottlenecks Using Sparkline View

https://youtu.be/VTezCGUniR8

 

Analyze Configuration Changes in Your Infrastructure to Solve Performance Problems

https://youtu.be/NzMhSeLdOQ8

 

(Updated on 10/19) HIAA v3.2 is now available. HIAA v3.2 supports integration with Hitachi Storage Management Pack for VMware vRealize Operations (vROps) v1.7. Thanks to this integration, vROps retrieves storage performance, capacity and related health metrics from HIAA. Note that Hitachi Tuning Manager is no longer supported, and the management pack is available from the VMware marketplace.

 

 

Hitachi Unified Computing Platform CI Series (UCP CI)

Hitachi Vantara launched UCP CI in September 2017. This is a new series of Hitachi converged infrastructure. The UCP CI architecture consists of Intel-based rackmount servers, Hitachi storage and switches.

 

UCP CI Components Overview:

  • Hitachi Advanced Server DS120
  • Brocade G620 SAN switch
  • Hitachi Virtual Storage Platform (VSP) hybrid and all-flash arrays (G/F1500, G/Fx00)

 

More information:

https://www.hitachivantara.com/en-us/pdf/datasheet/hitachi-datasheet-unified-compute-platform-ci.pdf

 

Combination of HIAA & UCP CI

The combination of HIAA with UCP CI provides many benefits to customers running UCP CI virtualized environment.

UCP Advisor is the management software sold with UCP CI that simplifies configuration and management of the UCP CI converged infrastructure.

 

HIAA provides an additional value for customers with the ability to monitor, analyze and troubleshoot system performance issues by showing an end-to-end topology and system-wide relationship of hardware and software components.

 

In addition, HIAA can show detailed performance statistics for the entire UCP CI stack, ranging from storage and SAN to the hypervisor (VMware).

 

The dashboards and charts are extremely helpful for absorbing large amounts of performance related information in an organized and simplified manner.

Picture2.jpg

Configuration

This is an overview of the configuration built in our Solution Lab.

 

UCP CI

  • Hitachi Advanced Server DS120
  • Brocade G620 (via Brocade Network Advisor(BNA))
  • Hitachi VSP G600 (SVOS 83-04-23-40/01)
  • VMware vSphere 6.x, vCenter 6.x

 

Extra Software

  • Hitachi Performance Analytics 3.0 (HIAA 3.1 and HDCA 8.1)
  • Brocade Network Advisor (BNA) 14.0.1 (To observe SAN Switch performance, BNA is required.)

 

Picture4.jpg

Fig2: Configuration Overview

 

Free Trial License available

We can provide a free version of HIAA for customer trials. There is no functional limitation, but it expires 90 days after installation. If you are interested in the trial license, please contact the HIAA PM D-List or the author (Koji Watanabe).

 

You can also obtain a 120-day trial version of BNA from the Brocade website.

 

What's coming up next...

Today, I have introduced the value of combining HIAA and UCP CI. Together, these two products provide low-touch infrastructure and easy performance management and analysis.

 

I will show you "Performance troubleshooting using HIAA" in the second post of this series. Stay tuned!

(Part2: End-to-End infrastructure performance analysis of Hitachi Unified Compute Platform CI simplified with Hitachi Infrastructure Analytics Advisor (HIAA) )

Last week I was out in Las Vegas at VMworld 2017, an incredible event both for VMware and for us at Hitachi! At a high level, VMware clearly demonstrated that not only is Private Cloud accelerating, but Hybrid Cloud is now a reality, and the future rests on cross-cloud services tied to network and security virtualization.

 

Beyond the hype (after all, this is Las Vegas...), it's clear that both the Private and Public Cloud are maturing quite quickly and that enterprise clients are looking to accelerate from strategy to execution. While some initial thinking around the cloud centered on cost savings, it's clear today that the real gains come from the agility associated with Private/Hybrid Cloud. Being able to "run any application, in any cloud, on any device" provides enterprises the opportunity to build and run their applications across a wide variety of infrastructure, platform and consumption models, driving increased flexibility and more rapid innovation. Most importantly, it gives enterprises the flexibility to develop applications on a variable-cost basis, with the option to bring them back in-house should business requirements change.

 

For more thoughts on the VMworld show and my personal reflections on the future please visit "The Clouds are Clearing...VMworld 2017 Reflections and Predictions"

 

I'd also encourage you to read my colleague Bob Madaio's thoughts "A (mostly) Grown-up Take on VMworld"

 

So what does it mean to Hitachi? Well, the maturation of Private and Hybrid Cloud is exciting because it enables us, at Hitachi, to deepen our relationships with our clients. Specifically, as it relates to VMware and cloud adoption, we leveraged the show to demonstrate three key offerings:

 

  1. To Accelerate Private Cloud - Hitachi's NEW Unified Compute Platform (UCP) offerings powered by VMware Cloud Foundation, allowing customers to simply deploy their private clouds on VMware Cloud Foundation in either a hyperconverged or rack-scale footprint
  2. To Accelerate Hybrid Cloud Adoption - Hitachi's Data Services vision powered by Hitachi Content Intelligence and Pentaho Analytics, offering centralized governance, analytics and compliance across multiple clouds. If you are interested in better understanding our perspective on compliance and governance of data today, tomorrow and into the future, I'd encourage you to read our CTO Hu Yoshida's blog "New Data Sources and Usage requires New Data Governance"
  3. To Drive Lower Cost and Lower Risk in End-User Computing - Hitachi's Content Platform allowing for "Smart Home Directories for VDI," lowering the operational cost and risk of virtual desktop infrastructure

 

The vision of cloud agility is finally coming to life and Hitachi is excited to be at the forefront of solutions that accelerate deployment.

Traditional-vs-Contemporary-Banking-Image-HighRes-a5.jpeg

I've recently moved from Horizontal Platforms to Vertical Solutions and I feel it might be a good time to revisit one of my old posts (Can we please stop telling Digital Enterprises to “act like a startup”?) and look at how this applies to one of my core customer segments: Retail banking.  Specifically, let's look at how their business differs from the Fintech startups and how to apply the three Digital Innovation practices (Infrastructure Modernization, Digital Workplace and Business Insight):

 

Practice 1: Infrastructure Modernization - "How can I run my apps more efficiently and deliver innovation faster?"

Retail banking needs to provide a full portfolio of services to its customers and not all of these are profitable.  By comparison, Fintech startups can choose to offer just the profitable services (e.g. Payments).  In order to stay in business these banks are forced to think about their applications in two categories:

  • Run the Bank (Core Banking systems and Mode 1) - These systems are important to the bank's reputation and their customer's experience but the services that run on them are not highly differentiated or very profitable.  These apps are often scale-up, fragile and changes are tightly controlled.  Core IT looks for opportunities to reduce costs by improving efficiency while still ensuring service levels are maintained.
  • Change the Bank (Digital Banking and Mode 2) - This is where LOBs focus their incremental investments.  These systems are focused on delivering new digital experiences to customers that will help the bank compete with the Fintech startups.  The focus here is on innovation and speed of time to market and the apps that run here are designed using modern scale-out web-ready methodologies.  These systems are resilient, auto-scaling and secure by design as they need to be able to face off to an unpredictable set of end user devices, third party providers and external threats.

There are good reasons to ensure strong isolation between these two parts of the bank.  The legacy systems are just not designed for the unpredictable workloads and volume of read/query activity associated with digital banking.  Mode 1 workloads are typically protected by perimeter security, whereas Mode 2 workloads face off to a variety of end-user devices, third-party systems and external threats - the digital systems will therefore implement micro-segmentation and a variety of techniques to guard against DDoS, for example.

 

But there is another element that is often missed when rethinking the platform to support Bimodal Banking: the Data Integration layer.  Both of these sides of the bank still need to fit into a joined-up multi-channel strategy and provide a seamless experience to the customer.  Both sides of the bank will form part of the customer 360 / KYC picture that the bank needs to implement.  Furthermore, the data in Mode 1 systems is often fragmented, and these systems need to be insulated from unpredictable workloads and threats, so Mode 2 systems will typically implement a separate caching layer or operational data store.  The Data Bridge between Mode 1 and Mode 2 systems is therefore a key success criterion that will determine how rapidly the bank can deliver new experiences to their customer base.  We therefore see this as a key part of the Digital Innovation Platform.

 

...In the next part of this blog I will look at the next two practices: Digital Workplace and Business Insight

Traditional agent-based backup and recovery solutions can dramatically impact the security, performance and total cost of ownership of virtualized environments. As organizations expand their use of virtualization and hyper-converged infrastructure like VMware vSAN, they need to closely examine whether their data protection strategy supports efficient, fast, secure backups that won’t tax storage, network, budget or computing resources. As data grows, the need for more frequent data protection and a variety of other challenges have forced administrators to look for alternatives to traditional backups.

 

Backup Challenges

Initially, most backup administrators chose to back up virtual machines by deploying backup agents to each individual virtual machine. Ultimately, however, this approach proved to be inefficient at best. As virtual machines proliferated, managing large numbers of backup agents became challenging. Never mind the fact that, at the time, many backup products were licensed on a per-agent basis. Resource contention also became a huge issue since running multiple, parallel virtual machine backups can exert a significant load on a host server and the underlying storage. Traditional backup and recovery strategies are not adequate to deliver the kind of granular recovery demanded by today’s businesses. Point solutions only further complicate matters, by not safeguarding against local or site failures, while increasing licensing, training and management costs.

 

Business benefit and Solution Value Propositions

Hitachi Data Instance Director (HDID) is the solution to protect Hitachi Unified Compute Platform HC V240 (UCP HC V240) in a hyperconverged infrastructure. The solution focuses on the VMware vStorage API for Data Protection (VMware VADP) backup option for software-defined storage. Data Instance Director protects a VMware vSphere environment as a 4-node chassis data solution, with options for replicating data outside the chassis.

Hitachi Data Instance Director provides business-defined data protection so you can modernize, simplify and unify your operational recovery, disaster recovery, and long-term retention operations. HDID provides storage-based protection of the VMware vSphere environment.

 

Data Instance Director with VMware vStorage API for Data Protection provides the following:

 

  • Agentless backup using the VMware native API
  • Incremental backup that reduces the backup window
  • Easy to implement and maintain in a virtualized environment
  • Easy replication of backup data to other destinations or outside the chassis

 

Logical Design

The figure shows the high-level infrastructure for this solution.

 

Below are the use cases and results.

 

Use Case 1 — Measure the backup window and storage usage for the VMware VADP backup using Hitachi Data Instance Director on a VMware vSAN datastore.

Objective: Deploy the eight virtual machines' DB VMDKs evenly on two VMware ESXi hosts with VMware vSAN datastores. The workload runs for 36 hours during the backup test. Take the measurement with the quiesce option both enabled and disabled. The test covers an initial full backup and a later incremental backup.

Test results:

  • Initial full backup: backup time 52 min; storage used 1920 GB
  • Incremental backup with quiesce ON: backup time 4 min 15 sec; storage used 35.02 GB
  • Incremental backup with quiesce OFF: backup time 2 min 25 sec; storage used 34.9 GB

Use Case 2 — Create a cloned virtual machine from the Hitachi Data Instance Director backup.

Objective: Restore a virtual machine after taking a Hitachi Data Instance Director backup. Measure the duration of the restore operation.

Test results:

  • Restore backup with HDID: restore time 22 min 15 sec; storage used 213 GB
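As a quick sanity check on the Use Case 1 numbers, the relative cost of the incremental backup can be derived with a one-liner (the GB and minute values are copied from the results above):

```shell
# Compare the incremental backup (quiesce ON) with the initial full backup,
# using the figures reported above.
awk 'BEGIN {
    full_gb = 1920;  full_min = 52          # initial full backup
    incr_gb = 35.02; incr_min = 4.25        # incremental, 4 min 15 sec
    printf "incremental size:   %.1f%% of full\n", incr_gb / full_gb * 100
    printf "incremental window: %.1f%% of full\n", incr_min / full_min * 100
}'
# prints:
# incremental size:   1.8% of full
# incremental window: 8.2% of full
```

In other words, once the initial full copy exists, each incremental pass moves roughly 2% of the data and completes in under a tenth of the full-backup window.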

 

Conclusion

With Hitachi Data Instance Director, you can achieve broader data protection options in a VMware virtualized environment. With VMware VADP changed block tracking (CBT), the backup window for the incremental backup was relatively short and optimized.

 

  • Eliminate multi-hour backups without affecting performance
  • Simplify complex workflows and reduce operational and capital costs with automated copy-data management
  • Consolidate your data protection and storage tasks
  • Get one-stop data protection and management

 

Sincere thanks to Jose Perez, Jeff Chen, Hossein Heidarian and Michael Nakamura for their vital contributions in making this tech note possible.

 

Please Click Here to get the tech note Protect Hitachi Unified Compute Platform HC with VMware vSphere and Hitachi Data Instance Director.


You’ll recall from my last blog that I volunteered to give a presentation to an organisation in London, and I found myself having signed up to deliver an evening lecture to the Institution of Engineering and Technology on the subject of cloud. I had managed to pull some material together and coerce a colleague into sharing some of the load by applying an equal degree of vagueness in the description!

 

The Event….

So we had a story, I had a willing partner to help share the challenge, we had overcome the anticipation and were as ready as we could be! The presentation was polished during the day in between client meetings, and we headed to the venue for the evening event.

 

The building where the lecture was to be given wasn’t intimidating at all, nor was all the signage hanging in the entrance hall in anticipation.

 


 

 

As if the pressure couldn’t have been any greater, the venue for our talk was none other than the Alan Turing Lecture Theatre, named after arguably the founding father of modern computing. The registered attendees numbered 100-150; there was to be tea and coffee on arrival, followed by a drinks and nibbles reception afterwards, with the night concluding around 9 PM.

 


 

We quickly set up, dumped our bags and then headed to the nearest watering hole for a sherbet and lemonade as a steadier in preparation for the event! On our return we kicked off and were introduced on stage by the event organiser. Surprisingly (for me), the audience seemed to be very aware of cloud technologies and the cloud field in general; I was therefore sincerely hoping they would be able to get something out of the event.

 

Sylvain and I delivered our presentation, which was well received. The audience listened intently and made notes. We covered the HEC value proposition, the key differences between Public and Private Cloud, and the fact that our HEC solution offers the public cloud consumption experience of self-service and pay-per-use with the security and latency benefits of retaining IT on premise in a client's data centre. We covered our SLA-driven approach to selling, our pricing being more competitive than a Public Cloud alternative, and having a holistic solution to address a changing market.

 

Following the presentation, we took some fantastic questions from the audience, which were very balanced and somewhat different from what we had heard before due to the diversity of the audience. People were very keen to understand our IoT story as well as our approach to things like machine learning algorithms. The questions would have continued beyond the allotted time, but were stopped by the organisers to allow us to retire to the drinks reception.

 


 

 

The aftermath……


Now that the event was over we could relax, and we managed to meet many of the members and people from the audience. The feedback was good and they enjoyed the lively debate; some areas of particular interest were our views on edge-based data analytics and machine learning integration with cloud IT. I found these discussions very enlightening, hearing opinions on the industry from outsiders who have a different (and often very well informed) perspective on what we are doing.

 

I managed to team up with a small group including a Dutchman involved in 3D printing of industrial wind turbine blades (who kindly liberated a bottle of wine for us from the main table) and a retired gentleman who was very well read on the subject of cloud computing following a 60-year career in IT. I avoided mentioning that I was born halfway through his career, but I think I got away with it.

 

In conclusion…..

Although I started this as a “never volunteer for anything” story, that’s not how I look back on the experience. Often we choose to do things squarely inside our comfort zone; however, it’s very fulfilling to step outside this now and again. We also tend to stick to the social and professional circles of our peers, or of customers looking to buy what we have to offer. I found it particularly enlightening to hear the opinions of people with a really diverse set of backgrounds whom I would never ordinarily come into contact with. So I’d say, in conclusion: take the time to do things you wouldn’t ordinarily do and hear from people you wouldn’t expect to speak to – you’ll be pleased you did.

 

 

Material…

IET Blog of the event

Presentation Slides

 

Neil Lewis

With memories of Sapper Featherstone, British Army - Royal Engineers circa 1946

Alright, this is the technical part, describing how to build the blueprint and what to configure in NSX to make it work as described in the overview. Let's get started, shall we?

 

Getting started

First things first: I have to create a list of requirements in order to master all the challenges such a micro DMZ concept brings. Let's see what we need:

  • NSX installed and ready to be used
    • Integrated with HEC
    • Security groups for Web and DB
    • Virtual wire for DB
    • Edge configured and ready for external traffic
    • DLR (Distributed Logical Router) configured and ready (OSPF, etc...)
    • Security Tags for DB and WEB server
  • Hitachi Enterprise Cloud
    • Linux blueprint / image to use for WEB and DB server
    • Software components to install such as Apache, MySQL, PHP5, etc...
    • Network reservation for on-demand DMZ (routed-on-demand-network) and the DB network (static)

OK - that should be it. In this part I will focus on the NSX config in the blueprint and the designer, assuming everything else is just fine and has been pre-configured and installed by our fine consulting folks. Just like a customer, I am eager to use it - not to install it.

Set up the NSX Tags and security policies

OK, I decided to start with the very important and yet super complex NSX integration...

Alright, you got me there; it is actually not that complex to integrate.

 

First I created some NSX Security Tags. These can be used to identify VMs and run actions based on the found tags. They can also be a smart way of dynamically adding VMs to security groups in NSX. In order to use them in the HEC blueprint canvas, the Tags need to pre-exist in NSX.

OK, got it, but where do you create these Tags in the first place?

 

Well, this is done in the NSX management in vCenter. To create custom security tags, follow these steps:

  1. Go to the home screen in vCenter and click Networking and Security
  2. In the left-hand menu, click NSX Managers
  3. Select your NSX Manager by clicking on it
  4. Click the Manage tab
  5. Select the Security Tags button in the headline of the Manage tab
  6. Click the New Security Tag symbol at the top left of the table to add a tag
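If you prefer automation over clicking through the Web Client, the same tags can also be created through the NSX-v REST API. The sketch below is a hedged, minimal example: the endpoint path and XML shape follow the NSX 6.x securitytags API, but the manager hostname and credentials are placeholder assumptions, and you should verify the call against your NSX version (and use proper TLS verification in production).

```python
import base64
import urllib.request

NSX_MANAGER = "https://nsx-manager.example.local"  # assumption: your NSX Manager FQDN

def build_tag_payload(name: str) -> str:
    """XML body the NSX-v securitytags endpoint expects."""
    return (
        "<securityTag>"
        "<objectTypeName>SecurityTag</objectTypeName>"
        f"<name>{name}</name>"
        "</securityTag>"
    )

def create_security_tag(name: str, user: str, password: str):
    # POST /api/2.0/services/securitytags/tag creates the tag (NSX-v 6.x);
    # the response body contains the new tag's ID.
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(
        f"{NSX_MANAGER}/api/2.0/services/securitytags/tag",
        data=build_tag_payload(name).encode(),
        headers={"Content-Type": "application/xml",
                 "Authorization": f"Basic {token}"},
        method="POST",
    )
    return urllib.request.urlopen(req)

# Usage against a live NSX Manager (placeholder credentials):
#   for tag in ("HEC_DB", "HEC_Web"):
#       create_security_tag(tag, "admin", "changeme")
```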

 

OK, I created the tags "HEC_DB" and "HEC_Web" and am ready for action. These tags are now usable on VMs for advanced processing.

Also, I created two security groups:

  • DbServer
  • WebServer

To create those, go to Networking and Security and click on Service Composer in the left-hand menu.
These security groups are later used to apply the firewall rules to. The Tags will be used to assign the VMs to their respective security group (DB VM to DbServer, WEB VM to WebServer) after the VM deployment.

 

[Screenshot: the DbServer and WebServer security groups in the NSX Service Composer]

This means you are now able to enforce firewall rules on VMs where you might not even know the IP address or subnet mask, simply by putting the VMs in NSX security groups.

Welcome, to the power of the Service Composer in NSX!

 

After the creation of Tags and Groups in NSX

After the security groups have been created we have to set up the rules of engagement, ahem, I mean, the rules for communication between the WEB server and the DB server. Since the WEB server is exposed to the internet, we do not want it chatting with the DB server as it wishes. Therefore the communication between these two servers (WEB to DB) has to be limited as much as possible in order to keep security high! These sophisticated firewall rules are set in so-called Security Policies.
We can create a new Security Policy by clicking on the Security Policies tab and selecting the Create Security Policy icon.
Now you can specify rules for interaction between Security Groups in NSX, or even from external sources (like the internet) to Security Groups.
In our case, we want the following rules to apply for a secure configuration:

  • The WEB server can access the DB server only to issue MySQL queries using specific MySQL ports
  • The internet can access the WEB server only via HTTP or HTTPS
  • All other actions from DB to WEB server are blocked
  • All other actions from WEB to DB server are blocked
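The four rules above can be expressed as a small default-deny model. The sketch below is purely illustrative (NSX enforces these rules in the Service Composer, not via code like this), and it assumes the standard MySQL port 3306 and HTTP/HTTPS ports 80 and 443 for the "specific ports" the rules mention:

```python
# The policy above as data: anything not explicitly allowed is denied.
RULES = [
    {"src": "WebServer", "dst": "DbServer",  "ports": {3306}},     # MySQL only
    {"src": "internet",  "dst": "WebServer", "ports": {80, 443}},  # HTTP/HTTPS
]

def allowed(src: str, dst: str, port: int) -> bool:
    """Default deny: a flow passes only if an explicit rule matches."""
    return any(r["src"] == src and r["dst"] == dst and port in r["ports"]
               for r in RULES)

print(allowed("WebServer", "DbServer", 3306))  # True: MySQL queries pass
print(allowed("internet", "DbServer", 3306))   # False: the DB is never exposed
print(allowed("WebServer", "DbServer", 22))    # False: only MySQL is allowed
```

Note that the rules reference security groups, not addresses, which is exactly what lets NSX enforce them before any VM has an IP assigned.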

[Screenshot: the firewall rules defined in the NSX Security Policy]

 

Voilà: that should be it. Now VMs in the DB security group will only allow VMs in the WEB security group access via the MySQL port; all other access is blocked. For the WEB servers we are even stricter: from the perimeter firewall (aka the internet), only HTTP and HTTPS will be let through to the WEB server. The only other server outside of the DMZ the WEB server can reach is the DB server, and that communication is only possible via the MySQL ports to initiate DB queries.

 

You might wonder how to enforce all of this without specifying a single subnet or IP address. Well, that is solved by the Security Tags: as soon as the VMs are assigned to the right policies in the Service Composer, the rules are enforced on them, automagically!

 

Create the blueprint

Assuming everything else is just fine and has been configured correctly, we can now start building the actual application. So let's get started with the design. Given that I have already created some installable components, so-called Application Blueprints, I can start dragging and dropping my way to a versatile multi-tier web application.

 

[Screenshot: the HEC blueprint design canvas with the DB and WEB servers]

 

I decided to have a DB server and a WEB server (shocking, isn't it?). In the design canvas I dragged the DB components, such as the MySQL installation as well as the FST_Industries_DB component, onto the DB server.

To do this, simply drag and drop the packages onto the VMs. The FST_Industries_DB component customises the DB to set up a tablespace and makes some other minor edits to prepare the DB server for use by the WEB server.
After doing that, I dragged the Apache, PHP, and FST_Industries_Web components onto the WEB server.

Besides installing all the software assets, FST_Industries_Web then creates an on-demand web site which accesses the DB server via its fully qualified domain name (FQDN). HEC will now install these packages on the specified VMs. It is important to know that all this data is passed on as dynamic variables during the install (IP addresses, domain names, DB names, etc.); otherwise it would be fairly complex to install anything on demand.

 

After the actual service design is done, we need to ensure that the VMs are tagged so they are automatically assigned to the respective security groups in NSX. To do so, you can drag the Tags directly into the canvas.

The Tags are shown in the picture right above each VM; a thin line represents their assignment to each of the VMs.

Just drop it somewhere; for the sake of a clean graphic I put one on top of each of the VMs. By clicking on the dragged-in security tag, the actual tag value can be assigned. You will see a list of possible NSX security tags; pick HEC_DB for one and HEC_Web for the other - done.

 

If you have just finished creating the Security Tags in NSX, give HEC a moment to pick them up. If they are not showing up after 15 minutes, it might be necessary to re-run the network and security inventory data collection task. You can find it under Infrastructure -> Compute Resources -> (mouse over the vSphere resources) -> Data Collection. The Network and Security inventory is the second-to-last entry in the list. Select "Request Now" after creating the tags and wait for completion. After this they will show up in the design canvas.

Now the tags need to be formally assigned to each of the VMs. This is done by clicking on the VM in the canvas and selecting the Security tab. There you will see both tags available; just tick the one that applies:

  • HEC_DB if you selected the DB Server
  • HEC_Web if you selected the WEB server
  • Done!

 

You might wonder why both tags are always displayed in the security settings for the VM. This is because a VM can have multiple security tags: all tags dragged into the canvas will be shown. In our case it is important to make sure no tag is double-selected on a VM, as this might shake up our well-thought-through security concept (however, it is easy to spot and fix).

Last but not least, both VMs need to be placed in an NSX network. For the DB VM, this network ("virtual wire" in NSX slang) needs to be set as an internal, protected network, since other DB servers might run in there as well.

 

Defining the networks to use

For the WEB server, we want to create the DMZ on demand. That means this network is not pre-existent at the time of deployment.

To accomplish this, we need to define two different types of networks in HEC:

  • External
  • Routed

 

Do not get overexcited by the term "External": in this case it refers to all networks that pre-exist before the time of deploying a service. The "Routed" network is different; it is a purely logical construct which only comes to life at the time of deployment. It will be configured to form smaller networks in which to then place the newly created VMs.

Its configuration might therefore be a bit confusing at first. To configure the network profiles in HEC, go to Infrastructure -> Reservations -> Network Profiles and click New to select either External or Routed.

The External one has to be pre-existing, which means it has to be defined in NSX before it can be added to HEC.

 

This means you have to create a new virtual wire in NSX prior to the selection in HEC.

The Routed one is more difficult, which is why I think it is worth going over its options quickly. In the form you will see the following fields:

 

Provide a valid name: DMZ_OnDemand

Description: DMZ network, created on demand each time for every deployment

External Network profile: Transport*

Subnet mask: 255.255.192.0**

Range subnet mask: 255.255.255.240***

Base IP: 172.30.50.1

 

OK, here we are in networking nirvana. What does all this mean? Let me explain the "*" footnotes real quick:

*: The transport network for your DLR. This is configured during NSX setup for external network access. Describing how to do this would be too much detail for this blog post. In our case it is named "Transport", but you can also name it Bob, Jon, or Fritzifratzi if that works better for your use case.

 

 

**: This is the subnet mask defining how many devices we want to put into the micro DMZs. In this case it is a /18 subnet mask, which gives us "only" 16,382 addresses. You could also go for a /16, which would give you 65,534, or a /14 for a whopping 262,142 addresses. But be careful: all these addresses are pre-calculated by HEC, which can be quite CPU-intense if you choose big ranges.

 

***: The subnet mask for the different small network areas. Basically it creates the "micro" networks: within the given subnet mask (255.255.192.0), it uses the /28 subnet mask (255.255.255.240) to create nets with 14 usable addresses each.

This means HEC will now go ahead and create as many small subnets as possible within the provided big /18 (255.255.192.0) subnet. In my case it will create network chunks looking like this:

  • 172.30.50.1 - 172.30.50.14 (usable addresses)
  • 172.30.50.17 - 172.30.50.30
  • ...
  • 172.30.63.225 - 172.30.63.238
  • 172.30.63.241 - 172.30.63.254

 

Now you might wonder why there are small gaps between these address spaces. That is because only the 14 usable addresses are shown. For example, the first address is 172.30.50.1, the network address would be 172.30.50.0, and the broadcast address would be 172.30.50.15. So the entire network is actually 172.30.50.0 - 172.30.50.15. But given how networks work, the network address and the broadcast address can't be used for servers, leaving a total of 14 usable addresses. It is important to understand that principle in order to make the network chunks big enough for the number of servers to be in them.
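The slicing above can be reproduced with Python's standard ipaddress module. A small sketch, assuming the /18 containing the base IP 172.30.50.1 is 172.30.0.0/18 and that HEC only uses chunks from the base IP upwards (which matches the ranges listed):

```python
from ipaddress import ip_address, ip_network

# The /18 profile network (mask 255.255.192.0) sliced into /28 "micro DMZ"
# chunks (mask 255.255.255.240), keeping only chunks at or above the base IP.
profile = ip_network("172.30.0.0/18")
base_ip = ip_address("172.30.50.1")

chunks = [net for net in profile.subnets(new_prefix=28)
          if net.broadcast_address >= base_ip]

for net in chunks[:2] + chunks[-2:]:
    hosts = list(net.hosts())  # excludes network and broadcast addresses
    print(f"{hosts[0]} - {hosts[-1]} ({len(hosts)} usable)")
```

Printing the first and last two chunks reproduces exactly the ranges listed above, 14 usable addresses per chunk, with the network and broadcast addresses accounting for the gaps.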

 

If all these network calculations, slicing and subnetting are creating the father of all headaches, don't give up! There are quite a few nice websites which do all the calculations mentioned here for you. One of these sites can be found here:

IP Calculator / IP Subnetting

 

What have we achieved so far

Good, after all this hard work of clicking and brain-twisting network mask calculations, the setup is finally done.

We configured security tags and automatically assigned them to the right VMs. Firewall rules ensure that only allowed protocols can pass from one security group to another.

The VMs and their software get installed by HEC. Once the tags are assigned and the VMs are installed, one is placed in a static network and the other in a routed network. The routed network is sliced by a subnet algorithm to allow only 14 devices, so each WEB server will have its own DMZ.
After all that has been configured by HEC, the NSX security kicks in and our freshly deployed application will work as intended, letting only MySQL queries reach the DB server. Also, HTTP/HTTPS queries from the internet can only reach our WEB server running in its very own "private" DMZ. All of this is created for each and every new application being deployed.

 

To Summarize

Wow, after all this clicking and configuring and calculating, we have a quite comprehensive blueprint, not only setting up a full service with a single mouse click, but also providing enterprise-grade IT security for each and every deployment.

Not only through the firewall and security capabilities of NSX, but also through the flexible, purpose-built design of a micro DMZ per WEB server per service. This is an achievement which would be fairly difficult to reach without the capable technologies introduced by HEC.

 

If you want to see all this running, stay tuned for the next article in this series showing all of this working in our HEC Solution Centre environment which is located in the Netherlands in a wonderful small town called Zaltbommel...

Right, if Francois Zimmermann is in no mood for sharing his Heinz baked beans with the imminent threat of Doomsday, then fine, I will get my own tin of beans and get on with my survival strategy. This, you may recall from my previous blog, is about creating a multi-tier application blueprint using NSX with Hitachi Enterprise Cloud (HEC), which plays an important role in securing your data from everyday hackers, let alone those in a Doomsday threat. This is where I get into the technical detail, especially around micro-segmentation. More of that later.

 

Technical Alert!! If you are interested in a detailed explanation, go to the "techies" part of this blog. If not, read on for the high-level summary...

The “Micro” in Micro Segmentation

To create a more secure environment than a traditional DMZ, we would have to change the DMZ from a traditional, monolithic, predefined structure into one that is more flexible, agile and dynamic.

Micro segmentation is a well-used term when it comes to network virtualization, but what does it actually mean? It stands for a way to permit or deny traffic from one instance to another, even if both are on the same network in the same IP subnet.

 

You can think of micro-segmentation as isolation of workloads on the same network. As an example, typical networks are like trains, you can move from one carriage to another carriage within the same train easily. Micro-segmentation is more like cars on a highway. All are driving on the same highway in the same direction (more or less), but changing from one car to another while moving is almost impossible.

 

If you think of a network with a given IP subnet as a segment, typically every server within that segment can talk to every other. So traditionally, you had to separate servers into different segments in order to prevent one server in segment A from talking to another server in segment B.

[Figure: network without micro-segmentation (MicroSegmentation_Off.png)]

Servers in the same segment, like Server A1 and Server A2, can directly "talk" to each other.

 

Now, in the software-defined world this is all "snow from yesterday" (sorry - a famous Austrian saying which means "old news"). With the new capabilities of dynamic firewalling and policy-based VM security profiles, we can achieve a similar outcome without putting the servers in two different networks.

In this case, the firewall acts as if it sits right between the two servers, allowing only specific protocols to reach the peer server. In some cases, communication to any peer server can be blocked entirely, which is often used in desktop environments.

A micro segmented network might look like this:

[Figure: micro-segmented network (MicroSegmentation_On.png)]

In this case, Server A is only allowed to talk to Server B through a firewall, using specific ports to communicate. All other direct communication is prohibited. This makes management easier, since you can add a security layer even if servers are running in the same network. The big benefit is that this security layer can be managed centrally and applied on demand to any group of servers.

 

So what does all of this mean for our “Micro DMZ” project? Everything!

 

The first step is to set up one DMZ per service. A service might be any WEB server and DB server pair or similar. In a traditional datacentre you might use a static DMZ, place the WEB server there, and then place the DB server in an internal network. But as described in part 1 of my blog series, there might be a more secure way of doing that.

And this is where “micro” comes into play. Instead of creating a big DMZ housing everything exposed to the internet, we are creating many small DMZs. One for each service. The service itself does not need to know anything about that, since the software defined infrastructure takes care of setting all the rules and routes in order to work properly.

Tip: If you want to see all the techy details and want to get a crash course in subnet calculations (what was /24 again?) visit this technical part of this blog

Now, when a new service is rolled out, it gets its very own DMZ and firewall rules. With the use of micro segmentation within the DMZ, web servers cannot talk to each other, but they can talk to their DB server peers. This makes the DMZ itself more secure. Also, since each service has its own DMZ, a security breach will never affect other services; indeed, it might very well affect only the web server experiencing the security flaw.

 

With this technology, you can limit the impact of a security breach from catastrophic to, at worst, slightly annoying.

 

So are we now in Lock Down?

In a Doomsday scenario, instead of the rebels rushing into my shelter and stealing and breaking my stuff, they just get a glimpse of my security fence. If they manage to break through that, they see…

 

…wait for it…

 

Another security fence

 

The use of multiple DMZs and micro segmentation within those DMZs enhances the security layer significantly. Everything is managed from a central instance, so no micro management (pun intended) of the micro segmentation is needed. Once we run through the technical part of this configuration and finish all of the step-by-step configuration items, we are nearly done reaching the final solution. If the configuration of the blueprint is completed successfully, everything should automatically unfold in our Hitachi Enterprise Cloud solution, again saving us a ton of time and effort for every new deployment of a service. Also, with every additional new service deployment the security is enhanced, not diminished!

 

Meanwhile, I’ve worked up an appetite so I need to crack into my stock of baked beans whilst the tests run. I’ll be back later with the results and take you through some seriously deep dive technical actions which makes the magic unfold and finally get us into secure lock down.

 

I wonder how my buddy, ole Mr "Get your own beans", is getting on with his NSX shelter? Is it secure enough to protect his services from the latest ransomware madness?

Which leaves me to ask, "What are you doing to enhance your security for your new or existing services?"