
Hu's Place


Cloud adoption.png

Surveys by Gartner, IDG, and RightScale in 2018 leave no doubt that cloud adoption is mainstream. Public cloud adoption led the way, increasing to 92% in the RightScale survey. The same survey showed that 81% of respondents have a multi-cloud strategy, leveraging almost five clouds on average, and 51% have a hybrid cloud strategy, combining public and private clouds.

 

Private cloud adoption reached 75% of respondents. While respondents overall ran 40% of their workloads in public cloud and 39% in private cloud, enterprise customers ran fewer workloads in the public cloud (32%) and more in private clouds (45%), which reflects their concern with the security and safety of critical workloads. Among private cloud platforms, VMware vSphere led with 50% adoption, followed by OpenStack with 24%.

 

Hybrid and Multi-cloud Benefits

Hybrid and multi-cloud strategies offer flexibility, scalability, and agility by providing the freedom to choose where and how to deploy workloads, without the complexity and delays of acquiring and deploying infrastructure and operations resources. Applications can burst out into a public cloud during peak periods. Hybrid and multi-cloud also provide flexible data management, governance, compliance, availability, and durability. They eliminate upfront capital costs and avoid the risk of infrastructure vendor lock-in. Another aspect of agility is self-service resources, which enable a DevOps culture to run dev/test workloads in the cloud. Another major benefit of public clouds is the ability to geographically distribute apps and services, especially as more applications gravitate to the edge.

 

Hybrid and Multi-cloud Challenges

A cloud is a computing model where IT services are provisioned and managed over the Internet in the case of public clouds, or over private IT infrastructure in the case of private clouds. Selecting an application and merely moving it to a cloud provider is typically a poor decision: the application needs to be designed and built to take advantage of a cloud environment, or it is likely to become more problematic. Public cloud providers typically develop highly specialized tools for monitoring, orchestration, cost management, security, and more to suit the capabilities of their services. However, these tools may not map over to other clouds. In the case of hybrid and multi-cloud, we are mixing multiple clouds, which increases operational and data management complexity. Operational policies and methods differ, and aggregating data across multiple cloud boundaries complicates governance, analytics, and business intelligence. When you have petabytes of data in one cloud, how long will it take you to switch to another cloud?
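To make that data gravity problem concrete, here is a rough back-of-the-envelope calculation (the link speed and efficiency figures are illustrative assumptions, not measurements from any provider): moving a single petabyte over a 10 Gb/s link at 70% efficiency takes roughly two weeks of continuous transfer, before any egress fees are even considered.

```python
# Rough estimate of bulk data transfer time between clouds.
# All inputs are illustrative assumptions, not measured or vendor-quoted values.

def transfer_days(data_pb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Days needed to move data_pb petabytes over a link_gbps link at the given efficiency."""
    data_bits = data_pb * 1e15 * 8                 # petabytes -> bits
    effective_bps = link_gbps * 1e9 * efficiency   # usable bits per second
    return data_bits / effective_bps / 86400       # seconds -> days

print(f"{transfer_days(1, 10):.1f} days")   # ~13 days for 1 PB at 10 Gb/s
print(f"{transfer_days(5, 10):.1f} days")   # ~66 days for 5 PB
```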

 

While the major cloud companies have security measures in place that probably exceed what most private companies can provide, they present a very visible target, and we must assume that nothing is foolproof. Security remains a key concern for critical applications, especially when it comes to public clouds.

 

Cloud Changes in 2019

While lift-and-shift application migrations to clouds will continue in 2019, more applications will be modernized to take advantage of the new capabilities of containers, serverless, FPGAs, and other forms of computing. Competition between the leading cloud providers will increase, resulting in more services for infrastructure as a service, integrations and open source, analytics, compliance, and hybrid cloud capabilities. With Microsoft now owning GitHub, open source is becoming the model for developing new technologies, and cloud vendors will become more open to developer communities. Hybrid cloud is becoming a battleground, with AWS Outposts delivering an on-premises server rack that brings the AWS cloud into the data center, and IBM acquiring Red Hat to increase its relevance in the data center.

 

All these changes will require new skill sets in migrating, modernizing, and managing new cloud deployments. Cloud providers realize that customers need help migrating to and implementing cloud solutions, so they have carefully qualified services partners that customers can trust for support or managed services.

 

Hitachi Vantara Hybrid and Multi-cloud capabilities

Hitachi Vantara provides support for private, hybrid, public, and multi-cloud deployments. There are three major areas of support:

  1. Cloud gateway for block, file, and object storage with HNAS and HCP.
    HNAS provides a transparent data migrator for block and file data to private and public clouds and integrates with the HCP object store.

    HCP, the Hitachi Content Platform, lets users securely move data to, from, and among multiple cloud services, and better manage use cases including data governance, IoT, and big data. Read IDC's 2018 MarketScape report on HCP and see how HCP addresses security, multi-cloud, and performance. Data can be moved from one cloud to another without the additional cost of reading from one cloud in order to write to another. Since HCP always creates two copies, a new copy can be created in a new cloud repository while the old repository is crypto-shredded.


HCP also provides out-of-the-box integration with the Pentaho data integration and analytics platform, enabling the use of Pentaho to ingest, integrate, cleanse, and prepare data stored in HCP-based data lake environments for analytics and visualization. While there is tight integration between Pentaho and HCP, the combination can also support an abstracted data architecture enabled by the separation of compute and storage. Specifically, while the data might reside in HCP, Pentaho's adaptive execution functionality enables users to choose their preferred execution engine (such as the Apache Spark in-memory processing engine or Pentaho's open source Kettle ETL engine) at runtime. This functionality can be used to execute data processing in multiple cloud environments. The vehicle history information provider CARFAX is doing just that: deploying HCP to combine structured and unstructured data in a single environment, cleansing and preparing it with Pentaho, and then sending it to AWS or Microsoft Azure for processing, as appropriate for a given application. Read the 451 Research report on this capability.

 

HCP.png

2. On-Premises Cloud Deployment with Hitachi Enterprise Cloud (HEC).

The Hitachi Enterprise Cloud portfolio of on-premises enterprise cloud managed services is pre-engineered and fully managed to achieve faster time to value, with guaranteed business outcomes and service levels for mission-critical applications and workloads, whether they run within a traditional IT infrastructure, a DevOps architecture, a microservices architecture, or some combination. Hitachi Enterprise Cloud integrates implementation, deployment services, and cloud-optimized software and infrastructure to deliver rapid business value. The first solution of its kind, it also offers the ability to add container capabilities to support both traditional virtualized environments and born-on-the-web applications.

 

HEC.png

 

3. Accredited and Certified REAN Cloud Services.
REAN Cloud has expertise working with the hyperscale public clouds. It is a Premier Consulting Partner in the Amazon Web Services (AWS) Partner Network (APN) and a Microsoft Azure Silver Partner. REAN Cloud offers managed services and solutions for hyperscale-integrated IaaS and PaaS providers and is one of the few systems integrators capable of supporting the entire cloud services life cycle. Backed by extensive security DNA and deep compliance IP and expertise, REAN Cloud specializes in helping enterprise customers that operate in highly regulated environments – financial services, healthcare/life sciences, education, and the public sector – accelerate their cloud investments while extracting maximum value from use of the cloud itself.

 

REAN.png

Last year REAN Cloud acquired 47Lining to provide deep capabilities in cloud-based analytics and machine learning that expand Hitachi Vantara's ability to maximize data-driven value for vertical IoT solutions. This April, 47Lining announced its Amazon Web Services (AWS) Industrial Time Series Data Connector Quick Start. The Connector Quick Start allows companies to quickly and easily synchronize their industrial time series data to AWS so they can perform advanced predictive and historic analytics using the full breadth of AWS big data and partner services.

 

 

Summary

Hitachi Vantara recognizes the value of private, hybrid and public clouds and provides the tools and services to enable our customers to choose the right combination of cloud solutions for their specific requirements. Cloud is much more than just using a network of remote servers hosted on the internet to store, manage, and process data. Cloud is really about methodology, automation, financial models, software development approaches, and more.

 

We've known that for years as we have provided storage, converged, hyperconverged, and other infrastructure solutions to our customers deploying cloud on-premises for private clouds. Private clouds are not simply existing data centers running virtualized, legacy workloads. They require highly modernized digital application and service environments running on true cloud platforms like Hitachi Enterprise Cloud.

 

With the introduction of the Hitachi Enterprise Cloud (HEC) as-a-service offering a few years back, and the recent Smart Data Center initiative, we are building out the components to support private cloud and connectivity to public clouds for hybrid cloud deployments. Hybrid clouds must bond private and public clouds together through fundamental technology that enables the transfer of data and applications.

 

With the introduction of REAN, we have a public cloud portfolio to complement our existing services. This allows us to step away from the limiting descriptions of "private", "public", and "hybrid". Many, if not most, of our customers expect to manage an incredibly diversified environment based on intellectual property, latency, security, and integration needs. The main point is that we offer customers a range of capabilities that very few can match. The big three public cloud providers (AWS, Azure, and Google Cloud) don't provide the infrastructure for private cloud. Most infrastructure companies do not have solutions for the public cloud and make the argument for private clouds to protect their product portfolios. The global SIs, like Accenture, don't provide the hardware and software that we do. As customers look for a partner that can help with the massive diversity of multiple cloud environments, we are the only partner that has the breadth of capabilities they are seeking.

From our friends at Merriam-Webster, the definition for “continuous” is:

adjective

con·tin·u·ous | \kən-ˈtin-yü-əs\

  1. Marked by uninterrupted extension in space, time, or sequence: "The batteries provide enough power for up to five hours of continuous use."
synonyms: continual, uninterrupted, unbroken, constant, ceaseless, incessant, steady, sustained, solid, continuing, ongoing, unceasing, without a break, permanent, nonstop, round-the-clock, always-on, persistent, unremitting, relentless, unrelenting, unabating, unrelieved, without respite, endless, unending, never-ending, perpetual, without end, everlasting, eternal, interminable

 

 

Understanding the definition of "continuous" is key because in this blog we will discuss "Continuous Business Operations" as it relates to those of you who must have continuous, uninterrupted access to data. You are running business applications that require strict zero RTO (recovery time objective) and zero RPO (recovery point objective) service levels, because these applications are mission critical and users *must* be able to access critical database information despite a data center failure due to catastrophic or natural causes. What types of businesses and services demand such levels of uptime? Think emergency call center services and medical/urgent care, and the impact an outage can have on the customers of those services – you, me, and our families. These are "life-critical" operations where the consequence of downtime could be death.

 

Huyhn post.png


When operations are running on a database, two things can interrupt continuous operations: loss of a server or loss of the database. Oracle Real Application Clusters (RAC) provides customers with a clustered server environment where the database is shared across a pool of servers, which means that if any server in the pool fails, the database continues to run on the surviving servers. Oracle RAC not only enables customers to continue processing database workloads in the event of a server failure, it also helps to further reduce the cost of downtime by reducing the amount of time databases are taken offline for planned maintenance operations.

 

But what if the database storage fails? That will take some time to recover from, unless the database is running on a virtual storage array that is supported by two physical storage arrays. With a virtual storage array, the database can continue to run even when one storage array fails or is taken down for maintenance. This is the capability provided by the global-active device (GAD) feature of our VSP storage arrays. The combination of Oracle RAC and Hitachi GAD provides true continuous business operations.

 

Configuring, implementing, and managing such a system, with servers, switches, storage, Oracle RAC, and GAD, can take expertise, time, and effort. Hitachi Vantara can simplify this with a converged system specifically designed and validated for Oracle Real Application Clusters (RAC) databases and VSP GAD storage arrays. Our "Hitachi Solution for Databases – Continuous Business Operations" has three core components that give your business uninterrupted access to Oracle databases:

 

Huyhn slide.png

  1. Hitachi Unified Compute Platform, Converged Infrastructure (UCP CI) – this is the core infrastructure that Oracle RAC and associated software run on
    • Hitachi Advanced Server DS220/DS120
    • Hitachi Virtual Storage Platform F and G series
    • Brocade G620 Fibre Channel Switches
    • Cisco Nexus GbE Switches
  2. Hitachi Global-Active Device (GAD) for dual geographic site synchronous replication, and configuring Oracle RAC for extended distances
  3. Hitachi Data Instance Director – used for orchestration and simplified setup and management of Hitachi global-active devices.


“Hitachi Solution for Databases – Continuous Business Operations” topology diagram follows:

Huyhn diagram.png
Hitachi Data Instance Director (HDID) provides automatic setup and management of global-active device, avoiding hours of tedious manual work. It also handles the swap operation when a failure at one of the sites is detected. HDID can also interface with the Oracle instances to orchestrate non-disruptive, application-consistent snapshots and clones, in either or both of the sites, to enable point-in-time operational recovery following a database corruption or malware event. Note in the above diagram that the "quorum site" is a dedicated GAD cluster and management device shared between both sites to provide traffic management and ensure data consistency.

 

Find out more about how Hitachi Vantara can help your company achieve continuous business operations by joining our upcoming webinar on February 20th at 9:00 am PT/12:00 pm ET. Experts from Broadcom, ESG, and my colleague Tony Huynh, who helped me with this post, will share their insights with you:

webinar.png

Register for our February 20th webinar here: https://www.brighttalk.com/webcast/12821/348002?mkt_tok=eyJpIjoiTnpNek9XWTBPVE14WTJNeCIsInQiOiI5bk0yVmViMjFITDdWNk12NlcybkpwQ0dMOXRHQTNEOHVtUHFzUk1GK3VteTROOFVEXC81eFZ2MFwvTWxqUHhvbjVTOXpjdTZYXC9zNkNERVVFZjJCNml4UT09In0%3D

Additional technical resources:

  1. Full reference architecture details
  2. ESG Lab Validation report
  3. What is Global-Active Device? – Video
Hu Yoshida

Chúc mừng năm mới (Happy New Year)

Posted by Hu Yoshida, Feb 5, 2019

This morning I saw four wild pigs in my front yard. My dogs were going crazy, but they opted to stay inside. Wild pigs are not a common sight in my suburban neighborhood. I took this as an optimistic sign since this is the end of the dog year and the beginning of the year of the pig.

 

pig.png

 

According to Chinese astrology, 2019 is a great year for good fortune and a good year to invest! 2019 is going to be full of joy, a year of friendship and love for all the zodiac signs; an auspicious year because the Pig attracts success in all the spheres of life.

 

Wishing you all the best for the year of the pig and enjoy a year of friendship and love.

Lately I have been focused on the operational aspects of data, how to prepare data for business outcomes, and less on the infrastructure that supports that data. As in all things, there needs to be a balance, so I am reviewing some of the innovations we have made in our infrastructure portfolio that contribute to operational excellence. Today I will be covering the advances we have made in the area of hybrid-core architecture and its application to network attached storage. This hybrid-core architecture is a unique approach which we believe will position us for the future, not only for NAS but for the future of compute in general.

 

FPGA Candy.png

The Need for New Compute Architectures

The growth in performance of non-volatile memory technologies such as storage-class memories, and the growing demand for intensive compute for graphics processing, analytics/machine learning, cryptocurrencies, and edge processing, are starting to exceed the performance capabilities of CPUs. CPUs are based on the Von Neumann architecture, where processor and memory sit on opposite sides of a slow bus. If you want to compute something, you have to move inputs across the bus to the processor, then store the outputs back to memory when the computation completes. Your throughput is limited by the speed of the memory bus. While processor speeds have increased significantly, memory improvements have mostly been in density rather than transfer rates. As processor speeds have increased, an increasing amount of processor time is spent idling, waiting for data to be fetched from memory. This is referred to as the Von Neumann bottleneck.

 

Field Programmable Gate Arrays

Hitachi has been working with combinations of different compute architectures to overcome this bottleneck for some time. One such architecture is the parallel state machine FPGA (field programmable gate array). Hitachi has been working with FPGA technology for years, investing thousands of man-hours in research and development and producing over 90 patents. A CPU is an instruction stream processor: it runs through the instructions in software to access data from memory and move, modify, or delete it in order to accomplish some task. FPGAs, by contrast, follow a reconfigurable-systems paradigm built around the idea of a data stream processor. Instead of fetching and processing instructions to operate on data, the data stream processor operates on data directly by means of a multidimensional network of configurable logic blocks (CLBs) connected via programmable interconnects. Each logic block computes a partial result as a function of the data received from its upstream neighbors, stores the result within itself, and passes it downstream. In a data-stream based system, execution of a program is not determined by instructions but by the transportation of data from one cell to another: as soon as a unit of data arrives at a cell, it is processed.
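As a purely conceptual sketch (this is not Hitachi's FPGA implementation, just an illustration of the two paradigms), the contrast can be shown in a few lines of Python: an instruction-stream loop fetches each value and applies every step in turn, while a dataflow pipeline chains small "cells" that each transform a unit of data as it arrives and pass it downstream.

```python
# Conceptual illustration only: instruction-stream processing versus a
# dataflow-style pipeline of chained "cells" (modeled here with generators).

def instruction_stream(values):
    """CPU-style: fetch each value, apply every step, store the result back."""
    results = []
    for v in values:          # fetch from "memory"
        v = v * 2             # step 1
        v = v + 3             # step 2
        results.append(v)     # store back
    return results

def cell(upstream, fn):
    """Dataflow-style cell: transforms each unit of data as soon as it arrives."""
    for v in upstream:
        yield fn(v)

def dataflow(values):
    """Chain cells so data streams through the 'fabric' one unit at a time."""
    stage1 = cell(iter(values), lambda v: v * 2)
    stage2 = cell(stage1, lambda v: v + 3)
    return list(stage2)

assert instruction_stream(range(5)) == dataflow(range(5))
```

In real FPGA hardware the cells all run concurrently, so new data can enter the fabric every clock cycle; the generator chain above only mimics the data-driven structure, not the parallelism.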

 

Today’s FPGAs are high-performance hardware components with their own memory, input/output buffers, and clock distribution - all embedded within the chip. In their core design and functionality, FPGAs are similar to ASICs (Application Specific Integrated Circuits) in that they are programmed to perform specific tasks at high speeds. With advances in design, today’s FPGAs can scale to handle millions of tasks per clock cycle, without sacrificing speed or reliability. This makes them ideally suited for lower level protocol handling, data movement and object handling. Unlike ASICs (that cannot be upgraded after leaving the factory), an FPGA is an integrated circuit that can be reprogrammed at will, enabling it to have the flexibility to perform new or updated tasks, support new protocols or resolve issues. It can be upgraded easily with a new firmware image in the same fashion as for switches or routers today.

 

Hitachi HNAS Incorporates FPGAs

At the heart of Hitachi's high performance NAS (HNAS) is a hybrid-core architecture of FPGAs and multicore Intel processors. HNAS has over 1 million logic blocks inside its primary FPGAs, giving it a peak processing capacity of about 125 trillion tasks per second – an order of magnitude more than the fastest general-purpose CPU. Because each of the logic blocks performs well-defined, repeatable tasks, performance is also very predictable. HNAS was introduced in 2011, and as new generations of FPGAs have increased logic block density, I/O channels, and clock speeds, increasingly powerful servers have been introduced.

 

FPGAs are not always a better choice than multi-core CPUs. CPUs are the best technology choice for advanced functions such as higher-level protocol processing and exception handling, functions that are not easily broken down into well-defined tasks. This makes them extremely flexible as a programming platform, but it comes with a tradeoff in reliable and predictable performance: as more processes compete for a share of the I/O channel into the CPU, performance is impacted.

 

HNAS Hybrid-Core Architecture

Hitachi has taken a Hybrid-core approach, combining a multi-core Intel processor with FPGAs to address the requirements of a high performance NAS system. One of the key advantages of using a hybrid-core architecture is the ability to optimize and separate data movement and management processes that would normally contend for system resources. The HNAS hybrid-core architecture allows for the widest applicability for changing workloads, data sets and access patterns. Some of the attributes include:

  • High degree of parallelism
    Parallelism is key to performance. While CPU-based systems can provide some degree of parallelism, such implementations require synchronization that limits scalability.
  • Off-loading
    Off-loading allows the core file system to independently process metadata and move data while the multi-core processor module is dedicated to data management. This provides another degree of parallelism.
  • Pipelining
    Pipelining is achieved when multiple instructions are simultaneously overlapped in execution. For a NAS system it means multiple file requests overlapping in execution.

Pipeline.png

Another advantage of the hybrid-core architecture is the ability to target functions to the most appropriate processing element for that task, and this aspect of the architecture takes full advantage of the innovations in multi-core processing. High-speed data movement is a highly repeatable task that is best executed in FPGAs, but higher level functions such as protocol session handling, packet decoding, and error / exception handling need a flexible processor to handle these computations quickly and efficiently. The unique hybrid-core architecture integrates these two processing elements seamlessly within the operating and file system structure, using dedicated core(s) within the CPU to work directly with the FPGA layers within the architecture. The remaining core(s) within the CPU are dedicated to system management processes, maintaining the separation between data movement and management. The hybrid core approach has enabled new programmable functions to be introduced and integrated with new innovations in virtualization, object store and clouds through the life of the HNAS product.

 

For us, it’s not just about a powerful hardware platform or the versatile Silicon file system; it’s about a unified system design that forms the foundation of the Hitachi storage solution. The HNAS 4000 integrally links its hardware and software together in design, features and performance to deliver a robust storage platform as the foundation of the Hitachi Family storage systems. On top of this foundation, the HNAS 4000 layers intelligent virtualization, data protection and data management features to deliver a flexible, scalable storage system.

 

The Basic Architecture of HNAS

The basic architecture of HNAS consists of a Management Board (MMB) and a Filesystem Board (MFB).

HNAS Architecture.png

File System Board (MFB)

The File System Board (MFB) is the core of the hardware-accelerated file system. It is responsible for core file system functionality such as object and free space management, directory tree management, and Ethernet and FC handling. It consists of four FPGAs connected by low-voltage differential signaling (LVDS) dedicated point-to-point Fastpath connections, to guarantee very high throughput for data reads and writes. Each FPGA has dedicated memory for processing and buffers, which eliminates memory contention between the FPGAs, unlike a shared memory pool in a CPU architecture.

  • The Network Interface FPGA is responsible for all Ethernet-based I/O functions
  • The Data Movement FPGA is responsible for all data and control traffic routing throughout the node, interfacing with all major processing elements within the node, including the MMB, as well as connecting to companion nodes within an HNAS cluster
  • The Disk Interface FPGA (DI) is responsible for connectivity to the backend storage system and for controlling how data is stored and spread across those physical devices
  • The Hitachi NAS Silicon File System FPGA (WFS) is responsible for the object-based file system structure, metadata management, and executing advanced features such as data management and data protection. It is the hardware file system in HNAS. By moving all fundamental file system tasks into the WFS FPGA, HNAS delivers high and predictable performance
  • The MFB coordinates with the MMB via a dedicated PCIe 2.0 8-lane bus path (500MB/s per lane, simultaneously in each direction)

Management Board (MMB)

The Management Board provides out-of-band data management and system management functions for the HNAS 4000. Depending on the HNAS model, the platform uses 4- to 8-core processors. Leveraging the flexibility of multi-core processing, the MMB serves a dual purpose. In support of the FPGAs on the File System Board, the MMB provides high-level data management and hosts the operating system within two or more dedicated CPU cores in a software stack known as BALI. The remaining cores of the CPU are set aside for Linux-based system management, monitoring processes, and application-level APIs. The MMB is responsible for:

  • System Management
  • Security and Authentication
  • NFS, CIFS, iSCSI, NDMP
  • OSI Layer 5, 6 & 7 Protocols

 

A Growing Industry Trend

The market for FPGAs has been heating up. Several years ago, Intel acquired Altera, one of the largest FPGA companies, for $16.7 billion. Intel, the world's largest chip company, has identified FPGAs as a mature and growing market and is embedding FPGAs into its chipsets. Today Intel offers a full range of SoC (system on chip) FPGA products spanning high-end, midrange, and low-end applications.

 

Microsoft announced that it has deployed FPGAs in more than half its servers. The chips have been put to use in a variety of first-party Microsoft services, and they're now starting to accelerate networking on the company's Azure cloud platform. Microsoft's deployment of the programmable hardware is important as the previously reliable increase in CPU speeds continues to slow down. FPGAs can provide an additional speed boost in processing power for the particular tasks that they've been configured to work on, cutting down on the time it takes to do things like manage the flow of network traffic or translate text.

 

Amazon now offers the EC2 F1 instance, which uses FPGAs to enable delivery of custom hardware accelerations. F1 instances are advertised as easy to program and come with everything you need to develop, simulate, debug, and compile your hardware acceleration code, including an FPGA Developer AMI (an Amazon Machine Image is a special type of virtual appliance used to create a virtual machine within Amazon Elastic Compute Cloud; it serves as the basic unit of deployment for services delivered using EC2) and support for hardware-level development on the cloud. Using F1 instances to deploy hardware accelerations can be useful in many applications to solve complex science, engineering, and business problems that require high bandwidth, enhanced networking, and very high compute capabilities.
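As a minimal sketch of how such an instance is requested programmatically (the AMI ID and key pair name below are placeholders, not real values; in practice you would use the current FPGA Developer AMI from the AWS Marketplace), launching an F1 instance with the AWS SDK for Python looks roughly like this:

```python
# Minimal sketch: launching an FPGA-equipped EC2 F1 instance with boto3.
# The AMI ID and key pair name are placeholders; substitute the current
# FPGA Developer AMI and your own key pair.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder for the FPGA Developer AMI
    InstanceType="f1.2xlarge",         # smallest F1 instance size
    KeyName="my-key-pair",             # placeholder key pair name
    MinCount=1,
    MaxCount=1,
)

print(response["Instances"][0]["InstanceId"])
```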

 

FPGA Developments in Hitachi Vantara

Hitachi Vantara, with its long experience with FPGAs and an extensive IP portfolio, is continuing several active and innovative FPGA development tracks along similar lines to those explored and implemented by Microsoft and Amazon.

 

Hitachi provides the VSP G400/600/800 with embedded FPGAs that tier data to our HCP object store or to Amazon AWS and Microsoft Azure cloud services. With this Data Migration to Cloud (DMT2C) feature, customers can significantly reduce CAPEX by tiering "cold" files from their primary Tier 1 VSP Hitachi flash storage to lower-cost HCP or public cloud services. Neil Salamack's blog post, Cloud Connected Flash – A Modern Recipe for Data Tiering, Cloud Mobility, and Analytics, explains the benefits that this provides.

 

Hitachi has demonstrated a functional prototype running with HNAS and VSP to capture finance data and report on things like currency market movements. Hitachi has also demonstrated the acceleration of Pentaho functions with FPGAs and presented FPGA-enabled Pentaho BA as a research topic at the Flash Memory Summit. Pentaho engineers have demonstrated 10 to 100 times faster analytics with much less space, far fewer resources, and at a fraction of the cost to deploy. FPGAs are very well suited for AI/ML implementations and excel in deep learning, where training iterative models may take hours or even days while consuming large amounts of electrical power.

 

Hitachi researchers are working on a software-defined FPGA accelerator that can use a common FPGA platform on which we can develop algorithms that are much more transferable across workloads. The benefit will be the acceleration of insights across many analytic opportunities and many different application types, and bringing things to market faster. In this way we hope to crunch those massive data repositories, deliver faster business outcomes, and solve social innovation problems. It also means that as we see data gravity pull more compute to the edge, we can vastly accelerate what we can do in edge devices with less physical hardware, because of the massive compute and focused resources that we can apply with FPGAs.

 

Hitachi has led the way and will continue to be a leader in providing FPGA-based innovative solutions.

Mr. Toshiaki Higashihara, president and CEO of Hitachi, Ltd., opened our annual NEXT event last September speaking on Data and Innovation. He talked about this in terms of Lights and Shadow, the rewards and risks of the digital age. “The light gives us opportunity,” he said. “And we should not discount the shadow of cyber-attacks, and security and data breaches.” He continued, “Hitachi is aligned with the greater demands we face today. Our mission is clear: to contribute to society through the development of superior, original technology and products.”

 

ligtht and shadow slide b.png

 

Mr. Higashihara's analogy of light and shadow touches on an increasing concern for all of us. The deadline for GDPR (General Data Protection Regulation) implementation in May of last year, along with a growing number of cyber hacks, fake news, and governance and risk management issues, has drawn more focus on corporate responsibility. The financial impact is already being felt, as the French data protection watchdog fined Google $57M under GDPR. I included this concern in my Top 5 Trends for 2019 blog post and discussed it with Shawn Rosemarin, our SVP and CTO of Global Field and Industry, in our Top 5 Trends webinar.

 

Shadow is a good analogy for risk. Shadows consist of two components: blockage of light and a surface to project on. The surface is still there, but the perception is altered. Last Sunday, in North America, we had a "Blood Moon," where the earth cast its shadow onto the surface of the moon in such a way that the moon appeared to us blood red.

Blood moon.png

 

Shadows come in many shades of darkness, and their outlines are often blurry. The object casting the shadow may be as clear-cut as a regulation, which leads to rules for compliance. However, simply being compliant may not keep us out of the shadows, especially if we are dealing with new technologies and new business models that may not have any precedent.

 

Compounding this is the globalization of business, where corporations operate in many different political and cultural environments. Corporations have also become more diversified and dependent on third parties as part of their supply chain, manufacturing, distribution, sales, and services. Contractors are also a vital part of a corporation's workforce as new skill sets are required. An acquisition's corporate practices and security measures must be vetted to ensure that the corporation is not acquiring new sources of risk as part of the acquisition. All of these factors can help a corporation be successful, but they also introduce risk to the corporation's security and the responsible use of its assets.

 

Corporate responsibility is also not just about satisfying regulators and stockholders; corporations are also being judged by social media. What we say and do, as well as what we don't say or do, is subject to review by a connected society with a tremendous amount of power at its fingertips. Our success as a company will also depend on how our conduct is perceived by that online society.

 

So how does a company meet the evolving challenges of corporate responsibility? At the risk of over-simplification, I believe that corporate responsibility must be based on some key elements:

 

Sustaining Corporate Principles. Since 1910, Hitachi's corporate philosophy has been based on Harmony, Sincerity, and Pioneering Spirit.

 

Corporate Leadership. The commitment to corporate responsibility starts at the top, as demonstrated by Mr. Higashihara and Brian Householder, our Hitachi Vantara CEO.

 

Corporate Culture. Shared values and beliefs that are rooted in the corporation's goals, strategies, organization, and approach to employees, customers, partners, investors, and the greater community. At Hitachi Vantara we work to a double bottom line: to deliver outcomes that benefit both business and society.

 

Clear Guidelines and Education. Last October was the deadline for Hitachi Vantara employees to complete certification on Data Privacy, Cyber Security, Ethics, and Avoiding Harassment. This was not a check mark exercise. This required 100% participation, which is almost impossible to enforce for 6000+ globally dispersed employees with different work and personal schedules. But 100% was accomplished through the direct involvement of the corporate executive committee.

 

Technology. The use of technology to not only secure and protect our data and systems but to also understand the data that we acquire so that we can treat it responsibly and monitor the workflow to guide its use.

 

Mr. Higashihara is looking beyond the concerns that I mention here, and his shadow extends to include digital divide and concerns about singularity.

 

Digital divide is a social issue referring to the gap in access to information between those who have internet access and those who do not, or who have only limited access.

 

Singularity is the concern that AI or super intelligence will continue to upgrade itself and advance technology at such a rate that humans would become obsolete. The physicist Stephen Hawking warned that the emergence of artificial intelligence could be the “worst event in the history of our civilization.” Can you teach a machine ethics, morality and integrity?

 

Mr. Higashihara closed his presentation at NEXT 2018, by reaffirming our commitment to Social Innovation.

 

Commiutment t SI.png

 

Plan to attend our NEXT 2019 event, October 8-10, 2019 at the MGM Grand, Las Vegas to hear from our leaders and experts and our customers and partners on how they are delivering new value to business and society with responsible innovation.

Storage Innovation.png

2018 was a very busy year for Hitachi Vantara. September marked the one-year anniversary of Hitachi Vantara, which was formed by the integration of three Hitachi companies: Hitachi Data Systems, an IT infrastructure systems and services company; Hitachi Pentaho, a data integration and analytics company; and the Hitachi Insight Group, developer of Lumada, Hitachi's commercially available IoT platform. The new company unifies the operations of these three companies into a single integrated business to capitalize on Hitachi's social innovation capability in both operational technologies (OT) and information technologies (IT).

 

When the formation of Hitachi Vantara was announced, it was clear that combining Hitachi’s broad expertise in OT (operational technology) with its proven IT product innovations and solutions, would give customers a powerful, collaborative partner, unlike any other company, to address the burgeoning IoT market.

 

For lack of similar capabilities, some of our competitors began implying that we would no longer be focused on the innovative data infrastructure, storage, and compute solutions that were the hallmark of Hitachi Data Systems. In fact, the closer collaboration of the data engineers and data analysts from Pentaho and the data scientists from the Insight Group with the proven software and hardware engineering skill of Hitachi Data Systems has given Hitachi Vantara even more talent and resources to drive innovation in storage systems.

 

During 2018 we proved our detractors wrong in many ways with the introduction of a new family of all-flash and hybrid VSP storage systems that scales from a low-end 2U form factor to the largest enterprise storage frame with the same high-end enterprise capabilities, including global active/active availability, non-disruptive migration, multi-data-center replication, and a 100% data availability guarantee. A common Storage Virtualization Operating System, SVOS RF, enables the "democratization" of storage services: a midrange user now has access to the same super-powerful features as the biggest banks, and a large company can deploy those same features in remote edge offices as well as in the core, using a lower-cost form factor.

 

A Common Architecture from Midrange to Mainframes

Analysts like Gartner acknowledge that sharing a common architecture and management tools from the smallest VSP G200 to the flagship VSP G1500 provides users with an easy-to-follow upgrade path that leverages their investments in training, policies, and procedures. It also leverages Hitachi's investments in infrastructure monitoring and management tools, as well as ecosystem certifications and plug-ins. In 2018, competitive storage vendors followed suit by announcing their intent to consolidate three to five disparate storage systems just to have a common storage system for the midrange. Since many of these storage systems were acquisitions, this consolidation effort will be a two- to five-year journey, with open questions about migration between the systems. Hitachi Vantara is ahead of the pack, delivering a common storage platform from small midrange to large enterprise and mainframe systems without any limitations in capabilities.

 

Open REST API

Another area of storage innovation was the introduction of AI and automation tools for the smart data center, enabled by an open REST API. A REST API is built directly into our VSP storage controllers; we increased the memory and CPU in the controller specifically to support the REST API running natively in the controller. This gives us the opportunity not only to connect with other vendors' management stacks, but also to apply analytics and machine learning and automate deployment of resources through REST APIs.

 

Analyze Infrastructure Metrics from Servers to Storage

Hitachi Vantara has developed an analytics tool, Hitachi Infrastructure Analytics Advisor (HIAA), that can provide predictive analytics by mining telemetry data from servers, storage appliances, networking systems, and virtual machines to optimize performance, troubleshoot issues, and forecast when a business may need to buy new storage systems. Based on an analysis of metrics from host servers to storage resources, the analytics tool can determine the right actions to take, then launch an automation tool to invoke the appropriate services to execute that action.

 

Automate the Delivery and Management of IT Resources

The automation tool, Hitachi Automation Director (HAD), contains a catalog of templates that can automatically orchestrate the delivery and management of IT resources. The analytics tool communicates with the automation tool, through a REST API, to select a template, fill in the parameters and request deployment of resources, which is done automatically. The APIs are open, and Hitachi Vantara provides a design studio and developer community site for customers and third parties to design their own templates to fit their own environment, operation policy and workflow. Our customers love the ability to integrate Automation Director with their ServiceNow tickets for speedier resolution of the client’s requests.

 

Enhance Flash performance, Capacity and Efficiency

The common Storage Virtualization Operating System, SVOS, has been greatly enhanced from previous versions of the operating system and renamed SVOS RF, where RF stands for Resilient Flash. SVOS RF's enhanced flash-aware I/O stack includes patented express I/O algorithms and new direct command transfer (DCT) functionality to streamline I/O. Combined, these features lower latency by up to 25% and increase IOPS per CPU core by up to 71%, accelerating even the most demanding workloads. Quality of service (QoS) makes sure workloads have predictable performance for better user experiences. User-selectable, adaptive data reduction with deduplication and compression can reduce capacity requirements by 5:1 or more, depending on the data set. A Total Efficiency Guarantee from Hitachi Vantara can help you deliver more storage from all-flash Hitachi Virtual Storage Platform F series (VSP F series) arrays, with data efficiency of up to 7:1.
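As a simple illustration of what these reduction ratios mean for purchasing (real results vary entirely with how reducible the data set is, so the numbers below are only the ratios quoted above), effective capacity is raw capacity multiplied by the reduction ratio:

```python
# Illustrative only: effective capacity at a given data-reduction ratio.
# Actual savings depend on the data set; the ratios are the figures quoted above.

def effective_capacity_tb(raw_tb: float, reduction_ratio: float) -> float:
    """Logical data that fits on raw_tb of physical flash at reduction_ratio:1."""
    return raw_tb * reduction_ratio

print(effective_capacity_tb(100, 5))   # 100 TB raw at 5:1 -> 500 TB effective
print(effective_capacity_tb(100, 7))   # 100 TB raw at 7:1 -> 700 TB effective
```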

 

Summary

If there was any doubt that Hitachi Vantara would continue to be a storage systems leader, 2018 should have put it to rest, and 2019 will provide even more proof points. Hitachi Vantara will continue to drive data center modernization with high-performance, integrated, cross-platform storage systems, AI-powered analytics, IT automation software, and the best in flash efficiency. While these storage announcements were made in May 2018, they did not make the cut-off dates for Gartner's 2018 Magic Quadrant or Critical Capabilities reports for Solid-State Arrays and General-Purpose Disk Arrays, so look for their evaluation in Gartner's 2019 reports. Even without an evaluation of these new capabilities, Hitachi Vantara did well in the 2018 Gartner reports and other industry recognition reports.

 

Industry Recognition

In this post I will take a deeper dive into one of the key enablers for digital transformation, the REST API. I will cover our strategy for utilizing it in our products and provide some examples of how it is used to enable the smart data center.

 

An application program interface (API) is software that allows applications to talk to each other. APIs have been an essential part of software development since the earliest days of programming. Today, modern, web-based, open APIs are connecting more than code. APIs are a key driver for digital transformation, where everything and everyone is connected. APIs support interoperability and design modularity and help improve the way systems and solutions exchange information, invoke business logic, and execute transactions.

 

API Picture.png

Hitachi's developers are reimagining core systems as microservices, building APIs using modern RESTful architectures, and taking advantage of robust, off-the-shelf API management platforms. REST stands for "representational state transfer." APIs built according to REST architectural standards are stateless, which means that neither the client nor the server needs to remember any previous state to satisfy a request. Stateless components can be freely redeployed if something fails, and they can scale to accommodate load changes. REST enables plain-text exchanges of data assets. It also makes it possible to inherit security policies from an underlying transport mechanism. REST APIs provide a simplified approach to deliver better performance and faster paths to develop, deploy, and organize. RESTful APIs are available in our Hitachi Content Platform, Pentaho analytics, Hitachi Unified Compute Platform converged, hyperconverged, and rack platforms, REAN cloud, and Lumada, our IoT platform.

 

A REST API is built directly into our VSP storage controllers. We increased the memory and CPU in the controller specifically to support the REST API running natively in the controller. This gives us the opportunity not only to connect with other vendors' management stacks, but also to apply analytics and machine learning and automate deployment of resources through REST APIs. Here are some examples of how this API strategy brings operational benefits to the smart data center.

 

Infrastructure Analytics

Hitachi Vantara has developed an analytics tool, Hitachi Infrastructure Analytics Advisor (HIAA), that can provide predictive analytics by mining telemetry data from servers, storage appliances, networking systems, and virtual machines to optimize performance, troubleshoot issues, and forecast when a business may need to buy new storage systems. There are 77 performance metrics that we can provide via REST API over IP connections. Based on an analysis of these metrics, the analytics tool can determine the right actions to take, then launch an automation tool to invoke the appropriate services to execute that action.

HIAA.png
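As a hedged illustration of what consuming such metrics looks like from a script (the host, port, endpoint path, and metric names below are hypothetical placeholders, not the documented HIAA API), polling performance data over a REST interface might look roughly like this:

```python
# Hypothetical sketch of polling performance metrics over a REST API.
# The host, port, path, credentials, and metric names are placeholders,
# not the documented Hitachi Infrastructure Analytics Advisor endpoints.
import requests

BASE_URL = "https://hiaa.example.com:8443/api/v1"   # placeholder address

def get_storage_metrics(resource_id: str) -> dict:
    """Fetch the latest performance metrics for one storage resource."""
    resp = requests.get(
        f"{BASE_URL}/resources/{resource_id}/metrics",
        params={"metrics": "iops,latency_ms,cache_hit_pct"},  # illustrative names
        auth=("monitor_user", "********"),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

print(get_storage_metrics("vsp-example-01"))
```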

Automation

The automation tool, Hitachi Automation Director (HAD), contains a catalog of templates that can automatically orchestrate the delivery and management of IT resources. The analytics tool communicates with the automation tool through a REST API to select a template, fill in the parameters, and request deployment of resources, which is done automatically. During execution, the automation tool may need to communicate with third-party switches, virtual machines, containers, or public clouds through their APIs. When one considers all the tedious steps required to request and deploy storage, networking, hypervisor, and application services for hundreds or even thousands of users, you can see how automation can reduce days of work down to minutes.

HAD.png

Hitachi Automation Director has a catalog of canned application templates which we are continuing to expand. This internal "app store" of packages includes Hitachi storage orchestration, provisioning, Flash Module Compression (FMC) optimization, creation of a virtual storage machine (VSM) that spans two physical storage systems for active/active availability, replication (2DC, 3DC, GAD), SAN zoning (Brocade BNA, Cisco DCNM), Oracle DB expansion, VMware Datastore life cycle management, and plugins/utilities: CMREST (Configuration Management REST API), JavaScript, OS, VM, OpenStack, AWS, etc.
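To make the flow concrete, here is a hedged sketch of what submitting one of these service templates with its parameters might look like from a client script (the endpoint path, template name, and parameter keys are hypothetical placeholders, not the documented Automation Director API):

```python
# Hypothetical sketch: submitting an automation service template with parameters.
# The endpoint, credentials, template name, and parameter keys are placeholders,
# not the documented Hitachi Automation Director API.
import requests

BASE_URL = "https://automation.example.com/api/v1"   # placeholder address

def submit_provisioning_request(capacity_gb: int, host: str) -> str:
    """Ask the automation service to run a provisioning template; return a task ID."""
    payload = {
        "template": "provision-block-storage",                  # illustrative name
        "parameters": {"capacity_gb": capacity_gb, "host": host, "pool": "gold"},
    }
    resp = requests.post(
        f"{BASE_URL}/tasks",
        json=payload,
        auth=("automation_user", "********"),
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["task_id"]

print(submit_provisioning_request(500, "oracle-rac-node1"))
```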

 

Policy based Copy Management

Since most data is backed up and copied, a copy data management platform is available to simplify creating copies and managing policy-based workflows that support business functions with controlled copies of data. Hitachi Vantara provides Hitachi Data Instance Director, which the automation tool can invoke through REST APIs to deploy the copy workload and to set up and enforce data protection SLA policies.

HDID.png

IT Service Management

Hitachi Automation Director's REST API is open and available for working with third-party resources. Enhancements to the software include integration with IT service management (ITSM) tools, including the ServiceNow platform, for better resource tracking and improved REST API integration with third-party resources. Hitachi Automation Director creates workflows in ServiceNow; approval can be administrator-driven or driven by Automation Director. Automation Director then executes the changes and updates the ticket.

Service Now.png

Third Party and Home Grown Services

Hitachi Automation Director encourages working with third-party services by providing a design studio and a developer community site. Service Builder is the design studio where users have the flexibility to create their own service templates to fit their own environment, operation policy, and workflow. They can also leverage third-party or home-grown tools.

 

Service Builder.png

Hitachi Vantara has launched the Hitachi Automation Director (HAD) Developer Community site.

It is available to external users. Here Hitachi Vantara shares sample service templates (more than 30 content packs), prototypes, guidance on how to use Hitachi Automation Director, Q&A, and more, and collaborates with customers and partners to develop additional content.

 

Call Home Monitoring

Other uses of REST APIs include our call-home monitoring system, Hi-Track, which has been re-coded to use our native REST APIs to collect information about the storage systems and report it back to our support teams. Hi-Track provides 24/7 monitoring for early alerting and insight to help your storage system run at peak efficiency. Only authorized Hitachi Vantara support specialists may establish a connection with your site, and only by using the Hitachi Vantara internal network. Secure access with encryption and authentication keeps error and configuration information tightly controlled, and your production data can never be accessed.

 

Container Plug-In

We have a Hitachi Storage Plug-in for Containers that integrates our storage with Docker, and thereby with Kubernetes and Docker Swarm. This plug-in is built on the same REST API, which is also available for customers to integrate with. The plug-in retains the state of the storage as containers are spun up and down; without it, the storage for a container would disappear when the container goes away.
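As a minimal sketch of how persistent storage is typically requested in Kubernetes (the storage class name below is a placeholder, not the plug-in's actual class name; the real name is whatever the installed plug-in registers), a stateful workload claims a volume that outlives any individual container:

```python
# Minimal sketch: requesting persistent storage in Kubernetes with the official
# Python client. The storage class name is a placeholder for whatever class the
# installed storage plug-in registers.
from kubernetes import client, config

config.load_kube_config()        # use the current kubeconfig context
core = client.CoreV1Api()

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "hitachi-block",          # placeholder class name
        "resources": {"requests": {"storage": "20Gi"}},
    },
}

core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```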

 

VSP Configurator

The VSP storage configuration tool, Hitachi Storage Advisor, can be accessed through software on an external virtual or physical server via the REST API.

 

Summary

The use of REST APIs is key to the integration of infrastructure, software, and analytics to create an intelligent data center. This is a summary of the primary benefits of our API strategy for an intelligent data center.

 

A REST API built directly into our VSP controllers provides connections with other vendors' management stacks and enables the application of analytics and machine learning for automated deployment of resources.

 

An analytics tool, Hitachi Infrastructure Analytics Advisor, can provide predictive analytics by mining telemetry data from servers, storage appliances, networking systems and virtual machines to optimize performance, troubleshoot issues and forecast when a business may need to buy new storage systems.

 

An automation tool, Hitachi Automation Director, provides a catalog of templates that can automatically orchestrate the delivery and management of IT resources.

 

A copy data management platform, Hitachi Data Instance Director, can be invoked by the automation tool to simplify creating copies and managing policy-based workflows that support business functions with controlled copies of data.

 

Hitachi Automation Director’s REST API is open and available for working with third party resources like IT service management (ITSM) tools, including the ServiceNow platform, for better resource tracking and improved integration with third-party resources.

 

Hitachi Automation Director encourages working with third party services by providing a design studio, and a developer community site where users have the flexibility to create their own service template to fit their own environment, operation policy and workflow.

 

Other uses include call-home monitoring, container plug-ins, and VSP configuration management from external systems. The list of plug-ins, utilities, and extensions will grow as the digital data center ecosystem grows.

 

Customer Benefits

  • Reduce workloads from days to minutes
  • Reduce errors resulting from tedious manual work
  • Reduce the need for skilled IT staff
  • Optimize use of IT resources
  • Increase speed of resolution of customer requests
  • Effective outage management with quicker return to service
  • Customize to fit their specific environment
  • Improve forecasting of future resource requirements

 

Nathan Moffit, Hitachi Vantara senior director of infrastructure, sums up our API strategy as follows:

“Hitachi’s management strategy is based around the idea of a shared and open API architecture that allows us to simplify transmission of data across our suite of management tools & 3rd-party tools. Everything we have is API-based so that we can draw information in from other sources to create a more intelligent solution, but we can also pass information out if we’re not the master in the environment, so we can make other things smarter. In addition, our goal is to enrich our ‘library’ of 3rd-party device API information so that we can capture analytics from a broad range of devices & interact with them. We are taking a very vendor-neutral approach as we recognize that there is a much broader opportunity to deliver better solutions if we integrate with more vendors and partners.”

Hu Yoshida

Five Trends for 2019

Posted by Hu Yoshida, Jan 3, 2019

Happy New Year and welcome to 2019, a year full of possibilities.

 

New Years 2019.png

 

2018 was a year of maturity for digital transformation, and most companies are committed to transforming their businesses. They have laid out their strategies and are allocating resources to this transformation. Public cloud, agile methodologies and DevOps, RESTful APIs, containers, analytics, and machine learning are being adopted. Against this backdrop, there are five trends for 2019 that I would like to call out.

 

Trend 1. Companies Will Shift from Data Generating to Data Powered Organizations

 

A 2017 Harvard Business Review article on data strategy noted, "Cross-industry studies show that on average, less than half of an organization's structured data is actively used in making decisions—and less than 1% of its unstructured data is analyzed or used at all." Deployments of large data hubs have only resulted in more data silos that are not easily understood, related, or shared. In order to utilize the wealth of data they already have, companies will be looking for solutions that give comprehensive access to data from many sources. Data curation will be a focus, to understand the meaning of the data as well as the technologies that are applied to it, so that data engineers can move and transform the essential data that data consumers need to power the organization. More focus will be on the operational aspects of data rather than the fundamentals of capturing, storing, and protecting it. Metadata will be key, and companies will look to object-based storage systems to create a data fabric as a foundation for building large-scale, flow-based data systems.

 

Trend 2: AI and Machine Learning Unleash the Power of Data to Drive Business Decisions

AI and machine learning technologies can glean insights from unstructured data, connect the dots between disparate data points, and recognize and correlate patterns in data, such as facial recognition. AI and machine learning are becoming widely adopted in home appliances, automobiles, plant automation, and smart cities. From a business perspective, however, AI and machine learning have been more difficult to implement, as data sources are often disparate and fragmented and much of the information generated by businesses has little or no formal structure. While there is a wealth of knowledge that can be gleaned from business data to increase revenue, respond to emerging trends, improve operational efficiency, and optimize marketing to create a competitive advantage, the requirement for manual data cleansing prior to analysis becomes a major roadblock. A 2016 Forbes article published a survey of data scientists which showed that most of their time, 80%, is spent on massaging data rather than mining or modeling it.

Data Scientist work.png

In addition to the tasks noted above, one needs to understand that data scientists do not work in isolation. They must team with engineers and analysts to train, tune, test, and deploy predictive models. Building an AI or machine learning model is not a one-time effort. Model accuracy degrades over time, and monitoring and switching models can be quite cumbersome. Organizations will be looking for orchestration capabilities, like Hitachi Vantara’s Pentaho data integration and machine learning orchestration tools, to streamline the machine learning workflow and enable smooth team collaboration.
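As a simple illustration of why orchestration matters, the sketch below shows one step such a workflow has to automate: scoring a deployed model on recent labelled data and retraining only when accuracy has degraded. It is a hedged sketch using scikit-learn with a made-up accuracy floor, not a description of the Pentaho tooling itself.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Hypothetical threshold; a real pipeline would schedule, version, and audit
# these steps so the whole team can collaborate on the model life cycle.
ACCURACY_FLOOR = 0.85

def evaluate(model, X_recent, y_recent):
    """Score the deployed model on recent labelled data to detect drift."""
    return accuracy_score(y_recent, model.predict(X_recent))

def refresh_model(model, X_recent, y_recent, X_train, y_train):
    """Retrain and swap the model only when monitored accuracy degrades."""
    if evaluate(model, X_recent, y_recent) >= ACCURACY_FLOOR:
        return model  # still healthy, keep serving the current model
    replacement = LogisticRegression(max_iter=1000)
    replacement.fit(X_train, y_train)
    return replacement
```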

 

Trend 3: Increasing Data Requirements Will Push Companies to The Edge with Data

Enterprise boundaries are extending to the edge – where both data and users reside, and multiple clouds converge. While the majority of IoT products, services, and platforms are supported by cloud-computing platforms, increasing data volumes along with low-latency and QoS requirements are driving the need for mobile cloud computing, where more of the data processing is done at the edge. Public clouds will provide the connection between edge and core data centers, creating the need for a hybrid cloud approach based on open REST or S3 app integration. Edge computing will be less of a trend and more of a necessity as companies seek to cut costs and reduce network usage. The edge will require a hardened infrastructure as it resides in the “wild” outside the protection of cloud/data center walls.
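A toy example of the edge-processing pattern described above: summarize raw sensor samples locally and forward only the compact summary to a cloud ingest endpoint over REST. The endpoint URL and payload shape are hypothetical.

```python
import json
import statistics
import urllib.request

# Hypothetical ingest endpoint; in practice this could be a REST or S3 gateway
# fronting the core data center or a public cloud.
INGEST_URL = "https://ingest.example.com/v1/telemetry"

def summarize(readings):
    """Reduce a raw burst of sensor samples to a compact summary at the edge."""
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
        "min": min(readings),
    }

def forward(readings):
    """Send only the summary upstream, cutting bandwidth and latency costs."""
    payload = json.dumps(summarize(readings)).encode("utf-8")
    req = urllib.request.Request(
        INGEST_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```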

 

Trend 4: Data Centers Become Automated

 

The role of the data center has changed from being an infrastructure provider to a provider of the right service at the right time and the right price. Workloads are becoming increasingly distributed, with applications running in public and private clouds as well as in traditional enterprise data centers. Applications are becoming more modular, leveraging containers and microservices as well as virtualization and bare metal. As more data is generated, there will be a corresponding growth in demand for storage space efficiency. Enterprises need to make the most of information technology—to engage with customers in real time, maximize return on IT investments, and improve operational efficiency. Accomplishing this requires a deep understanding of what is happening in their data centers to predict and get ahead of trends, as well as the ability to automate action so staff are free to focus on strategic endeavors. A data center is like an IoT microcosm: every device and software package has a sensor or log and is ripe for the application of artificial intelligence (AI), machine learning, and automation to enable people to focus on the business and not on the infrastructure.

 

As a provider of data center analytics and automation management tools, Hitachi Vantara realizes that a data center is made up of many different vendor products that interact with each other. Therefore, automation must be based on a shared and open API architecture that allows us to simplify transmission of data across our suite of management tools and 3rd-party tools. Everything we have must be API-based so that we can draw information in from other sources to create a more intelligent solution, and we can also pass information out if we’re not the master in the environment, so we can make other things smarter. In addition, our goal is to enrich our library of 3rd-party device API information so that we can capture analytics from a broad range of devices and interact with them. We are taking a very vendor-neutral approach as we recognize that there is a much broader opportunity to deliver better solutions if we integrate with more vendors and partners.
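The sketch below illustrates the kind of API-to-API exchange this implies: one management tool is polled over REST for device health, and another tool is notified when something looks degraded. Both endpoints and the JSON fields are hypothetical stand-ins, not actual Hitachi or 3rd-party APIs.

```python
import requests

# Both endpoints are hypothetical stand-ins for API-based management tools
# exchanging data: one is polled for device analytics, the other is notified.
ANALYTICS_API = "https://storage-analytics.example.com/api/v1/devices"
TICKETING_API = "https://itsm.example.com/api/v1/incidents"

def sync_device_health(session):
    """Pull health data from one tool and raise incidents in another."""
    devices = session.get(ANALYTICS_API, timeout=10).json()
    for device in devices:
        if device.get("status") == "degraded":
            session.post(
                TICKETING_API,
                json={"device_id": device["id"], "severity": "warning"},
                timeout=10,
            )

sync_device_health(requests.Session())
```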

 

Trend 5: Corporate Data Responsibility Becomes a Priority

 

The implementation of GDPR in 2018 has focused attention on Data Privacy and required companies to make major investments in compliance. All international companies that are GDPR compliant now have a data protection officer (DPO) in an enterprise security leadership role. Data protection officers are responsible for overseeing the data protection strategy and its implementation to ensure compliance with GDPR requirements.

 

The explosion of new technologies and business models is creating new challenges as companies shift from being data generating to data powered organizations. Big Data systems and analytics are becoming a center of gravity as businesses realize the power of data to increase business growth and better understand their customers and markets. This has been fueled by advances in technologies to gather data, integrate data sources, and search and analyze data to derive business value. The most powerful companies in the world are those who understand how to use the power of data. Relative newcomers like Amazon, Baidu, Facebook, and Google have achieved their prominence through the power of data. However, with great power comes great responsibility.

 

IT must provide the tools and processes to understand their data and ensure that the use of that data is done responsibly. In my previous blog I described how Hitachi Vantara approaches Corporate Data Responsibility in the development of our products for storage, encryption, content management, AI, and video analytics.

 

These trends represent my own thoughts and should not be considered representative of Hitachi or Hitachi Vantara.

 

Please tune in to a webinar on Thursday, January 17 at 9:00am Pacific time, where I will discuss these 5 trends with Shawn Rosemarin, SVP and CTO, Global Field and Industry Solutions. I am delighted to have Shawn join me to share his unique perspectives on these trends. Register now by clicking on this LINK

This December marks the 70th anniversary of the adoption and proclamation of the United Nations’ Universal Declaration of Human Rights. On this occasion, Hitachi President and CEO Toshiaki Higashihara took the opportunity to tell everyone working for the Hitachi Group about the importance of conducting our business activities with respect for human rights.

 

Human Rights.jpg

This message comes at a very appropriate time in our public conscience. While the Universal Declaration of Human Rights is 70 years old, it is very relevant today. The implementation of GDPR, the European privacy regulation, speaks directly to Article 12 of the Universal Declaration of Human Rights regarding the “interference of one’s privacy”. Other rights that are in the news today are Article 14, “the right to seek asylum from persecution in other countries”, and “the right to equal pay for equal work” in Article 23, which is key to gender equality. Article 5, which says that “no one shall be subject to inhuman, cruel, or degrading treatment”, is the essence of the #metoo movement.

 

While we have made progress with human rights, it is obvious that some things have not changed in the past 70 years. In fact, the explosion of new technologies and business models may be creating new challenges as companies shift from being data generating to data powered organizations. Big Data systems and analytics are becoming a center of gravity as businesses realize the power of data to increase business growth and better understand their customers and markets. This has been fueled by advances in technologies to gather data, integrate data sources, and search and analyze data to derive business value. The most powerful companies in the world are those who understand how to use the power of data. Relative newcomers like Amazon, Baidu, Facebook, and Google have achieved their prominence through the power of data. However, with great power comes great responsibility.

 

Hitachi Vantara is in the business of unlocking the value of our customers’ data. We develop and deliver the technologies that empower our customers’ data strategies, and we are mindful of the responsibility that this requires. Our first tenet is that customers must own their data, not us. Our storage systems are designed to separate control data from user data so that we can maintain the systems using the control data without touching user data. We provide all the tools to store data at the edge, in the core, and in the cloud, and to encrypt it, but the customer owns the encryption keys. Our content management platform comes with content intelligence tools to help customers monitor and enforce requirements for privacy. We go above and beyond the functional application requirements to ensure privacy. We have partnered with Anonos to pseudonymize and de-risk data to enable analytics, AI, and data sharing. Our video analytics provides pixelation of the entire body so that no personally identifying information is revealed, yet the video can still be used for insights and alerts without compromising privacy.

 

Higashihara refers us to the “Hitachi Group Human Rights Policy”, based on the United Nations (UN) Guiding Principles on Business and Human Rights, which were adopted in 2011. This policy declared our determination as a business to respect the human rights of various stakeholders. His message this year reminds us of the importance of having high ethics and a conscious awareness of privacy and human rights when working on new technologies and businesses such as AI and Big Data.

 

December marks the end of 2018. This is a busy time with year end business activities, holiday festivities, and family and friends. Please take time to remember our responsibilities to ensure the protection of human rights.

 

Wishing you peace and happiness during this holiday season and throughout the new year.

GAD Brains.png

The days when data recovery solutions were evaluated on how well they could minimize the two R’s are over. The two R’s stand for Recovery Point Objective (RPO), how much new or changed data is lost because it hasn’t been backed up yet, and Recovery Time Objective (RTO), how long it takes to resume operations. I say this because we can now achieve zero RPO and zero RTO with today’s Global-Active Device on our family of Hybrid G Series and All Flash F Series Virtual Storage Platform (VSP) storage arrays.

 

The Global-Active Device (GAD) is another virtualization capability of the VSP storage platform that creates a virtual storage machine. A virtual storage machine virtualizes two separate VSP storage arrays and makes them appear as one storage array to a host server or a cluster of host servers. GAD is configured so that the primary and secondary storage systems use the identifying information of the primary storage system, and the global-active device primary and secondary volumes are assigned the same virtual LDEV number in the virtual storage machine. This enables the host to see the pair volumes as a single volume on a single storage system, and both volumes receive the same data from the host. When a write is done to one of the GAD pair volumes, the data is replicated to the other pair volume and acknowledged before a write complete is returned to the initiator of the write. That keeps the volumes in sync and ensures zero RPO and zero RTO in the event of a storage system or site failure.
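To illustrate the write ordering described above, here is a deliberately simplified model in code: a write is applied locally, replicated synchronously to the peer volume, and only then acknowledged to the host. This is a toy sketch of the concept, not Hitachi's implementation.

```python
class GadPairVolume:
    """Illustrative model of the synchronous write ordering described above;
    not Hitachi's actual implementation."""

    def __init__(self, local_store, remote_volume=None):
        self.local_store = local_store      # dict of block -> data
        self.remote_volume = remote_volume  # the other half of the GAD pair

    def write(self, block, data):
        self.local_store[block] = data
        if self.remote_volume is not None:
            # Replicate synchronously and wait for the peer's acknowledgement
            # before reporting success, so both volumes stay identical.
            self.remote_volume.replicate(block, data)
        return "write complete"             # only now returned to the host

    def replicate(self, block, data):
        self.local_store[block] = data      # peer applies the same write
        return "ack"
```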

 

The virtual storage machine can span across physical storage systems that are separated by up to metro distances (500 kilometers). GAD provides non-disruptive, high availability (HA), disaster recovery (DR), and rapid data center migration services. In addition, it enables painless virtual machine storage motion where the location of the storage underlying virtual machines is moved between storage environments for load balancing or maintenance. GAD supports active-active processing of shared data, meaning that all interfaces to the storage system are always active, and the system synchronizes writes across the entire storage system. GAD can be configured in a single or clustered server configuration.

 

GAD Config.png

 

Some vendors might use their synchronous replication product to create a synchronized copy pair. This creates an Active/Passive pair on separate storage arrays. Processing occurs on the primary side of the pair, and the passive side is only used for fail-over. This does not provide active-active processing of shared data. In order to achieve active-active processing, GAD provides three key features that distinguish it from synchronous replication products.

 

  1. The first is a streamlined locking mechanism between the storage systems to synchronize the writes.
  2. The second is preservation of SCSI reserves, a control mechanism for avoiding conflicts or congestion with different host initiators.
  3. The third is a quorum disk to resolve conflicts between the active-active pair of storage systems.

 

The quorum disk is an external storage system that acts as a heartbeat for the GAD pair, with both storage systems accessing the quorum disk to check on each other. A communication failure between systems results in a series of checks with the quorum disk to identify the problem and determine which system can continue to receive host updates. The external quorum disk is accessed like a virtualized external disk, which means that it can be a third-party disk. It can also be on a virtual or physical server that has software to present itself as a disk. The quorum disk can even be in the cloud through the use of iSCSI. However, if GAD is used simply for non-disruptive migration, no quorum disk is needed.
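A toy sketch of the arbitration idea: when the inter-array link fails, each array checks in with the quorum disk, and whichever side the quorum can still vouch for continues to receive host I/O. The decision logic below is simplified for illustration; actual GAD arbitration involves more states and checks.

```python
def survivor_after_path_failure(primary_sees_quorum, secondary_sees_quorum):
    """Toy arbitration: when the inter-array link drops, each array checks in
    with the quorum disk; whoever the quorum still sees keeps serving host I/O.
    Real GAD behavior is more involved; this only illustrates the idea."""
    if primary_sees_quorum and secondary_sees_quorum:
        return "primary"    # both healthy; one side is elected to continue
    if primary_sees_quorum:
        return "primary"
    if secondary_sees_quorum:
        return "secondary"
    return "none"           # no quorum reachable: suspend I/O to protect data
```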

 

GAD is a copy technology and can be one of multiple overlapping data copy technologies that are used in a production system. Backup and point-in-time copies are still required to protect against data corruption caused by errors or malicious attacks. While GAD provides protection in a metro area, an asynchronous, out-of-region replication with Hitachi Universal Replicator (HUR) might also be required in addition to GAD. Copy technologies include synchronous and asynchronous copies, clones, snapshots, and thin images or virtual copies. Copies are not only used for operational and disaster recovery, but also for archive, big data, analytics, audits, e-discovery, development and test, and other data repurposing requirements. IDC estimates that 60% of corporate data consists of copies of original data, and that on average, organizations have 13 copies of each data object. Managing the life cycle of copies is a major challenge, and unmanaged or orphan copies of data can be a liability. Hitachi Vantara’s Hitachi Data Instance Director provides a copy data management platform that simplifies creating and managing policy-based workflows that support business functions with controlled copies of data. Integration with Hitachi Data Instance Director is a key advantage of Hitachi Vantara’s GAD.

 

HDID with GAD.png

Hitachi’s Global Active Device is the leader in active-active, continuous availability. Other storage vendors may claim similar replication capabilities. Since they all rely on synchronous replication over metro distances there is not much difference in performance. However, there is a great deal of difference in simplicity, scalability, and ROI.

 

  • Simplicity. Hitachi’s implementation is simple in that it is an extension of the virtualization capability of our Storage Virtualization Operating System RF (SVOS RF). There are no additional appliances required to virtualize the pair of devices or additional software required in the host servers or virtual machines. No BIN files are required to set the configuration. Our virtualization also provides the most choices for selection of a quorum device. Integration with Hitachi Data Instance Director makes it easy to manage the interactions with other data copy technologies and application requirements.

 

  • Scalability. The way we have streamlined the synchronous replication of the GAD pairs and preserved the SCSI reserves provides greater scalability than competitive implementations. Competitive solutions that require the use of appliances scale by adding more appliances, which also adds more complexity. Hitachi GAD provides true active-active processing of shared data across the storage arrays. This scales beyond other implementations where only one storage array is active while the other is used for standby. The controllers in each VSP storage array are also active-active, which gives us greater scalability than other vendors’ controllers that are Active/Passive or ALUA (Asymmetric Logical Unit Access). (See my previous blog on the differences in storage array controllers.)

 

  • ROI. The return on your investment is higher with GAD due to our overall virtualization capability to virtualize external storage and create virtual storage machines. You can virtualize your existing storage behind a VSP array and create a virtual storage machine with another VSP over metro distances. Neither VSP in this scenario requires any capacity from a Hitachi storage system; all the capacity can come from virtualized 3rd-party storage systems. All VSP systems, from midrange to high-end enterprise, support GAD, so you don’t need different solutions and different management tools for midrange, entry enterprise, or high-end enterprise. No additional appliances or host software are required to support GAD. GAD also integrates well with other copy requirements such as HUR and thin images since it is a feature in SVOS RF. The use of Hitachi Data Instance Director with GAD also reduces the cost of copy management and ensures that resources are not wasted on copies that are no longer needed.

 

Learn More

 

For more information on Hitachi Vantara’s GAD implementation

High-end enterprise storage systems are designed to scale to large capacities with a large number of host connections while maintaining high performance and availability. That takes an architecture where a host can access any storage port and have direct access to a number of storage controllers which can service I/O requests and process their data services requirements. This takes a great deal of sophisticated technology, and only a few vendors can provide such a high-end storage system. Hitachi’s high-end Hybrid Virtual Storage Platform (VSP) G1500 and All Flash VSP F1500 arrays provide this capability, which is made possible through a switch architecture that dynamically connects front-end ports with controllers and a global cache with back-end device controllers.

VSP Family.png

For the midrange user, where cost is a key factor and massive scalability is not required, the architecture has to be changed to trade off scalability for reduced cost. However, that trade-off often means that some advanced enterprise functions, like distance replication and zero data loss recovery, would be compromised. With recent advances in Intel multicore technology and PCIe extensions, it is now possible to scale down these systems to reduce costs while still providing these advanced enterprise functions. The recent models of the Hitachi Vantara VSP Hybrid G and All Flash F series storage arrays now provide cost-efficient, scaled-down versions of the G1500 and F1500 enterprise storage arrays, without any loss in enterprise functionality. This was done by consolidating the VSP controller, cache memory, and interconnects onto a cost-optimized, two-controller architecture which utilizes Intel’s latest multicore processors with high-speed PCIe interconnects. These two controllers are configured as Active/Active controllers, which means that any I/O can access any device through either controller.

 

Midrange storage arrays are configured with dual controllers for availability. However, there is a great deal of difference in how the two controllers are configured. Most are configured as Active/Passive or Asymmetric Logical Unit Access (ALUA). Very few are Active/Active. The configuration of the storage controllers is a key differentiator when it comes to the performance and functionality of the storage system. Here are the differences between the different types of controller configurations.

Active Passive.png

 

ALUA.png

 

Active Active.png

 

The VSP’s Active/Active configuration is made possible through the concatenation of the cache that is attached to each controller. Both controllers work with the same cache image of the storage LUN. This configuration is also known as Active/Active Symmetric, since both controllers can process the I/O request, versus the ALUA asymmetric configuration above. This requires additional routing information that is provided by the Storage Virtualization Operating System, which is available in all models of the VSP series. This Active/Active capability provides many benefits. We don’t need to worry about LUN ownership as in the case of asymmetric controllers. This provides the ability to use vMotion for live migration of running virtual machines between servers that don’t have identical primary failover controller assignments, with zero downtime, continuous service availability, and complete transaction integrity. It also provides the ability to load balance across a SAN without the worry of creating a performance issue on the storage. Since the VSP can virtualize external storage, it is also possible to process a cache image of a LUN from an externally attached storage system. This ability to virtualize external storage enables the extension of the useful life of legacy storage systems and the non-disruptive migration of data to and from external systems, as I posted in a previous blog.

 

The latest versions of our midrange offerings, the Hybrid VSP G350/G370 and All Flash VSP F350/F370, come in a cost-effective 2U of rack space with the enterprise features of our high end and over 1 million IOPS for the VSP F370. All priced and packaged for the midrange.

 

On the higher end, which many refer to as the entry enterprise, we offer the Hybrid VSP G700/G900 and All Flash VSP F700/F900, which can scale to 2.4 million IOPS and come in 4U of rack space. The differences in the model numbers are based on the number of Intel processors and cores. Both Hybrid and All Flash entry enterprise models have the same dual controller Active/Active design as the midrange models but with many more Intel cores, more memory, more front-end and back-end ports, and higher internal bandwidths. Here are the specifications for the entry enterprise models.

 

VSP Data Sheet.png

The Hybrid VSP G1500 and All Flash VSP F1500 are included to show the full scale-out capability of their switch architecture. The VSP G/F1500 is a multi-controller architecture in which the processors, memory modules, front-end ports, and back-end ports are all connected through an internal non-blocking crossbar switch, which enables a LUN to be accessed directly through any port without the overhead of passing control between controllers.

Cross bar switch.png

 

There are other architectures which support more than two controllers, but those types of architectures involve a great deal of overhead and complexity. Without an internal switch architecture, no matter how many controllers you hook together, I/O requests for a LUN still have to be processed through the controller that owns the LUN. So if your I/O request comes in on another controller’s port, the I/O has to be passed to the controller that owns the LUN, creating more overhead and complexity with each additional controller that is added. Having two controllers adds redundancy, but adding more controllers to a two-controller architecture can reduce availability, since the cumulative failure rate increases with each added controller. Having two controllers fail independently is rare, and a two-controller failure is usually due to a base failure which would affect all the controllers no matter how many you have.

Four controller.png

 

 

In the case of the VSP G/F1500 the switch architecture would allow the controllers, cache, and LUNs to be assigned dynamically and fail independently.

 

The VSP family of storage arrays provides a choice of cost-optimized configurations from midrange to high-end scalable enterprise systems, all running the same software and providing the same functionality to help our customers preserve their investments in policies and procedures and leverage their ecosystem-related investments. Although there is a difference in architectures, we are able to simulate the architectural differences in software so that all the models have the same functionality even when scaled down to midrange price and packaging. Our dual controller architectures are fully Active/Active, which differentiates us from many other midrange and entry enterprise systems.

In my last blog post I explained how Hitachi Vantara’s All Flash F series and Hybrid G series Virtual Storage Platform (VSP) Systems can democratize storage services across midrange, high end, and mainframe storage configurations. Midrange customers can enjoy all the high end functions that are normally only available on high end systems since Hitachi’s VSP platform is powered by one Storage Virtualization Operating System RF (SVOS RF).

VSP Family.png

In that post I shared the CRN interview with Dell’s Vice Chairman Jeff Clarke, where he announced their plan to consolidate four disparate midrange products and focus on one. That plan immediately drew questions from other vendors as to how they would manage the migration when they transition from the old to the new. The plan was also limited to midrange products and did not extend to the high end, where features like active/active metro clustering (VPLEX) and SRDF replication are only available on their VMAX and PowerMax systems. SRDF replication is used to provide non-disruptive migration for VMAX systems, but is not available on their midrange products.

In other words, their storage plans do not go far enough and do not address the key question of migration for their midrange storage. Hitachi Vantara can provide a solution for Dell EMC users by virtualizing their current Dell EMC storage arrays and democratizing the use of Hitachi VSP high-end enterprise storage services, making them available to all Dell EMC users, whether they are midrange, high-end, or mainframe. This capability is not limited to Dell EMC but is applicable to any vendor’s fibre channel or iSCSI attached storage arrays. As the name implies, the signature feature of our VSP is virtualization. Our approach to storage virtualization is unique in the industry since we do the virtualization in our Virtual Storage Platform (VSP) controllers with the Storage Virtualization Operating System RF (SVOS RF). Hitachi storage virtualization can greatly simplify storage management, particularly when used to consolidate and virtualize arrays from multiple storage vendors or multiple storage array architectures from the same vendor. We announced storage virtualization in 2004 with our Universal Storage Platform (USP). Hitachi was one of the first to announce storage virtualization and has carried that forward in our latest VSP Hybrid (G Series) and All Flash (F Series) storage systems.

VSP Virtualization.png

While other vendors approached storage virtualization through the use of appliances sitting on the Storage Area Network (SAN), Hitachi’s unique storage architecture enables the virtualization to be done in the storage controller. This approach ensures that Hitachi storage virtualization has the same reliability, availability, scalability, and performance of an enterprise-class storage system, rather than the limited capabilities of an appliance sitting on the SAN. Externally attached, fibre channel or iSCSI, heterogeneous storage is presented as a common pool of storage to the VSP controller and benefits from all the Hitachi enterprise features of the VSP, such as Active/Active metro clustering, three-data-center disaster recovery, predictive analytics, and automated management of common workflows. Other vendors’ midrange and enterprise storage, which lack these features, are immediately upgraded with these services when they are attached to the VSP. Our approach to storage virtualization is a perfect complement for server virtualization, and we were the first storage virtualization vendor to certify with VMware.

 

Virtualized third-party storage can also benefit from VSP’s Storage Plug-in for Containers. Containers are a new form of lightweight and portable operating system virtualization without the overhead of an operating system image. All the necessary executables are inside a container: binary code, libraries, and the configuration files to run microservices and larger applications. Containers were designed to be stateless, which means that their disk files go away when the container goes away. In order to persist the data and make it easier to manage storage, Docker introduced a set of commands for volume management. This enables storage vendors to create plugins to expose their storage functionality into the Docker ecosystem. Hitachi Storage Plug-in for Containers lets you create containers and run stateful applications by using Hitachi VSP series volumes, including externally virtualized volumes, as dynamically provisioned persistent volumes. Hitachi Storage Plug-in for Containers utilizes built-in high availability to enable a Docker swarm manager or a Kubernetes master node to orchestrate storage tasks between hosts in a cluster. Storage Plug-in for Containers can also be used in non-clustered environments.
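For example, a stateful application could request a dynamically provisioned persistent volume through a storage class registered by such a plugin. The sketch below uses the official Kubernetes Python client; the claim name and the storage class name are hypothetical placeholders for whatever the plugin registers in your cluster.

```python
from kubernetes import client, config

# The storage class name below is a hypothetical placeholder for whatever
# class the storage plugin registers in your cluster.
config.load_kube_config()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="orders-db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="hitachi-vsp-block",  # hypothetical class name
        resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
    ),
)

# The plugin provisions a volume behind this claim and binds it to the
# stateful application's pod when the claim is consumed.
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```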

 

Since external storage uses the licensed services of the host VSP, there is no need to have separate licenses for the external storage. This not only reduces license fees, but also reduces the operational cost of managing features and licenses on the attached storage systems. Virtualization can further reduce costs by extending the life of legacy systems that have already been capitalized or reduce costs by non-disruptively migrating systems when a refresh is required.

 

According to CIO.com legacy systems are the Achilles heel for digital transformation. “Unfortunately, many businesses today are hard-coded in technology investments of the past: legacy systems – containing the data that they need to make decisions today and drive insights. Now more than ever, reliance on legacy systems is one of the biggest hurdles in the digital transformation journey.”

 

When it comes to upgrading legacy infrastructure, the biggest challenge is the upgrade of storage systems. Unlike compute and network infrastructure, storage is stateful: it holds data that must be moved from the old to the new storage system. This movement of data takes time, and as the amount of data continues to increase, the data migration extends into days, weeks, and even months. The least disruptive and easiest way to move data is through storage virtualization. Virtualization of legacy storage ensures that applications can continue to access the data while it is being moved in the background. The VSP's bidirectional FC and iSCSI ports allow simultaneous host and external storage connectivity without the need for dedicated external ports and host ports. Logical device migrations are able to proceed while the original logical device remains online to the hosts, and the VSP will seamlessly switch over the mapping from the source logical device to the target logical device when completed. No changes to host mapping are required.

 

Appliance-based virtualization systems can move data in the background, but performance, reliability, and scalability are limited by the capabilities and resources of the appliance compared to a fully functional enterprise virtualization engine with its large cache and large number of bidirectional ports. Utilizing an enterprise storage controller also leverages the built-in security features of the controller and management interfaces, such as antivirus, role-based access for secure management, authentication and authorization, audit logging, and directory services.

 

So, if your storage vendor is planning to consolidate their storage platforms without a plan for migration, or you are struggling with a number of legacy storage platforms and would like to upgrade to the latest enterprise capabilities without ripping and replacing your storage estate, please consider using Hitachi’s VSP storage platform to help you with the migration and consolidation. Our VSP systems can come without any internal storage attached, in the event that you already have enough capacity in your legacy storage and want to virtualize and extend the useful life of your current storage, or you want to use it as a sling box to move data across storage systems. However, the biggest benefit would be to leverage all the capabilities that are available in the VSP with SVOS RF.

Forbes believes it is imperative for CIOs to view cloud computing as a critical element of their competitiveness. Cloud-based spending will reach 60% of all IT infrastructure and 60-70% of all software, services, and technology spending by 2020. In 2019, CIOs will have to optimize their use of the newest cloud technologies in response to their business requirements.

rean.png

Forbes notes that a full transition to the cloud has proved more challenging than anticipated and many companies will use hybrid cloud solutions to transition to the cloud at their own pace and at a lower risk and cost. This will be a blend of private and public hyperscale clouds like AWS, Azure, and Google Cloud Platform. CIOs will rely upon migration assessment and planning activities to identify an optimal allocation of workloads across public and private cloud environments. Private clouds are not simply existing data centers running virtualized, legacy workloads. They require highly modernized digital application and service environments running on true cloud platforms like Hitachi Enterprise Cloud. Hybrid clouds must bond together the two clouds through fundamental technology, which will enable the transfer of data and applications.


While public cloud vendors offer infrastructure as a service (IaaS), which delivers compute, storage, and network resources in a self-service, highly automated fashion, as well as platform as a service (PaaS), such services do not completely eliminate the need for IT operations management. Customers still need expertise to choose the right service elements and to configure them appropriately, and they retain responsibility for the guest OS, middleware, and applications that run on their IaaS compute instances. Public cloud also introduces new challenges in governance, financial management, and integration.


Customers look to third parties when transitioning to public cloud, due to lack of expertise or staffing. Engagements may be on a short-term tactical basis or as part of a long-term managed service. As a result, an ecosystem of managed service providers (MSPs) and professional service providers has developed around public cloud. This business has grown to the extent that Gartner has developed a Magic Quadrant for MSPs that offer managed services and professional services related to infrastructure and platform operations for one or more hyperscale integrated IaaS+PaaS providers. The term “hyperscale” is used by Gartner to refer to Amazon Web Services, Microsoft Azure, and Google Cloud Platform.


One of the vendors that has been on the Gartner Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, Worldwide since its first release in March 2017, is REAN Cloud. As we recently announced, Hitachi Vantara has finalized the acquisition of REAN Cloud to directly complement the strength of our Hitachi Enterprise Cloud (HEC) and on-premises deployments. REAN Cloud is a global cloud systems integrator, managed services provider and solutions developer of cloud-native applications across big data, machine learning and emerging internet of things (IoT) spaces. Through this acquisition, Hitachi Vantara gains critical capabilities and industry-leading expertise in cloud migration and modernization, instantly elevating its cloud offering portfolio and managed services capabilities.

 

REAN Cloud has expertise working with the hyperscale public clouds. They are a Premier Consulting Partner in the Amazon Web Services (AWS) Partner Network (APN) and a Microsoft Azure Silver Partner. REAN Cloud offers managed services and solutions for hyperscale-integrated IaaS and PaaS providers and is one of the few systems integrators capable of supporting the entire cloud services life cycle. Backed by extensive security DNA and deep compliance IP and expertise, REAN Cloud specializes in helping enterprise customers that operate in highly regulated environments – Financial Services, Healthcare/Life Sciences, Education and the Public Sector – accelerate their cloud investments while extracting maximum value from use of the cloud itself.


Last year REAN Cloud acquired 47Lining to provide deep capabilities in cloud-based analytics and machine learning that expand Hitachi Vantara’s ability to maximize data-driven value for vertical IoT solutions. This April, 47Lining announced its Amazon Web Services (AWS) Industrial Time Series Data Connector Quick Start. The Connector Quick Start allows companies to quickly and easily synchronize their industrial time series data to AWS so they can perform advanced predictive and historic analytics using the full breadth of AWS big data and partner services.


The REAN Cloud team will be joining Hitachi Vantara’s Global Services organization, which is led by Bobby Soni, chief solutions and services officer. In July when we announced our intent to acquire REAN Cloud, Bobby posted this blog which gives his view on how REAN Cloud will be integrated into our roadmap to offer our ecosystem of partners and customers a secure and unified multi-cloud platform. What really excites Bobby about REAN Cloud is its people as we can see in this quote from his blog post:

 

“The REAN Cloud team is highly talented, passionate about cloud and big data engineering, and clearly at the top of their fields. Data scientists, DevOps engineers, big data consultants, cloud architects, AppDev engineers, and many more – all of them smart and collaborative. We are all thrilled to welcome them to our own team of talented professionals. Together, we will continue to push the envelope by creating unique and insightful analytics and IoT solutions in collaboration with our partners and customers.”

 

Hitachi Vantara's cloud solutions offer unified cloud management for workloads across public, private and hybrid cloud environments. By combining REAN Cloud's expertise in public cloud and intelligent data governance with Hitachi Vantara’s robust global delivery ecosystem, Hitachi Vantara will now be better positioned to meet the growing infrastructure and analytics-based needs of its customers as they transition to the cloud.

democratization.png

Since the acquisition of EMC by Dell in 2016, the Dell/EMC product line has been fragmented, with VNX, VNXe, EqualLogic, Compellent, Isilon, SC Series, XtremIO, Unity, VMAX, VMAX3, PowerMax, etc. In May, CRN reported that Dell EMC’s “new storage strategy will have Dell engineering teams focusing squarely on one primary storage product line for each market segment: high end, midrange, low-end, and a separate product for the unstructured file and object storage market.”

In a May 21 interview with CRN, Dell Vice Chairman Jeff Clarke unveiled the midrange roadmap, which was to break down silos, consolidate four midrange products, and concentrate on one. That was the extent of the “Roadmap”. This was critiqued by HPE’s Phil Davis, who wondered which of these architectures would survive and what this would likely mean for customers in this CRN article. Jeff Clarke responded in another CRN article, saying, "Despite what may be being said by others, our products on the roadmap today are supported throughout their lifetime period and we will provide a non-disruptive migration from the old product to the new product." Pure Storage responded to this with a blog by Ken Steinhardt which questioned how the many current products would be supported and how “non-disruptive migration” would be accomplished.

In Jeff Clarke’s response to his critics in the CRN article above, he said:

“When I talked to customers and said, ‘Would you rather buy four midrange products or one, and the right one, with all of our best IP?’ the answer is overwhelmingly positive, to the degree of 100 percent, that ‘We’d prefer to buy one product that’s the right product with all of the Dell EMC technology and IP in it.’”

With all this back and forth generating a lot of internet chatter, Hitachi Vantara has been sitting on the sidelines watching the debate. Having one product for midrange is a no-brainer, but it has taken Dell EMC several years to realize the need, and it will take several more years for them to implement it for the midrange. Having one product for midrange does not go far enough. Hitachi Vantara believes that even more benefits can be realized with one platform not only for the midrange but for all three market segments together: high-end and mainframe as well as midrange. In their 2017 Magic Quadrant for general purpose disk arrays, Gartner acknowledged that sharing a common architecture and management tools from the smallest VSP G200 to the flagship VSP G1500 preserves customer investments in policies and procedures, and leverages ecosystem-related investments. Dell EMC has acquired so many different storage products that it will be very difficult for them to integrate them without a major disruption to existing customers. Dell EMC is not alone; all major storage vendors, including IBM and HPE, have multiple storage platforms that cannot be integrated for operational efficiency. This creates complexity, limits agility, and risks stranding some products without an upgrade path or abandoning some market segments completely.

Unlike other storage vendors who have had to acquire a good portion of their storage architectures and technologies and are struggling to integrate them, Hitachi Vantara has had the benefit of a strong R&D parent with over 4,000 patents in storage technology, developing our own unique architecture. This unique architecture, in our Virtual Storage Platform (VSP), has enabled us to provide a common storage platform not only for our midrange customers, but also for our high-end and mainframe customers. The VSP platform is powered by the Storage Virtualization Operating System (SVOS), which enables common management and capabilities across the VSP family of open-systems, cloud, containerization, and mainframe solutions. Even when you need to add different storage personalities for the unstructured file and object storage market, Hitachi’s strategy is designed to let you consolidate data on a single platform. Object (HCP) and file (HNAS) can be deployed as gateways that use the same VSP / SVOS on the backend.

VSP Family.png

The physical packaging of the different models is optimized for their use case. For instance, the hybrid flash (VSP G series) and all flash (VSP F series) models for the midrange are packaged in a cost-optimized 2U controller, while the controller for the VSP for mainframes is packaged in a 10U controller chassis that leverages blades for easy upgrades of compute capabilities and connectivity, for massive scalability as well as mainframe FICON connections. However, the SVOS operating system remains the same. What this means is that customers with a midrange VSP can utilize functions like Global-Active Device for zero RTO (Recovery Time Objective) and zero RPO (Recovery Point Objective) metro clustering and add remote replication for three-data-center protection, which other vendors can only provide in their high-end systems, if at all. Support for containers to allow DevOps to accelerate cloud-native application development, virtualization of externally attached storage for non-disruptive device migrations, and a 100% data availability guarantee are available across our VSP/SVOS platforms. The latest version of our storage virtualization operating system, SVOS RF, which was released last year, enables us to build the SVOS microcode so that a “single package” can be used on multiple VSPs.

From a value standpoint, having a common SVOS platform avoids the technical debt incurred by having multiple platforms, stranded storage on different systems, management & service complexity, and the overhead of managing different code upgrade cycles. The fact that data services are common across all offerings enables the “democratization” of storage services. A midrange user now has access to the same, super-powerful features as the biggest banks. Or, as a large company, you can deploy the same super-powerful features in remote, edge offices as you do in the core. This helps for data protection because you can easily use a single offering to replicate data back to the core. It also helps with best practices because you can use the same run books for deploying storage OR use the same Hitachi Automation Director policies everywhere. This makes global automation easier to roll out and helps de-risk operations anywhere in the world for better uptime and superior customer experiences.

Hitachi Vantara is the only storage vendor that provides all of the Hitachi technology and IP for not only our high end and mainframe customers, but also for our midrange customers as well. We democratize storage services so that all can enjoy the benefits of our proven technology. You don’t have to wait two or three years to see a roadmap evolve. It is here today with Hitachi Vantara. For information on Hitachi Vantara’s Storage portfolio visit our website at this link

GUIDE SHARE EUROPE is holding their 2018 GSE UK Conference, which will take place at Whittlebury Hall on November 5th, 6th and 7th 2018, in Whittlebury, UK. This conference will provide 3 days of intensive education across a broad range of mainframe topics.

 

whittlebury.png

 

GUIDE SHARE EUROPE, GSE for short, is a European user group for IBM mainframe hardware and software. GUIDE was an international user group for IBM, founded in 1956. It grew to be the largest IBM user group until the 1990s. In 1994, GUIDE in Europe merged with SHARE, another large IBM user group, and became GUIDE SHARE Europe. In the US, GUIDE ceased to exist and many of its activities and projects were taken over by SHARE. GSE, like SHARE, is primarily focused on IBM mainframes and has more than 1,300 member organizations in Europe, which shows that mainframes still have a large presence there. GSE assists its members in exchanging experience and information, assessing developments in information technology, and providing them with opportunities to further develop and influence IBM's product policy.

 

SHARE is a volunteer-run user group for IBM mainframe computers that was founded in 1955 and is still active today, providing education, professional networking, and industry influence on the direction of mainframe development. SHARE members say that SHARE is not an acronym; it is what they do. SHARE and GUIDE were the precursors of the open source communities that we have today.

 

The mainframe market is alive and well and may be on the verge of a renaissance in the coming IoT age. We have all seen the staggering projections for 30+ billion new internet-connected devices and a global market value of $7.1 trillion by 2020. That is almost 8 times the estimated 4 billion smartphones, tablets, and notebooks connected today. That translates into a staggering amount of additional new transactions and data, which means compute and data access cycles, as well as storage. That many new devices connected to the internet also opens up many more security exposures.

 

These are areas where mainframes excel, with their unique architecture of central processing units (CPUs) and channel processors that provide an independent data and control path between I/O devices and memory. z/OS, the mainframe operating system, is a share-everything runtime environment that gets work done by dividing it into pieces and giving portions of the job to various system components and subsystems that function independently. Security, scalability, and reliability are the key criteria that differentiate the mainframe, and are the main reasons why mainframes are still in use today, especially in high-transaction, high-security environments like core banking. These same capabilities will be required by the backend systems that support IoT.

 

Hitachi Vantara is one of the few storage vendors that support mainframes with its scalable VSP enterprise systems. Our own Ros Schulman will be presenting two sessions:

 

Virtual Storage Platform Update - Latest and Greatest: November 5 at 13:00.

In this session Ros will discuss the latest updates to the Hitachi Vantara G1500 and F1500 Storage subsystems. This update will include hardware, software, and solutions as well as options for moving data to the cloud.

 

Data Resilience and Customer Solutions for the Mainframe in the IoT Age: November 5 at 15:30

Ros will discuss the effects of the IoT age on mainframes, the importance of data resilience and analytics, Hitachi Vantara IoT solutions available today for mainframe environments, and where we may be headed in the future.

 

For other sessions, please see the link to this Agenda