
Business transformation is all about re-imagining customer experience, business processes, operations and business models. In extreme cases, entire industries are being disrupted. We’ve witnessed first-hand how digital giants Uber and Lyft have upset transportation, Airbnb has turned the hotel industry on its head and our retail experience has been forever changed by Amazon. These are momentous changes, yet all around us incremental “digital transformations” are underway, fueled by automation. IDC cites examples like automated toll collection, grocery self-checkout and automated pizza delivery, which may not seem big individually, but collectively such incremental changes are transforming the way we live, work and play. Behind changes big and small are the developers of the world, who have become value multipliers for the business. And it doesn’t stop with developers. In a new and powerful partnership, IT operators accelerate the value delivered by developers and multiply it yet again, speeding delivery of application-based services to market. Lines of business are partnering with IT to identify technologies with the potential to transform the way they serve their customers, architect their processes and convert traditional products into services-based offerings. These are the superpowers of DevOps.



According to IDC, about 51% of large organizations use DevOps. Developers build their apps and port them into a Platform-as-a-Service (PaaS), where they gain built-in intelligence and the ability to abstract and move workloads among multiple underlying cloud types. PaaS allows applications to be built or re-platformed into a modern model with portability and modularity. It enables packaging apps for easy consumption, leveraging containers, microservices and “function-as-a-service,” wherein developers use pre-built functions in their apps instead of re-inventing the functionality wheel with each app they write.
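To make the function-as-a-service idea concrete, here is a minimal, purely illustrative Python sketch (not any vendor’s FaaS SDK; all names are invented): apps call shared, pre-built functions by name instead of re-implementing them.

```python
# Illustrative only: a registry of pre-built functions that many apps can
# share, instead of "re-inventing the functionality wheel" in each app.
PREBUILT = {}

def register(name):
    """Decorator that publishes a function under a well-known name."""
    def wrap(fn):
        PREBUILT[name] = fn
        return fn
    return wrap

@register("tax")
def add_sales_tax(amount, rate=0.08):
    # One shared implementation of a common need.
    return round(amount * (1 + rate), 2)

def invoke(name, *args, **kwargs):
    """How an app would call a pre-built function by name."""
    return PREBUILT[name](*args, **kwargs)
```

Any number of apps can then call `invoke("tax", 100)` without owning the tax logic themselves.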


“As demand for applications increases, the main business driver for PaaS solutions is the agility gained with which a developer can take a concept and deliver value to the user.” - IDC


Red Hat OpenShift is an open source PaaS built on containers (Docker, CoreOS) and orchestrated via Kubernetes. Most enterprises have tens to hundreds of apps running their business. Red Hat OpenShift is interesting to enterprises in large part because it supports both cloud-native, stateless apps and traditional stateful apps, acting as a bridge between these worlds (mode 1 and mode 2, in Gartner speak).




Developers need data to feed their apps, making access to multiple sources of data a critical component for building modern applications. Data comes from everywhere: traditional databases, local files, cloud, machines, web portals and social media. A survey of 400 executives in the financial services industry found that 49% claim to have more than 10 internal and external data sources that are relevant to their business processes (Kofax, 2017). Thus, data services are an essential extension to PaaS.
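As a loose illustration of feeding an app from multiple sources, the sketch below (all record names invented) merges partial customer records from a database row, a file and a SaaS API into one profile, with earlier sources taking precedence.

```python
# Hypothetical sketch: merging records pulled from several data sources
# (a database, a file export, a SaaS API) into one view for an application.
def merge_sources(*sources):
    """Later sources only fill in fields missing from earlier ones."""
    merged = {}
    for source in sources:
        for key, value in source.items():
            merged.setdefault(key, value)
    return merged

db_row  = {"id": 42, "name": "Acme Corp"}
csv_row = {"id": 42, "region": "EMEA"}
api_row = {"id": 42, "name": "ACME CORPORATION", "tier": "gold"}

profile = merge_sources(db_row, csv_row, api_row)
```

Real data services handle conflicts, lineage and scale far beyond this, but the shape of the problem is the same.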


Hitachi Vantara specializes in data services for enterprises, providing data integration, cleansing, streaming, replication, synchronization and data-pipeline optimization. Hitachi partners with Red Hat to offer data services for Red Hat OpenShift as well as for Red Hat OpenStack environments. Hitachi Unified Compute Platform converged infrastructure leverages the Hitachi Storage Plug-in for Containers, which allows persistent storage on Virtual Storage Platform systems to be orchestrated with Kubernetes and Docker Swarm in support of stateful application development.
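Data cleansing of the kind mentioned above often amounts to chaining small steps. The following toy pipeline (not Hitachi’s actual tooling; every step name is invented) shows the general shape: each stage either transforms a record or drops it.

```python
# Toy data-cleansing pipeline, in the spirit of the integration/cleansing
# services described above. Steps and field names are illustrative.
def strip_fields(record):
    return {k: v.strip() if isinstance(v, str) else v for k, v in record.items()}

def drop_invalid(record):
    # Discard records without a plausible email address.
    return record if record.get("email") and "@" in record["email"] else None

def normalize_email(record):
    record["email"] = record["email"].lower()
    return record

def run_pipeline(records, steps):
    for step in steps:
        records = [r for r in (step(rec) for rec in records) if r is not None]
    return records

raw = [{"email": "  Alice@Example.COM "}, {"email": "not-an-email"}]
clean = run_pipeline(raw, [strip_fields, drop_invalid, normalize_email])
```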


Sound interesting? Learn more in our recent press release and meet with Hitachi Vantara and Red Hat experts at the Red Hat Summit in San Francisco May 8 – 10, 2018.





The Business Value of Red Hat OpenShift

Developers in the Driver’s Seat: The New DX Power Brokers – IDC Directions, 2018

Businesses are becoming customer-obsessed, developing new business models and innovating new business processes with the goal of unlocking new revenue opportunities and transforming into leaner, digital versions of themselves (often using intellectual property already owned). To do this, they must change the lens through which they view their industries, partners and competitors -- their world views. The price for inaction is to be marginalized or, worse yet, made wholly irrelevant by new entrants with Internet of Things (IoT) and data-driven insights in hand.


Here are FIVE WAYS to become IoT-ready and relevant.


1. Go big or go home. According to Harvard Business Review’s “sensemaking” process, the way to divine your customer’s true experience is to “approach the research without hypothesis, gathering large quantities of information in an open-ended way, with no preconceptions about what they will find.”[1] The point here is that you are not solving a known problem, so your usual sources of data and traditional market-analysis tools will not yield the insights you are seeking. You will need to experiment with machine learning, artificial intelligence and analytics applied directly to your own large datasets, and compare and contrast against outside data sources as well.


2. Be in it for the long haul. The Internet of Things Business Index 2017, from The Economist, states that from 2013 to 2016 IoT did not advance far beyond research and planning, and that the most progress has been in improvements to external products and services. The survey further suggests limited progress in using IoT to monitor and measure internal operations. This may imply that top priority has been placed on improving the customer experience, prioritizing increases in top-line revenue over cost savings and efficiencies. Case in point: through the major, multi-year Intercity Express Programme (IEP), the UK has entered into an innovative train-as-a-service partnership with Hitachi Rail Europe. Hitachi retains ownership of its trains and their maintenance, and Network Rail pays for on-time service. According to IoT UK, “Hitachi has integrated a network of sensors into its trains that will collect a massive amount of data on both the trains themselves and their journeys, such as location, speed and power consumption.” Hitachi uses analytics to monitor trains for potential issues before they become problems and to perform preventative maintenance, ensuring trains are in top condition and are where they need to be, when they are needed. The train system has just completed its first Edinburgh test journey, reducing travel time by 22 minutes. This new IoT-enabled business model benefits riders with more reliable, safer and faster transport, and it benefits train operators, who, for example, no longer need duplicate trains on standby in case of breakdown. A stick-with-it mentality leads to long-term payoffs all around.
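The predictive-maintenance idea behind this kind of partnership can be sketched in a few lines. The readings and thresholds below are invented for illustration and are in no way Hitachi’s actual algorithm: a sensor value that drifts far above its recent average is flagged before it becomes a failure.

```python
# Minimal predictive-maintenance sketch: flag a reading well above its
# rolling average. All numbers here are made up for illustration.
from statistics import mean, stdev

def flag_anomalies(readings, window=5, sigmas=3.0):
    """Return indices whose value exceeds the rolling mean by `sigmas` stdevs."""
    flagged = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sd = mean(recent), stdev(recent)
        if sd > 0 and readings[i] > mu + sigmas * sd:
            flagged.append(i)
    return flagged

# A pretend axle-temperature trace with one obvious spike.
axle_temp_c = [41, 42, 41, 43, 42, 41, 42, 77, 42, 41]
alerts = flag_anomalies(axle_temp_c)
```

A production system would fuse many signals and learned models, but the principle is the same: act on the spike before the breakdown.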


3. Recognize IoT isn’t all business. It’s also about improving society, including public safety. Examples like the Hurricane Harvey emergency response are bellwethers of IoT progress. IoT can help us stay out of harm’s way, and if we are already in need of rescue, IoT technology can help get emergency aid to us faster. According to Forbes, “Even before Harvey made landfall, organizations such as NASA, NOAA, and municipalities were using sensor data, surveillance and satellite imagery to predict not just where the storm was likely to impact, but also coordinate with first responders and law enforcement.” Think about how your IoT innovations might make the world a safer and better place.


4. Look for low-hanging fruit. In one online IoT class, an MIT CSAIL professor advises students to start small and be patient. Students are further advised to choose something with a clear value proposition (as in something that saves time and/or money) and to address a fail-safe application. We are reminded to run controls, observe and measure -- and, once again, to be patient and search for insights. @MIT_CSAIL
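A “run controls, observe and measure” pilot can be as simple as comparing one metric between an instrumented line and an untouched control line. The numbers below are entirely made up, and the calculation is just a sketch of the discipline being recommended.

```python
# Hypothetical "start small, run controls" sketch: compare task times on a
# sensor-instrumented pilot line against an uninstrumented control line.
from statistics import mean

control_minutes = [62, 60, 61, 63, 59]   # task time without IoT assist
pilot_minutes   = [51, 49, 52, 50, 48]   # task time on the instrumented line

def improvement(control, pilot):
    """Fractional time saved by the pilot relative to the control."""
    return (mean(control) - mean(pilot)) / mean(control)

saved = improvement(control_minutes, pilot_minutes)
```

If the pilot shows a clear, repeatable saving against the control, that is the signal to scale up; if not, patience (and another small experiment) is cheaper than a big rollout.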


5. Don’t try to do it all yourself. According to BusinessDictionary, “Co-creation allows and encourages a more active involvement from the customer to create a value rich experience.” I would add that it helps foster ideas from one end of the value chain to the other; not to mention, the likelihood of positive outcomes increases when all parties have some skin in the game. Likewise, in readying your IT for IoT, it probably doesn’t make sense to construct bespoke infrastructure to support each IoT deployment, because doing so is complex. From manufacturing floors to solar farms to water treatment plants, sensors and analysis of the data they collect are helping organizations optimize their physical assets. The expected result is greater performance and output, efficiency, efficacy and extended life of capital equipment. Sometimes assets are located where it’s not simple to deploy IoT on public cloud. New to the market, Hitachi IoT Appliance simplifies IoT deployments in any number of industrial settings and can be placed at a location of your choosing, from edge to core, where data privacy, compliance, network limitations and other challenges can be more easily addressed. It lets you keep your focus where you can drive the most value for your business and your customers, not on IT setup. Hitachi IoT Appliance is a consumption model for Lumada IoT Platform software: the appliance comes pre-loaded with Lumada IoT Platform software, pre-configured and pre-tested at the factory. It is built on a hyperconverged, microservices-based architecture with support for machine learning and analytics, meaning that Lumada software applications are delivered as microservices, with compute and physical storage merged in a single system.




More than two years ago, McKinsey said that “hype may actually understate the full potential of the Internet of Things.” As more is learned, it seems the biggest and most economically impactful breakthroughs may come from the industrial sectors. Predictive maintenance, smart manufacturing, smart energy, and many types of operational optimization using augmented reality and artificial intelligence are being applied in new, creative and lucrative ways.


The Hitachi NEXT 2017 conference is showcasing the latest in IoT technology and business solutions this week in Las Vegas, where you’ll find Hitachi IoT Appliance, powered by Lumada, on display in demos and sessions; I hope you can join us. If you can’t join us in person, check out the live stream or catch recorded sessions when you get a chance. Meanwhile, may your IoT data be plentiful and fruitful, and may the IoT gods smile upon you.


[1] “An Anthropologist Walks into a Bar…,” The Leader’s Guide to Problem Solving, Harvard Business Review


In a 2016 Forbes global survey of top executives, most respondents identified new business models (41%) as the top driver for digital transformation, with new technologies (40%) an extremely close second. This tells me it takes both for a business to transform. SAP HANA is one of the new technologies that will underpin your digital transformation and help you make data-driven, informed decisions to select and optimize business models.


As a pillar of the digital enterprise, SAP HANA empowers faster and more accurate business decisions by performing real-time analysis across multiple data sources. You want these benefits fast, yet deploying SAP HANA takes time and expertise. Once it’s up and running, you want SAP HANA to deliver on these promises, and that means running it on the right infrastructure.


How do you decide which infrastructure is right? How do you decide whom to partner with for needed expertise when deploying SAP HANA? Here are critical criteria to consider.



Performance, cost, migration strategies, maintenance, infrastructure and sizing are key considerations as you migrate to SAP HANA. With over 20 years of experience, Hitachi and Oxya experts design, deploy and manage SAP environments for some of the largest global organizations. Our expertise lies in managing, governing, mobilizing, migrating and analyzing data and the infrastructure that supports it.



Hitachi does significant testing, sizing, software and hardware integration, and solution development work to ensure our customers can trust that our solution for SAP HANA will work as expected. Converged infrastructures have become the fastest way to bring SAP HANA to production. Hitachi UCP for SAP HANA is a converged infrastructure that combines software, enterprise-class storage systems, compute and networking components. It’s offered as an appliance or via SAP Tailored Datacenter Integration (TDI), and is available in scale-up and scale-out configurations sized from starter to massive scale.



Benchmarks can be a useful tool in assessing solutions for SAP HANA. There are many factors to consider, such as analyzing benchmark results by comparing tested configurations among vendors and understanding how these configurations affect the results. One key evaluation criterion is to check the tested configuration against real-world sizing. For example, a benchmark run on a large and expensive configuration may produce lower response times, but may not reflect real-world sizing or the lowest TCO for your SAP HANA solution. Nonetheless, a vendor who is transparent and shares benchmark results with the SAP community, and one that appears on the leader board, is worth consideration. Hitachi recently published an SAP BW Edition for SAP HANA benchmark using what we think is a real-world configuration, and it produced some impressive results. Hitachi has also published outstanding results using real-world configurations for the SAP BW Advanced Mixed Load (BW-AML) benchmark.
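One way to sanity-check a benchmark against real-world sizing is to normalize the raw score by the cost of the tested configuration. The sketch below uses entirely fictional vendors and numbers; the point is only that the “winner” can change once cost enters the comparison.

```python
# Illustrative only: weigh a raw benchmark score against the cost of the
# tested configuration. Vendors and figures are made up.
def value_score(throughput, config_cost):
    """Higher is better: benchmark throughput per unit of configuration cost."""
    return throughput / config_cost

vendors = {
    "vendor_a": value_score(1_200_000, 400_000),  # big, expensive config
    "vendor_b": value_score(  900_000, 150_000),  # right-sized config
}
best = max(vendors, key=vendors.get)
```

Here vendor A posts the higher raw throughput, but vendor B delivers twice the throughput per dollar, which may better reflect your real-world TCO.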



Our customers are our best asset and our best evidence of success. In-memory databases like SAP HANA are algorithm- and data-intensive and consume vast computing resources, yet even so, Infosys has reduced SAP HANA infrastructure costs by 250% and power requirements by 95%. M.buego saw a 100X increase in reports and analytics production, and international retailer Spar can access analysis results and current numbers faster than ever before. Canada-based Spin Master, the fourth-largest toy company globally, has cut batch run times by 50% -- from days to hours -- with its Hitachi solution.


When you need real-time decisions to jump start your digital transformation, consider moving to SAP HANA. Hitachi can take you from a starter UCP for SAP HANA solution and scale as large as you need, giving you the confidence to expand and deploy more SAP HANA modules and applications along the way.



DockerCon 17 was held in Austin last week, just a month after Docker turned four. At the conference, it was evident that Docker is “all in” not only with next-gen technology and tools for Devs, but with a greater affinity for the Ops half of the DevOps community. There was an emphasis on security, logging and metrics, scalability, resiliency, consistency, and instructions for bringing containerized apps from dev/test to production. Docker founder and CTO Solomon Hykes declared in his keynote, “Operators need Docker to just work.” He went on to introduce a new secure Linux subsystem for Docker, which he dubbed “Immutable Infrastructure.”


Speaking of scalability, resiliency and consistency, this was the year of the enterprise at DockerCon. The conference kicked off on tax day in the USA, and Intuit announced on stage that it processed 25 million tax returns on its tax software running in Docker containers. Visa shared stats saying 90% of infrastructure runs at 15% utilization; for Visa, Docker on bare metal came to the rescue, with the company processing 100,000 transactions per day on a Docker-based platform. ADP says it is running 3,771 Docker containers across 7,500 deployments. In their session, ADP said their Devs were itching to use Docker and engage with Ops, and advised others looking to transform and embrace DevOps to “work to disrupt yourself.” Bottom line: enterprises are beginning to trust and adopt containers for more than dev/test, and they are adopting containers not only for their cloud-native apps but, in some cases, for traditional apps. One ADP slide advised “Encapsulate, isolate, expose function via API,” speaking to the idea that traditional apps can gain automation-pipeline benefits via containers and container cluster management. MetLife explained its strategy for the journey from traditional apps to microservices as “wrap, tap & scrap”: wrap in a container, tap for critical data and scrap the monolithic app.
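ADP’s “encapsulate, isolate, expose function via API” advice can be sketched as a thin facade over a single function of a pretend monolith. Everything below is hypothetical; the point is that one function can be exposed and containerized without porting the whole app.

```python
# Sketch of "encapsulate, isolate, expose function via API": a thin adapter
# exposes one function of a (pretend) monolith. All names are invented.
def legacy_price_quote(sku, qty):
    """Stand-in for pricing logic buried deep in a monolithic app."""
    unit = {"A100": 25.0, "B200": 40.0}[sku]
    return unit * qty

class QuoteService:
    """Container-friendly facade: the only surface other services may call."""
    def handle(self, request):
        return {"total": legacy_price_quote(request["sku"], request["qty"])}

svc = QuoteService()
resp = svc.handle({"sku": "A100", "qty": 4})
```

Once wrapped like this, the facade can sit behind an HTTP endpoint in a container, join an automation pipeline, and eventually be “tapped and scrapped” as the monolith is retired.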


Recall that in March, Docker announced Docker Enterprise Edition, renamed the free Docker products Docker Community Edition, and rolled out certification programs and the Docker Store. DockerCon 17 added a punctuation mark to these announcements, demonstrating that Docker technology has the magnetism to bring together a growing ecosystem for the DevOps movement, not just developers. This was evident even in the wide array of logo stickers, from well-known vendors and startups alike, with offerings for Devs and Ops, enterprises and cloud natives.


IDC commented via Twitter that a “renewed focus on developers is the new GTM strategy” for vendors. One such example came from Oracle, which held a session on its Oracle Cloud Container Service (OCCS), delivered by team members from Oracle’s StackEngine acquisition. Oracle has made Docker container service templates for managing containers in Oracle Cloud available from the Docker Store.


In its January 2017 report, “Answering the 10 Biggest Questions About Containers, Microservices and Docker,” Gartner expresses both optimism and caution around containers. The report notes that containers avoid the performance tax associated with VMs, that containers can support high tenant densities on hosts, and that Devs can package app dependencies with the container. Cautions include the number of moving parts in a microservices architecture, and that advantages for monolithic apps may be limited if workloads require persistence and don’t benefit from horizontal scaling.


I left DockerCon 17 duly impressed, especially with the use cases presented by the big enterprises using container-based platforms to take workloads to production. Thanks, DockerCon 17, for an interesting and educational conference. Now go find your shades and your flip flops, because I hear next year DockerCon is coming to San Francisco!



New generations of business applications centered on the value that can be derived from data are powering digital transformation. These applications use algorithms under the covers to advise us, to help us make better decisions and, increasingly, to literally run the business. Algorithms underpin getting people from point A to point B (Uber, Kayak, Waze); they suggest the fastest way to procure anything and everything (Alibaba, Amazon), where to find a job (LinkedIn), where to find new customers (Marketo) and even where to find friends (Facebook).


Contrast these with traditional enterprise business applications, which, as Wing VC co-founder Peter Wagner puts it in his essay “Why Data-First Applications Will Come to Rule Enterprise Software,” have business logic at their core. These applications were programmed to do what the business decided they should do. Their value is based on driving efficiencies and automating best practices learned over decades of trial and error, essentially scaling and accelerating the human workforce (ERP, SCM, CRM). These powerhouse applications have delivered proven, stalwart value over time, yet their relational architecture can be a stumbling block to taking advantage of algorithms for modern analytics.


Data-value-driven analytics applications are empowering line-of-business users outside the centralized IT construct to perform complex analysis and become arm-chair (or rather office-chair) data scientists, using their newly minted scientist superpowers to uncover new opportunities and drive incremental business value to fuel digital transformation. Supplying the magic behind these superpowers are data scientists working behind the scenes with application developers to embed algorithms, along with their own expertise, into analytics applications.


Analytics applications are data- and algorithm-intensive, and as such they can consume vast computing resources. They will benefit from new(ish) types of computing architectures and technologies like in-memory computing (IMC). With IMC, the application assumes all data required for processing is available from main memory, providing the processing power to do analytics on very large datasets -- as Gartner says in its report “Invisible Advanced Analytics: Coming to a Business Application Near You,” “…at a frequency and level of granularity not possible with traditional computing architectures.”



Another technology -- a cross between analytics and transactional processing -- is hybrid transactional/analytical processing, or HTAP. HTAP is designed to eliminate the need for users to switch between transactional and analytics application types, delivering the benefits of both. In the same report, Gartner asserts that the lines between transaction processing and analytics will begin to blur as these two technologies are combined.
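As a rough, miniature illustration of the HTAP idea (using SQLite in memory as a stand-in, not an actual HTAP engine), transactional writes and an analytical aggregate can run against the same store with no ETL step in between:

```python
# Loose HTAP illustration with an in-memory SQLite database: transactional
# inserts and an analytical aggregate hit the same live rows.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (region TEXT, amount REAL)")

# Transactional side: individual order inserts.
for region, amount in [("east", 120.0), ("west", 80.0), ("east", 60.0)]:
    db.execute("INSERT INTO orders VALUES (?, ?)", (region, amount))
db.commit()

# Analytical side: aggregate over the same rows, no extract/load required.
rows = db.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
```

A true HTAP platform does this at scale with workload isolation and in-memory columnar structures, but the user-visible promise is the same: no switching between systems.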


Yet another complementary technology being incorporated in analytics applications is Hadoop, which can act as a repository to support analytics with external data, owing to its ability to ingest data without a predefined schema.
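Schema-on-read, the property that makes Hadoop useful here, can be illustrated in miniature: ingest records first, discover the fields later. This tiny sketch stands in for what a Hadoop-style repository does at far larger scale.

```python
# Schema-on-read in miniature: records are ingested with no predefined
# schema, and the set of fields is discovered at read time.
import json

raw_lines = [
    '{"device": "pump-1", "temp": 71}',
    '{"device": "pump-2", "vibration": 0.4, "temp": 69}',
]

records = [json.loads(line) for line in raw_lines]        # no schema needed
fields = sorted({key for rec in records for key in rec})  # schema on read
```

Note that the second record carries a field the first lacks; a schema-first store would have rejected or forced a migration for it, while here it simply arrives.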


This is an exciting time to be at Hitachi if you’re interested in analytics, data-value-driven applications and the compute infrastructures that run them. Hitachi group companies are into this space big time. Pentaho integrates multiple data types to create agile, extensible data sources, Hitachi Insight Group develops Internet of Things (IoT) solutions and Hitachi Data Systems offers converged and hyperconverged systems, flash storage, object storage, cloud and integrated solutions to run applications, including the most data-intensive and advanced analytics and databases from SAP, Oracle, Pentaho and many more.



I sometimes skip the preface of a book because I can’t wait to get to the core content. I’m glad I didn’t take this approach with “The Digital Transformation Playbook” by David L. Rogers, director of Columbia Business School’s executive education programs on digital business strategy and digital marketing. He gets right to the point when he writes, “Transforming in the digital age requires your business to upgrade its strategic mindset much more than its IT infrastructure.” In chapter one, he explains the five domains of digital transformation: customers, competition, data, innovation and value. Rogers says he advises centuries-old multinational firms, today’s digital titans and brand-new start-ups, and he explains that the same strategic principles for digital transformation apply to each of these organization types. He writes, “But the path to implementing these principles is different, depending on the point from which one starts.” That makes a lot of sense to me. Circling back to Rogers’s point that it takes much more than IT infrastructure: as I read further, I understand it to mean that an organization cannot achieve digital transformation goals on IT infrastructure changes alone, and that it can’t begin to make impactful changes to its IT infrastructure in support of digital transformation until it has determined its digital transformation business strategy and has a plan for addressing the five domains Rogers outlines.


Those with a digital transformation strategy and plan in hand are starting to think about changes to their IT infrastructures to support their plans. For mature businesses this likely means supporting IT infrastructure in multiple modes, keeping today’s applications going while developing the applications of tomorrow. Agility is key, and one option most are considering is cloud, be it public, private or hybrid.


If you are a cloud service provider or systems integrator, your customers seek you out based on your ability to deliver IT services that match well with their strategic direction and goals. A great many enterprises find their starting point for infrastructure, in light of digital transformation initiatives, is one where they are responsible for both traditional systems of record and systems of innovation. Greater agility is desired, yet resiliency -- even in the digital economy -- continues to be paramount. For this reason, you will want to offer cloud services backed by IT infrastructure and IT management automation that successfully merges IT agility with IT resilience. Last week Hitachi made several key announcements, one of which is the Hitachi Management Automation Strategy. Here is a link to the press release.


The Hitachi Management Automation Strategy paves the way to simplified operations and reduced risk through a composable management approach, employing automation, orchestration, analytics, and intelligent abstraction at cloud-scale. The strategy puts Hitachi customers, as well as our cloud partners, on the path to management automation software that is cloud relevant, meaning that it orchestrates infrastructure and integrates with cloud ecosystems, while automating the repetitious details required to shape infrastructure into consumable services to meet business outcomes and time-to-market metrics for digital transformation.


The strategy is newly announced, yet Hitachi has long been executing on it with flexible integrations into cloud management platforms from VMware and Microsoft, as well as into open source clouds like OpenStack, providing faster access to the right resources and improving processes and operations. These integrations play a key role in enabling easier consumption of existing and future converged, hyperconverged and storage platforms from Hitachi, while delivering legendary Hitachi resilience. The Hitachi Management Automation Strategy is underpinned by an API strategy used to satisfy application requirements using policies and service profiles. It decouples complex operations into small, focused tasks that communicate with each other via APIs transparent to administrators and end users, dramatically simplifying infrastructure consumption.


Automation-focused software from Hitachi can speed cloud deployment and time to service, as well as increase operational efficiency and lower risk when managing the Hitachi UCP family of converged and hyperconverged systems and the Hitachi VSP family of storage systems. For starters, you’ll want to take a closer look at Unified Compute Platform Director (UCP Director), which manages UCP 4000 systems and underpins the newly announced Hitachi Enterprise Cloud with VMware vRealize® Suite, as well as the newly announced Unified Compute Platform Advisor software, which manages UCP 2000 systems targeted for midtier environments. Hitachi Automation Director comes with a pre-defined service catalog and customizable templates for quicker delivery of infrastructure resources to provision storage for key business applications and databases, as well as virtualized environments. Downloadable templates, plugins and guides can be found on the Hitachi Developer Network.


Meanwhile, consider joining the world-class network of cloud service providers and systems integrators offering cloud and integration services powered by Hitachi. For our cloud partners in particular, the Hitachi Management Automation Strategy enhances what it means to be “powered by Hitachi Data Systems.”


Learn more:

Hitachi’s Cloud Service Provider Program

Cloud Service Providers in the HDS Program

Blog: Hitachi Management Automation Strategy Promotes Simplification and IT Agility for Digital Transformation



Paula Phipps' Blog

Cross-industry market disruption is inspiring CEOs to undertake digital transformation initiatives to stay competitive. Digital transformation requires a modern, agile IT strategy to address continual change, faster access to more data to fuel business insights and decision making, and faster time to market with new business models. Automation and orchestration are essential to driving IT agility to new levels through operations and process improvements. Hitachi Management Automation Strategy guides Hitachi’s direction for automation, orchestration and management to improve IT agility for modern private cloud and data centers now and into the future.


There is a trend toward composable infrastructure, where physical and virtual compute, storage and network resources are treated as services and logically pooled to avoid manually configuring resources for specific applications. The intent is to satisfy application requirements for physical and virtual infrastructure using policies and service profiles, leveraging application programming interfaces (APIs) to call services that compose the needed infrastructure. Composable management software plays a key role in a composable infrastructure solution.
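A composable request might look something like the sketch below, where a hypothetical service profile states what an application needs and a composer claims matching resources from shared pools via one call. None of these names correspond to actual Hitachi APIs; this is only the shape of the idea.

```python
# Hypothetical composable-infrastructure sketch: a service profile declares
# requirements; a composer claims matching resources from logical pools.
pools = {"cpu_cores": 128, "ram_gb": 1024, "storage_tb": 200}

def compose(profile):
    """Claim resources from the shared pools, or fail without side effects."""
    if any(pools[k] < v for k, v in profile.items()):
        raise RuntimeError("insufficient capacity")
    for k, v in profile.items():
        pools[k] -= v
    return dict(profile)

db_profile = {"cpu_cores": 16, "ram_gb": 256, "storage_tb": 20}
claimed = compose(db_profile)
```

The design point is that the application owner declares *what* is needed, and the capacity check plus allocation happen behind one API call rather than through manual per-device configuration.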


Composable infrastructure management automation and orchestration software must be cloud relevant, meaning that it orchestrates infrastructure while automating the repetitious details required to shape infrastructure into consumable services that meet business outcomes and time-to-market metrics. In short, modern infrastructure management software must meet the simplification imperative driven by cloud adoption and the motivation to consume IT as a service (ITaaS).


Hitachi Management Automation Strategy seeks to simplify operations and reduce risk through a composable management approach, employing automation, orchestration, analytics and intelligent abstraction at scale. This approach can be contrasted with traditional approaches where monolithic management software is deployed that addresses all possible use cases, regardless of the workload at hand, causing unnecessary complexity and too much “managing of the management application.” A key point of the Hitachi Management Automation strategy is to allow IT generalists, domain experts and server virtualization/container administrators – who have been asked to expand their roles beyond the compute domain – to increase their spans of control and to become more efficient and effective in managing modern IT infrastructure.


With the strategy, performance optimization and capacity planning evolve to include environment-wide analytics that inform decisions on how to optimize for specific applications. Imagine software that may eventually recommend which data to keep based on your specific business parameters, in a future world where, on a global scale, data created is predicted to dwarf available storage capacity. Decisions based on these analyses could be carried out automatically using pre-defined and dynamic criteria.
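Policy-driven retention of that sort could, very roughly, look like scoring datasets against a couple of business parameters and keeping only those above a threshold. The fields, weights and threshold below are all invented for illustration.

```python
# Invented sketch of policy-driven retention: score each dataset against
# simple business parameters and keep only those above a threshold.
def keep(dataset, min_score=0.5):
    score = (0.5 * dataset["access_rate"]                   # how often read
             + 0.5 * (1.0 if dataset["regulated"] else 0.0))  # compliance hold
    return score >= min_score

datasets = [
    {"name": "audit_logs",   "access_rate": 0.1, "regulated": True},
    {"name": "temp_scratch", "access_rate": 0.2, "regulated": False},
]
retained = [d["name"] for d in datasets if keep(d)]
```

Rarely read but regulated data survives the cut, while unregulated scratch data does not; a real system would learn and adjust such weights dynamically rather than hard-code them.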


Open automation pulls Hitachi infrastructure into cloud ecosystems, making Hitachi solutions easily consumable with flexible integrations into popular cloud management platforms from VMware, Microsoft, ServiceNow and others, as well as open source clouds like OpenStack. The Hitachi Management Automation Strategy calls for increased integration and automation so that, whatever our customers' overarching orchestration strategy, our solutions will be easily and increasingly automatically consumed, enabling faster access to the right resources and improving processes and operations.


Hitachi Management Automation Strategy is underpinned by an API strategy used to satisfy application requirements using policies and service profiles. APIs call services to compose the needed infrastructure, decoupling complex operations into small, focused tasks that communicate with each other via APIs transparent to administrators and end users, dynamically simplifying infrastructure consumption. Hitachi Management Automation Strategy stipulates ways to:

  • Drive agility to deliver business initiatives and transformation faster with a simplified, composable management approach
  • Speed time to market and increase customer satisfaction with an easy consumption model
  • Reduce risk, increase efficiency and lower operational costs through automation and IT analytics
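As a concrete illustration of the compose-via-profile idea described above, here is a minimal sketch of how a composable-management layer might match a service profile against logically pooled resources. The pool contents, profile fields and matching rule are all invented for this example and are not a real Hitachi API.

```python
# Hypothetical sketch: composing infrastructure from logical pools
# against a service profile. Names and rules are illustrative only.

POOLS = {
    "compute": [{"id": "c1", "cores": 16}, {"id": "c2", "cores": 64}],
    "storage": [{"id": "s1", "tier": "nl-sas", "tb": 100},
                {"id": "s2", "tier": "flash", "tb": 20}],
}

def compose(profile):
    """Pick the first resource in each pool that satisfies the profile.

    Numeric requirements are minimums; string requirements must match exactly.
    """
    plan = {}
    for kind, need in profile.items():
        for res in POOLS[kind]:
            if all(res.get(k) == v or (isinstance(v, int) and res.get(k, 0) >= v)
                   for k, v in need.items()):
                plan[kind] = res["id"]
                break
    return plan

# An OLTP service profile: at least 32 cores and flash storage.
oltp_profile = {"compute": {"cores": 32}, "storage": {"tier": "flash"}}
print(compose(oltp_profile))  # {'compute': 'c2', 'storage': 's2'}
```

The point of the sketch is that the administrator declares what the application needs; policy logic, not manual configuration, decides which physical and virtual resources satisfy it.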

To that end, Hitachi is leveraging the foundation we have established with Hitachi Command Suite, taking best practices learned over years of managing IT infrastructure and automating and modernizing these to meet the needs of delivering ITaaS in bi-modal application environments. Over time, Hitachi has begun releasing more automation-focused management and orchestration software that addresses infrastructure for both 2nd platform (think systems of record, traditional databases, etc.) and 3rd platform (think analytics, Hadoop, NoSQL databases, etc.) applications. These include Automation Director, UCP Director, Data Instance Director, Infrastructure Analytics Advisor and Storage Advisor.


As part of the Hitachi Management Automation Strategy, today HDS has added the new Unified Compute Platform Advisor to our growing portfolio of automation-focused software. UCP Advisor delivers simplified, smart converged management for individual and multiple UCP 2000 systems (my colleague Paul Morrissey has written a UCP Advisor blog here). Moving forward, look for Hitachi to unite our new and recently announced software under a next-generation suite of automation-focused software. Hitachi invites you to join us on this journey as we roll out the Hitachi Management Automation Strategy.


HDS Oct 11, 2016 press release

Boom!- Rolling out UCP Advisor


Converged / Compute


Hitachi Solutions for VMware

Storage Management

Storage Systems

Digital Transformation is about rethinking approaches to business in light of technological advancements surrounding the internet and data explosion. Data analytics, mobile, social, and connected devices are the vehicles for ushering in new business models. At the heart of these new business models is customer experience. Reframing the customer experience is turning long-standing industries into new playing fields.


Case in point, Bloomberg Businessweek reported this week in “Digital Banks Take on The High Street Giants,” that starting a new lender has gotten easier in the UK.  Four so-called neobanks have won licenses in the past year from UK regulators to offer branchless, mobile-phone-based banking. What do these upstarts say they have that the big banks are lacking?  They claim to have the technological edge over the banking stalwarts who must plug their shiny new mobile apps into old back-office systems.


In the Online Guide to Digital Business Transformation from I-Scoop: “The development of new competencies revolves around the capacities to be more agile, people-oriented, innovative, connected, aligned and efficient with present and future shifts in mind.” Becoming agile is a theme I see repeated in discussions surrounding Digital Transformation. Acknowledging there are a number of ways to improve agility, the one I'm thinking of is IT simplification.


The IT simplification imperative seeks to stamp out a major source of the IT delivery bottleneck: infrastructure complexity, and in particular IT administration, which is time-consuming and error-prone. In line with Digital Transformation initiatives, CIOs are keen to reduce reliance on specialized server, network and storage administration and to build closer alignment with lines of business.


Masking complex details and simplifying the use of IT through intelligent software can eliminate the risk and repetition associated with traditional IT management. According to the Gartner Market Guide for Integrated Infrastructure Systems (IIS) Cloud Management Platform (CMP), “I&O leaders will increasingly see vendors invest in dynamic resource-optimization technologies that make recommendations or take actions dynamically to improve the efficiency of the platform and resources.”


Management software from Hitachi supports REST APIs to connect and bring together data and services in a common access method. Hitachi Automation Director software lets you automate common and repetitive storage administration tasks. It packages all your best practices for popular database and VDI environments, as well as analytics and other popular applications, into simple, customizable templates for quick, accurate provisioning. Another smart piece of software, Hitachi Storage Advisor, provides management abstraction that masks underlying technological complexities, providing a guided experience in which the software recommends proper configuration, provisioning and protection based on knowledge of available resources and the specified service level.
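To give a feel for what "templates behind a REST API" means in practice, here is a sketch of assembling such a call. The endpoint, template name and parameter names are hypothetical stand-ins chosen for illustration; they are not the actual Automation Director API.

```python
# Illustrative sketch only: the endpoint, template name and parameters
# below are hypothetical, not the real Hitachi Automation Director API.
import json

def build_provision_request(template, params):
    """Assemble the pieces of a REST call that applies a service template."""
    return {
        "method": "POST",
        "url": f"https://automation.example.com/v1/templates/{template}/apply",
        "body": json.dumps({"parameters": params}),
    }

req = build_provision_request(
    "oracle-db-provisioning",  # a pre-built best-practice template (hypothetical)
    {"capacityGB": 500, "tier": "flash", "host": "db-host-01"},
)
print(req["method"], req["url"])
```

In a live environment the request would be sent with an HTTP client and the response polled for task completion; the point is simply that one templated call replaces many manual provisioning steps.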


These are two examples of how management software from Hitachi seeks to deliver on the simplification imperative. I'll share more with you in a future post. Meanwhile, keep it simple and enjoy your Digital Transformation journey.


Hitachi Automation Director

Digital Transformation and Productivity

IT Automation Plays a Central Role in Digital Transformation

For Digital Transformation, Take the Road to a Software-Defined Infrastructure

Storage Management

Storage Systems

In the Digital Transformation era, every company is a technology company. Nowadays, business types concern themselves with the technology underwriting their business initiatives because it's one of a handful of key metrics that will define success, or failure. According to MIT Sloan, key metrics for a successful digital transformation are talent, culture, strategy and leadership. And no surprise, one of the top four leadership attributes identified is technology understanding. There was a time not so long ago when technology understanding wouldn't even have been in the job description of a Forbes Global 2000 CEO, outside of those at the helms of bona fide tech companies. Theodore Kinni goes further in his recent blog titled “Every Company Is a Tech Company and Tech Is No Longer an Industry,” explaining that some companies are in the tech industry in name only and are forcing constituents of the industries they are disrupting to compete on an uneven playing field; his examples include Uber, Airbnb and Alibaba.


To even the playing field, leaders do need to understand technology, and they need to hire talent that understands technology. But more important is the need to understand the intersection of business and technology. This intersection is where digital transformation meets the cloud and the data center. At VMworld 2016 in Las Vegas, during the general session, VMware CEO Pat Gelsinger prescribed the right balance between freedom and control to deliver technology in a way that makes IT more agile and easier for businesses to consume. He suggested that this balance between freedom and control, a hybrid approach, is needed to smooth the transition to 2021, when VMware predicts 50% of all workloads will be delivered from a cloud. He went on to say that management, automation and programmability are essential to the way forward.


From my vantage point at Hitachi, I see this too. Management, automation and programmability play a major role in helping our customers become more agile in delivering IT. Hitachi has used its deep knowledge and best practices to develop provisioning “recipes,” pre-defined templates that make up a service catalog for provisioning infrastructure resources to VMware and other cloud ecosystems, including open source clouds like OpenStack, as well as for specific business applications like Oracle, SAP and Microsoft.


VMware and Hitachi have worked together for years. Hitachi adds to the VMware value proposition with our automation integrations. A key differentiator Hitachi brings is automated management of virtual and physical resources together in the VMware cloud ecosystem. Today we do this with Hitachi UCP Director in concert with VMware vSphere and VMware vCenter. At VMworld 2016, Hitachi shared a technology preview of new automation software which will extend Hitachi-specific domain information within the VMware ecosystem of tools to midmarket environments. Look for more on this new automation technology soon.


Speaking of new, have you checked out the Hitachi Data Systems Developer Network yet? This is the one-stop shop for developers and technologists looking to leverage Hitachi in their development, IT architectures and cloud plans. The Hitachi Data Systems Developer Network is a friendly place to get answers and share knowledge about HDS solutions and products, discover insights from industry experts, and make connections with others. The community is open to everyone, and anyone can join. Since this post is focused on automation and programmability, I'll point out a key hub for automation in the Hitachi Developer Network, the new Hitachi Automation Director Developer Community. If you're not yet familiar with Automation Director, you will want to be. The Community offers service templates, user guides, discussions and question-and-answer forums. It is a hub for developers to obtain downloadable templates to use within Automation Director to automate provisioning for specific applications and workloads, as well as network, server and storage. Also available on the Community are downloadable plugins used to create your own custom templates to automate for and integrate with Cloud Management Platforms like ServiceNow, Ansible, Chef and Puppet. For example, here is a link to a downloadable VMware vSphere Plugins Pack.


Hitachi has embraced the API economy, making it possible for Hitachi infrastructure to be consumed in an agile manner by any cloud, and making our automation, combined with the resiliency, data availability, management and protection for which Hitachi is legendary, available to all. Look for more to come around Hitachi and automation this fall.


Hitachi Solutions for VMware

Hitachi Developer Network

Hitachi Automation Director


Converged Systems

Storage Management


Storage Systems

It’s a happy new year for Hitachi as our own Manju Ramanathpura @itsthenetwork is elected to the 2016 OpenStack Board of Directors. I ran into Manju at Café Hitachi and over a cup of tea (Earl Grey for me, of course), I had a chance to congratulate him and ask him a few questions about his election to the Board and his involvement with OpenStack.

Manju, how are you involved with the OpenStack community?

I have been playing the role of evangelist, helping to promote OpenStack and to position OpenStack for success. Specifically, I have arranged or been part of about 8-10 panel discussions focused on the benefits of OpenStack for enterprise data centers. I have also identified gaps in OpenStack that need to be addressed by the community. I was the first member from Hitachi to join the OpenStack Enterprise Working Group and was part of the original team to define the objectives for this working group, created to improve the enterprise footprint of OpenStack. I have also written several blog posts and participated in Twitter chats to help promote OpenStack and identify areas where OpenStack can continue to improve.

What do you do for Hitachi around OpenStack?

I have been the ambassador for OpenStack inside Hitachi and instrumental in defining and driving OpenStack strategy. I also engage with our strategic customers to help them achieve their goals by working jointly on Hitachi and OpenStack initiatives. I have also socialized the OpenStack Enterprise Working Group inside Hitachi. Several other Hitachi members are currently or have been part of this working group at various times, including Albert Tan @atan_work, Greg Knieriemen @Knieriemen, Paula Phipps @paulaaphipps, Norma Rangel and Steve Garone @sgarone. For the past two-plus years, I have represented Hitachi at OpenStack Foundation Board Meetings to speak on behalf of Foundation Gold members and Hitachi to position OpenStack for success.

How is Hitachi contributing to the OpenStack community from a technology perspective?

Hitachi continues to grow its code contributions over time. Here is a Stackalytics chart showing Hitachi’s contributions as of Jan 2016.



Folks, that sums up my chat over a cup of tea with Manju. Congratulations Manju on your appointment to the OpenStack Board of Directors!

As for me, I’m excited to be actively participating in the OpenStack Enterprise Working Group. I’m pleased to be helping to mine data from Mid-cycle meet ups last Summer and from the OpenStack Tokyo Summit last Fall to help the group identify top use cases to write up and evangelize to foster OpenStack enterprise adoption. I’m also gearing up for the OpenStack Austin Summit this Spring. I look forward to serving as a Track Chair of the “Enterprise IT Strategies" track to help in my own small way to form the final agenda for Austin.

Want to learn more about Hitachi and OpenStack and open source?

I spent the last week in October in Tokyo at the OpenStack Summit. I am an OpenStack Summit Noobie. For the uninitiated, this is slang for someone new to the OpenStack Summit events. The number of smart people per capita at the event was high and the aura surrounding them, electric. I found myself dashing to the general sessions where open source Gurus Mark Collier @sparkycollier, OpenStack co-founder, and Jonathan Bryce @jbryce, OpenStack Executive Director, pumped up the crowd. OpenStack super users like Yahoo! Japan impressed with their deployment success stories.


From there, I dashed to the Marketplace with its sea of exhibitors. I learned that at the OpenStack Summit, it's all about being, well, open. Exhibitors touted open source software with hooks into OpenStack in some way, shape or form, with some offering solutions running the OpenStack software itself, while others offered complementary and compatible infrastructure. At the Hitachi booth, we talked about how our storage, server and converged platforms take advantage of drivers and APIs for delivering services via an OpenStack cloud. Conference goers got a preview of automation technology for OpenStack cloud infrastructure using Hitachi Automation Director software. They also learned that Hitachi partners closely with big open source behemoths to create cloud solutions; that Pentaho, a Hitachi company, open sources its software; and that Hitachi is an OpenStack Gold Member.


The collective knowledge in the sessions is staggering and impressive, and it's coming from all directions: corporate (companies that use OpenStack to help run their businesses), vendor and individual members alike. Some of the sessions I attended were about how to manage multiple hypervisors and bare metal workloads together and how to achieve 99.999% availability in an OpenStack environment. Sessions were mainly technical, but business types are definitely interested. I attended a session that offered a VC take on OpenStack for the Asia Pacific region, and another session delivered a media perspective on OpenStack. Industry analysts and other pundits at the event predict there will be fewer OpenStack distributions down the road; it will simply be too much for the ecosystem to leverage the many that exist today. From my vantage point, OpenStack has technical legs and seems poised for an influx of business people looking to develop more ways to monetize it. Here is a link to the OpenStack Summit Tokyo 2015 Recap.


I met face-to-face with some of the interesting and smart people I’ve been working with on the OpenStack Enterprise Working Group. This group’s mission is to identify and remove barriers to enterprise adoption and deployment of OpenStack. I recently helped research top challenges around resource management, gathering input from key technologists.  Back on the home front in Silicon Valley, I plan to stay engaged and let my OpenStack geek flag fly.


#WeAreOpenStack  @paulaaphipps   Hitachi Community | Innovation Center | OpenSource

Software-defined infrastructure is the foundation for success in IT-as-a-Service (ITaaS) deployments, or for that matter, any modern IT environment demanding extreme flexibility. The key to "no-hype" success is to be software-defined, yet application-led. You'll need an infrastructure that can address the unique needs of different application environments, whether they are long-standing systems of record or greenfield apps born in the cloud and on the web. Deploying the right combination of software depending on the unique needs of your business applications is critical. Join me @paulaaphipps and my colleague @knieriemen for a discussion titled "Software-Defined Infrastructure: The Journey from Hype to Happening." RECORDING at this link. Here is a sample of what was discussed at the session...


The prescription for software-defined success starts with these three tenets:


  • Abstract physical infrastructure to become more agile. Tear down physical boundaries and embrace logical resource groupings. The result will be easier change management, better return on IT assets and flexibility to take advantage of new business opportunities faster.

  • Access to multiple data sources from a common software-defined infrastructure is the foundation upon which analysis and insight are drawn; the kind of insight that businesses can use to get and stay ahead of the competition.

  • Automate operations to simplify your environment. Increasingly, IT generalists or virtualization administrators are expected to take on the task of managing more of the infrastructure landscape. Automation removes risk and repetition, keeping these high-value experts from spending their time on low-value details, as well as removing the human error factor.



There’s much more to discuss... RECORDING of Tue, Sep 15, 2015 session at this link

In a recent post I blogged about Software-Defined Infrastructure and suggested there are three major themes when designing a Software-Defined Infrastructure: Automation for simplicity, Access to more data for insight and Abstraction for agility. Today, as Hitachi announces advancements in Software-Defined infrastructure, I’m blogging about infrastructure automation.


In a new IDC White Paper, sponsored by Hitachi Data Systems, Research Director, Storage, Eric Burgener writes, “Increased automation is required to enable administrators to simultaneously deal with data growth, SLOs, and continuing just to "keep the lights on" in a massively scalable data center environment that is becoming increasingly heterogeneous.”*


I have a cake analogy that I like to use. I’m an expert baker (ok, maybe pretty good is a better description) and every year at the holidays, I bake my family their favorite cake from scratch; they love it, and I enjoy doing it. But most of the time, I’m busy with life’s responsibilities. So, when my family craves cake, I go to the grocery store and choose a cake mix and a container of icing in the flavor we want (the right kit for the right “application”). Then, there are those situations where it makes the most sense to go online and simply order a cake from the bakery and I’m done (i.e. self-service).


Automation does two things well: it reduces risk and repetition. The business value is in saving both time and money, compelling to any business. It reduces risk by eliminating the opportunity for human error. Even the most fastidious of administrators are human (although given the criticality of their role, super-human might be a better description), and given the number of hours they spend managing infrastructure, the occasional errant command is inevitable. According to a survey by 451 Research, 39% of storage outages are caused by human error. If automation can reduce this statistic, savings can be realized. A 2013 survey from the Ponemon Institute puts the average cost of data center downtime across industries at approximately $7,900 per minute, a 41 percent increase from $5,600 in 2010. You can do the math; the costs are staggering.
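Doing that math on the Ponemon figure makes the stakes vivid. The outage durations below are assumptions chosen purely for illustration, not survey data.

```python
# Back-of-the-envelope downtime costs at $7,900 per minute (Ponemon, 2013).
# The outage durations below are illustrative assumptions only.
COST_PER_MINUTE = 7_900

def outage_cost(minutes):
    """Total downtime cost for an outage of the given length."""
    return minutes * COST_PER_MINUTE

for minutes in (10, 60, 240):
    print(f"{minutes:>4}-minute outage: ${outage_cost(minutes):,}")
```

A single hour of downtime at that rate comes to $474,000, which is why eliminating even a fraction of human-error outages pays for itself quickly.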


The same 451 Research survey found that 29% of time is spent on lower-value administration and provisioning. Repetition can be all but eliminated using time-based or event-based automation, reducing time spent on lower-value infrastructure details and tedious tasks.


Infrastructure automation can be segmented into several categories. Guided workflows provide recommendations based on known best practices. Policy-based automation uses settings and scripts to react to events or anomalies. APIs (Application Programming Interfaces) allow programmatic control over your environment. Pre-defined templates for provisioning based on specific workload types are yet another form of automation. Read this post from my colleague Richard Jew to learn how Hitachi is delivering a service catalog for storage provisioning with Hitachi Automation Director.
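The policy-based category can be pictured as a lookup from events to pre-approved actions. This sketch is generic and invented for illustration; real policy engines express the same idea with richer conditions and their own syntax.

```python
# Minimal sketch of policy-based automation: pre-defined policies react to
# infrastructure events automatically. Event and action names are invented.

POLICIES = {
    "pool_capacity_above_80pct": "expand_pool",
    "latency_above_slo": "migrate_to_flash_tier",
}

def handle_event(event, actions_log):
    """Look up the event in the policy table and record the triggered action."""
    action = POLICIES.get(event)
    if action:
        actions_log.append(action)
    return action

log = []
handle_event("pool_capacity_above_80pct", log)
handle_event("unknown_event", log)  # no matching policy, so no action taken
print(log)  # ['expand_pool']
```

The value is that the reaction happens the same way every time, at any hour, with no errant keystrokes.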


My colleague @paulpmeehan has written an informative blog about Software-Defined Infrastructure, including VMware VVols, which offer simplified VM-centric storage and policy-based management. And finally, the ultimate in automation is to outsource infrastructure management altogether.


Automation can be a boon for those without a deep knowledge of the underlying infrastructure details. Other candidates include IT generalists, who are increasingly being asked to manage across server, storage and network, as well as virtualization admins who are stretching to manage across the entire range of infrastructure resources.


Expert infrastructure administrators with deep domain expertise appreciate automation because they understand the monumental effort that goes into infrastructure design, development and maintenance, making the decision to offload management details an easy one.


Hitachi has offerings for infrastructure automation that fit each of the categories I’ve described, so explore today’s announcements via @bmadaio's blog and put some automation in your life. It’ll be a piece of cake.


*Source: IDC White Paper, sponsored by HDS, Hitachi Data Systems Introduces Its Software-Defined Infrastructure Play, April 2015



Software-defined anything and everything has been a hot topic these past couple of years. Whenever I can spare a few cycles, I brew a hot cup of Earl Grey tea and set out to catch up on the software-defined news. After digesting all that I've heard and read, I've concluded that beyond the typical high-tech hype, there is something fundamentally right about an infrastructure that uses software as its secret sauce to change faster and keep up with the staggering pace of business.


Why is it that IT infrastructure needs to be so agile and flexible?  Business isn’t a one-size-fits-all proposition – think finance, insurance, telecommunications, retail, education, government – these all have unique demands. Likewise, the IT applications and workloads that power these businesses aren’t completely alike either and they’re not static. Applications and data continually change and grow according to the latest business initiatives.


In my research, the giant webscale companies are often held up as prime examples of software-defined infrastructure. They’ve exhibited some pretty impressive results using stripped down architectures where software defines and then redefines infrastructure personality. Everyone in IT infrastructure can learn something from what these companies are doing. So does that mean all infrastructure should emulate the big webscale do-it-yourself giants?


Well in theory, yes; but in approach, probably not exactly. Something I hear much less about is the differences between giant webscale companies and enterprises. Analysts have said that webscale companies typically have 1 to 4 major workloads. Large enterprises may run more than 100 applications, both legacy and next generation. Webscale companies have invested in deep vertical integration and in-house expertise in infrastructure and have massive economies of scale. Enterprises need to focus first and foremost on their core business, have smaller economies of scale and most are just getting started with DevOps.


What I like about working at Hitachi is we recognize these fundamental differences and are prescribing an approach that is software-defined, yet application-led. This means that first enterprises need to think about the application being deployed and how it helps run their business; next, how software can make it run better, easier and faster. There are three major themes when designing a software-defined infrastructure:  Automation for simplicity, Access to more data for insight and Abstraction for greater agility. This might include automating and virtualizing legacy applications or using new scale-out architectures for greenfield applications where the intelligence is completely in the software. I anticipate Hitachi will deliver on these three “As” by putting its own twist on webscale-style infrastructure and by extending its more traditional infrastructure solutions through software.

Right now, my Earl Grey is cold and it’s time to pour a fresh cup and get back to work. But stay tuned, because something is definitely brewing here at Hitachi and announcements that fill Hitachi’s prescription for software-defined infrastructure are expected soon.   --Paula