Enterprises make copies of critical data sets for many reasons: a copy for backup and fast local recovery; a copy in one or two other locations for business continuity and disaster recovery; copies for the test and development teams; copies for finance and legal; and so on.
If these copies aren’t automated, controlled and secure, they can become costly and a serious liability.
Let’s start with the basics and walk through an example of how Hitachi Vantara, through the use of Hitachi Data Instance Director (HDID) and the range of technologies that it orchestrates, can help organizations automatically create, refresh and expire copy data.
Our main data center is in New York. In it, we have a production application server, let’s say it’s an Oracle database environment. The Oracle data is stored on enterprise-class storage – in this case a Hitachi Virtual Storage Platform (VSP) F series all-flash array.
Now we need to make a periodic copy of the data for local backup and recovery. The old method of taking an incremental backup each night and a full backup on the weekend no longer works. Backups take too long, often many hours to complete, and they leave too much data at risk: a nightly backup means a recovery point objective (RPO) of 24 hours, so as much as a full day’s worth of data is exposed to loss. Neither of these is an acceptable service level for critical applications and data.
So instead, we’ll take an hourly application-consistent snapshot using Hitachi Thin Image, which is part of the storage system’s Storage Virtualization Operating System (SVOS). The snapshot can be created as frequently as needed, but once an hour already improves your RPO and reduces the amount of data at risk by more than 95%.
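The RPO improvement is simple arithmetic: moving from a nightly backup (24-hour RPO) to hourly snapshots (1-hour RPO) cuts the maximum data exposure by roughly 96%. A quick sketch:

```python
# Maximum data at risk is bounded by the recovery point objective (RPO):
# the time elapsed since the last good copy.
nightly_rpo_hours = 24   # one backup per day
hourly_rpo_hours = 1     # one snapshot per hour

reduction = (nightly_rpo_hours - hourly_rpo_hours) / nightly_rpo_hours
print(f"Data-at-risk reduction: {reduction:.1%}")  # → 95.8%
```

More frequent snapshots shrink the exposure window further, at the cost of more snapshot capacity and management overhead.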
Next, we also have a data center in Boston, so we set up replication to another VSP there to enable business continuity. Since the latency between the sites is low, we can use active-passive synchronous replication (Hitachi TrueCopy), guaranteeing zero data loss. Or, we can support an active-active configuration to enable always-on operations, using the VSP’s global-active device storage clustering feature.
We can also add a third site, let’s say London, connected by asynchronous replication using Hitachi Universal Replicator, to protect against a major regional outage such as the power blackout that struck the northeastern United States in 2003. Parts of the region were without power for two days or more, and businesses without a disaster recovery site outside the impact zone were severely affected.
Flexible three-data-center topologies are supported, including cascade and multi-target. An additional feature, delta resync, keeps the third site current even when one of the two synchronized sites goes offline.
Now that our data is protected against local failures, site outages and regional disasters, we want to create additional copies for secondary purposes, such as dev/test, finance and long-term backup.
We can create space-efficient virtual copies with our snapshot technology, or full-copy clones using Hitachi ShadowImage. Either way, they are created almost instantaneously with no impact on the production systems. When needed, a copy is mounted to a proxy server and made available to the user.
Performed individually, these copy operations can require multiple tools, complex scripting and manual processes. With Hitachi Data Instance Director, we offer a way to automate and orchestrate all of it, combining these steps into a single policy-based workflow that is easy to set up and manage.
We can then take this automation to the next level, by creating service-level based policy profiles. For example, think of gold, silver and bronze services, which are selected based on business needs for the particular application. These profiles can determine the frequency of protection, the tier of storage to use, user access rights, retention, etc.
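As a rough illustration of what such profiles could contain, the sketch below maps gold, silver and bronze tiers to protection parameters. The profile names echo the text, but every attribute name and value here is hypothetical, not HDID's actual configuration schema:

```python
# Hypothetical service-level profiles; attribute names and values are
# illustrative only, not taken from HDID's real policy schema.
POLICY_PROFILES = {
    "gold":   {"snapshot_interval_hours": 1,  "replication": "synchronous",
               "storage_tier": "all-flash", "retention_days": 90},
    "silver": {"snapshot_interval_hours": 4,  "replication": "asynchronous",
               "storage_tier": "hybrid",    "retention_days": 30},
    "bronze": {"snapshot_interval_hours": 24, "replication": None,
               "storage_tier": "capacity",  "retention_days": 7},
}

def profile_for(service_tier: str) -> dict:
    """Look up the protection profile selected for an application."""
    return POLICY_PROFILES[service_tier]

print(profile_for("gold")["snapshot_interval_hours"])  # → 1
```

The point of the profile abstraction is that application owners pick a business tier, and the frequency, storage tier and retention details follow automatically.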
Everything we’ve talked about can be easily tied into the Hitachi approach to data center modernization. For example, as Hitachi Automation Director (HAD) is provisioning the resources needed to spin up a new application workload, it can automatically provision the correct tier of data protection services at the same time. The communication between HAD and HDID is via a robust RESTful API.
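The source only states that HAD and HDID communicate over a RESTful API; the actual endpoints and payloads are product-specific and not documented here. The sketch below merely illustrates the general pattern of an orchestrator assembling such a provisioning request, with an entirely hypothetical path and field names:

```python
import json

# Hypothetical request an orchestrator (such as HAD) might assemble when
# provisioning data protection for a new workload. The path and JSON
# fields are illustrative assumptions, not HDID's real REST API.
def build_protection_request(app_name: str, service_tier: str) -> dict:
    return {
        "method": "POST",
        "path": "/api/v1/policies",   # hypothetical endpoint
        "body": {
            "application": app_name,
            "profile": service_tier,  # e.g. gold / silver / bronze
            "autoActivate": True,
        },
    }

req = build_protection_request("oracle-prod-ny", "gold")
print(json.dumps(req["body"]))
```

In a real integration, a request like this would be sent over HTTPS with authentication; the value of the pattern is that protection is provisioned in the same automated workflow that stands up the workload itself.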
In the near future, Hitachi Infrastructure Analytics Advisor and Hitachi Unified Compute Platform Advisor will be able to monitor HDID and recommend opportunities to improve copy management processes.
To learn more about Hitachi Vantara's approach to modernizing data center operations, check out these blogs:
- Transforming Data Center Focus from Infrastructure to Information by Hu Yoshida
- Data Center Modernization by Nathan Moffitt
- Infrastructure Agility for Data Center Modernization by Mark Adams
- Bundles are Better by Summer Matheson
Rich Vining is a Sr. WW Product Marketing Manager for Data Protection and Governance Solutions at Hitachi Vantara and has been publishing his thoughts on data storage and data management since the mid-1990s. The contents of this blog are his own.