Creating and managing copies of production data for recovery is a fundamental responsibility of every IT department. But as organizations pursue digital transformation and data center modernization to stay competitive, the methods used to create local and remote recovery copies must be modernized as well.
The reasons are clear. There is too much data and not enough time to back it up; indeed, critical applications cannot tolerate any backup window at all. The diversity of the IT environment creates complex recovery service-level requirements: each application or data source demands its own recovery point and recovery time objectives (RPO and RTO). And the range of threats to protect against keeps widening, from simple human error and system failures to ransomware attacks and local or large-scale regional events.
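To make the RPO idea concrete, here is a minimal sketch, with hypothetical service levels and schedules: in the worst case, a failure strikes just before the next copy completes, so the achievable RPO equals the interval between copies.

```python
from datetime import timedelta

def meets_rpo(copy_interval: timedelta, rpo_objective: timedelta) -> bool:
    """Worst case, failure occurs just before the next copy runs,
    so the data-loss window equals the interval between copies."""
    return copy_interval <= rpo_objective

# Hypothetical objectives: a critical database needs RPO <= 15 minutes,
# while a file share tolerates RPO <= 24 hours.
db_objective = timedelta(minutes=15)
share_objective = timedelta(hours=24)

# Snapshots every 15 minutes satisfy the database objective...
assert meets_rpo(timedelta(minutes=15), db_objective)
# ...but a nightly backup alone does not.
assert not meets_rpo(timedelta(hours=24), db_objective)
# A nightly backup is enough for the file share.
assert meets_rpo(timedelta(hours=24), share_objective)
```

This is why a single backup schedule rarely fits every workload: each data set's objective dictates which copy technology and cadence it needs.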
To navigate this maze of requirements, you may have enlisted multiple point solutions: one for remote disaster recovery and another for high availability; one for file systems and another for VMware; and one for each database application. That only adds complexity, cost and risk. Unfortunately, no single tool can meet all the recovery requirements of a modern enterprise. There is, however, a single software solution that can orchestrate the different tools needed to meet the service-level requirements of each application and data set.
That orchestration engine is Hitachi Data Instance Director (HDID). HDID lets you combine data copy and movement technologies, choosing the right one for each job, in a single end-to-end, policy-based workflow that covers all copy data management tasks for a given data source. Create fast, frequent, application-consistent snapshots that meet the most rigorous RPO and RTO targets. Add a nightly incremental-forever backup for longer-term retention on your choice of storage, including the Hitachi Content Platform object store.
Automate two-site high availability with Hitachi’s global-active device storage clustering feature, and long-distance remote disaster recovery with Hitachi Universal Replicator. All of this can be configured in HDID’s whiteboard-like interface or through its REST-based API, without the complex replication management files and custom scripting you may be accustomed to.
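To illustrate what a policy-driven workflow might look like as data, here is a hypothetical sketch that combines the operations described above. The field names, schema, and endpoint are illustrative assumptions for this post, not HDID’s actual REST API.

```python
import json

# Hypothetical policy document; keys and values are illustrative only
# and do not reflect HDID's real REST schema.
policy = {
    "name": "oracle-prod-protection",
    "source": {"type": "oracle", "host": "db01.example.com"},
    "operations": [
        # Frequent application-consistent snapshots for tight RPO/RTO.
        {"type": "snapshot", "consistency": "application",
         "schedule": "every 15 minutes", "retention": "48 hours"},
        # Nightly incremental-forever backup for longer-term retention.
        {"type": "backup", "mode": "incremental-forever",
         "schedule": "nightly", "target": "hcp-object-store",
         "retention": "90 days"},
        # Continuous replication to a second site for availability/DR.
        {"type": "replicate", "technology": "global-active device",
         "target_site": "site-b"},
    ],
}

# In practice a document like this would be POSTed to the orchestration
# engine's REST endpoint by an HTTP client; here we just serialize it.
body = json.dumps(policy, indent=2)
assert all(op["type"] in {"snapshot", "backup", "replicate"}
           for op in policy["operations"])
```

The point of the sketch is the shape of the workflow: one policy per data source, listing every copy operation and its schedule, rather than separate tools and scripts for each.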
HDID can do a lot more than just simplify your ability to recover from a data disaster. It can also automate the creation and refresh of physical or virtual copies for secondary business functions, and support data governance and analytics functions using copy data. Follow this blog to learn more.
Rich Vining is a Sr. WW Product Marketing Manager for Data Protection and Copy Data Management Solutions at Hitachi Vantara and has been publishing his thoughts on data storage and data management since the mid-1990s. The contents of this blog are his own.