October 12, 2018
For the past few years, I have been expounding on the value of “modern data protection”, where “modern” refers to using storage-assisted snapshots and replication to shrink backup windows and greatly improve recovery point objectives (RPOs) and recovery time objectives (RTOs).
It’s time now to start thinking beyond simply making and restoring copies of data more efficiently and to start thinking about how to make better use of those copies, as well as reducing the total number of copies being made and gaining better control of the copies. This is called “copy data management” – at least that’s our definition.
The need to create a backup copy of data for local recovery and another for remote disaster recovery is a given. But those copies do nothing for the organization other than take up storage space unless or until something bad happens. Depending on your backup retention policies and the number of backups you retain, the costs of the backup storage can be pretty staggering, especially if you are storing multiple full backups and not using data deduplication – which itself can be pretty expensive. If you're going to create, store and manage copies for recovery, why not make use of them for other business processes while they're sitting there consuming your storage?
But backup copies are not the only copies we create and keep. Many secondary business functions need copies of production data to do their jobs effectively. Application development, test and QA, legal and compliance, finance, and marketing are all examples of functions that need up-to-date copies of data. In fact, analyst firm IDC recently published a report in which senior analyst Phil Goodwin estimates that organizations maintain, on average, 13 copies of their data. Think about that: when you add 1TB of new production data, you need to provision 13TB of new storage for the copies. The report goes on to estimate that the worldwide cost of storage for copy data will approach US$60B in 2020.
Part of the problem is that once a copy is requested and created, control and visibility of that copy often disappear. You don’t know which copies still exist, where they are, who owns or can access them, or what they are being used for. This can be frightening when you start thinking about the new wave of data privacy regulations, starting with the European Union’s General Data Protection Regulation (GDPR).
Copy data management promises not only to improve the data protection process but also to provide value to the organization even without a looming disaster. It can reduce storage costs by presenting virtual copies of data to test/dev, analytics and reporting. It can also ensure those copies are refreshed, so those use cases are always working with the latest data.
I will be diving deeper into this topic in a webcast hosted by Storage Switzerland’s George Crump on October 24th at 1:00pm (1300 hr.) New York time. If this topic is of interest to you, please register here.
Rich Vining is a Sr. WW Product Marketing Manager for Data Protection and Governance Solutions at Hitachi Vantara and has been publishing his thoughts on data storage and data management since the mid-1990s. The contents of this blog are his own.