This blog series examines the benefits of aligning data protection processes to business requirements, as opposed to asking the business to accept the limitations of the current backup and recovery infrastructure. This first installment covers the challenges that such an approach can overcome.
Over time, the information technology landscape has become exceedingly complex. Each new breakthrough ushers in not only new opportunities to accelerate your business, but also new layers of systems, applications, processes, network protocols and skills. Sometimes these new solutions replace aging legacy infrastructure, but often they simply add more complexity.
"Data Protection is Complicated"
With complexity comes cost and risk. This is especially true when considering how to protect, retain and recover the data that is created, stored, manipulated and exploited by the new systems.
There are many examples of this phenomenon in the history of computing: the transition from mainframe to client/server; virtualized servers; remote and branch offices; mobile workers; big data and analytics; in-memory databases; converged and hyper-converged infrastructure; and private, public and hybrid clouds. Each of these advances, and many more, has created the need for new data protection solutions, whether that means installing a new agent or module for your existing software, or adopting a totally new approach. Either way, it adds complexity.
Your legacy backup solutions are usually not capable of handling the newest technologies. If you are using a solution from one of the big vendors in the backup market, it will probably be years before they develop, test and release a solution for your new system. This opens the door for start-up vendors to enter your data center. They are in business solely to solve your new, specific need, and they often do it well. But they just add one more thing to find, acquire, learn, manage, monitor and report on, maintain, upgrade and eventually decommission.
It gets worse. The new data protection “point solutions” don’t integrate with your existing backup tools, and they often cover only one of the aspects of the data management paradigm:
- Operational recovery.
- Disaster recovery.
- Long-term recovery.
At some point, you will step back and look at all the different tools you’ve deployed to keep all of your data safe, and when you do, you will likely say something like, “This is CRAZY!” Some data sets are protected by multiple tools. Each creates its own copy of the data, with its own retention and ownership policies. Who has control and knowledge of all this copy data? Does any of it meet corporate governance and security policies?
Better questions to ask are: When something bad happens and you experience a data loss, will the right person, with the right training, be able to log into the right system and find and retrieve the right data? Will they be able to restore it to the right place, do it within prescribed service level agreements, and not break anything else along the way?
If you cannot answer these questions with an emphatic “Yes,” you should probably take action before something bad does happen.
The first task is to determine what the business really needs, and this entails balancing the service level requirements for each application or data set against the cost of meeting those objectives. It is unfortunately true that higher levels of protection and recovery usually cost more, so the goal should be to apply the right tool to each requirement, but in a way that avoids the “point solution” complexity outlined above.
A good step is to break up your data into categories, such as critical, important and standard, and then map out service level objectives for each category and each recovery scenario. Below is an example.
In this diagram, RPO refers to Recovery Point Objective, a measure of how frequently you perform the protection operation. Traditional nightly backups yield an RPO of 24 hours, meaning that as much as 24 hours' worth of new data is at risk of loss. RTO refers to Recovery Time Objective, which measures how fast the affected system, application or data is returned to service following an event.
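To make the category-to-objective mapping concrete, it can be sketched in code. This is a minimal illustration only: the category names and the specific RPO/RTO values below are assumptions for the sake of example, not recommended targets.

```python
from datetime import timedelta

# Illustrative service-level categories mapped to recovery objectives.
# The values here are hypothetical examples, not recommendations.
SERVICE_LEVELS = {
    "critical":  {"rpo": timedelta(minutes=15), "rto": timedelta(hours=1)},
    "important": {"rpo": timedelta(hours=4),    "rto": timedelta(hours=8)},
    "standard":  {"rpo": timedelta(hours=24),   "rto": timedelta(days=2)},
}

def max_data_at_risk(category: str) -> timedelta:
    """The RPO bounds potential data loss: at worst, everything created
    since the last protection operation is at risk."""
    return SERVICE_LEVELS[category]["rpo"]

def meets_rpo(category: str, backup_interval: timedelta) -> bool:
    """A protection schedule meets an RPO only if it runs at least as
    often as the objective requires."""
    return backup_interval <= SERVICE_LEVELS[category]["rpo"]
```

Under these example values, a traditional nightly backup (a 24-hour interval) would satisfy the "standard" objective but fall far short of the "critical" one, which is exactly why objectives should drive the choice of protection technology rather than the other way around.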
You can see that the objectives can vary widely (minutes to days) depending on the value of the data being protected. This can have a profound effect on the technology choices you might make to meet these objectives.
Hitachi Data Systems not only offers technologies to meet all of these challenges, but also an orchestration layer that integrates them all into a single, easy-to-manage solution. More on that in future blogs, or you can visit hds.com/go/protect to learn more now. The next installment of this blog will focus on operational recovery.