Originally posted: 19 September 2016
Updated: 16 May 2017
Today’s digital economy requires businesses of all types to re-architect and reinvent their information technology systems and services to support the always-on, mobile-first nature of their customers and workforces. This is true not only for new functions, such as DevOps, but also for long-standing back-office systems, such as the many applications that rely on Oracle databases.
This transformation is all about agility (flexibility and speed), which is enabled by converged and hyper-converged infrastructure, all-flash storage arrays, Big Data analytics and even cognitive computing. As businesses embark on their unique digital transformation journey, one question isn’t asked often enough: How are you going to protect that data?
Actually, this is a question that should be asked even in the absence of a huge transformation effort. Traditional approaches to database backup and recovery simply cannot keep up with the dramatic increase in the amount of data and the size of the database files that need to be protected, and they do not protect data frequently enough to limit the risk of significant data loss. And let’s not even talk about how long it takes to recover and restore applications following a data loss event.
You may need to recover from a database corruption, most often caused by human error but increasingly by malicious agents. Or your site may be taken down by a local or regional disaster. Or anything in between. As I write this, we’re in the middle of hurricane season on the southeast coast of the U.S., large parts of Texas have seen record flooding recently, California is experiencing another hot, fiery summer, and there have been any number of major earthquakes across the globe in the past few years.
Another challenge to always-on database availability has been copy data management. It seems everyone, from Engineering to Finance to Legal, wants an up-to-date copy of your data. How much does making those copies, and refreshing them periodically, impact the primary production users of those applications?
Wouldn’t it be nice if you could create copies of your Oracle databases within seconds, and without impacting application availability and performance? And protect them much more frequently to reduce the amount of new data at risk of total loss? And restore them within seconds or a few minutes, no matter how large they are?
When you modernize your data protection capabilities, you can deliver the performance and data availability service levels your executives expect from their digital transformation plans, including:
- No need for backup windows: eliminate the downtime associated with backup
- More frequent protection: reduce the amount of data lost and needing to be recreated
- Near-instant recovery: restore from failures or other events in seconds or minutes, instead of many hours or days
- Incremental-forever data capture: reduce the costs of storing redundant full backup copies
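As a back-of-the-envelope illustration of that last point, here is a short sketch comparing the storage consumed by a traditional weekly-full-plus-daily-incremental schedule against an incremental-forever scheme. The database size, change rate, and retention window are hypothetical numbers chosen for illustration, not Hitachi figures:

```python
# Hypothetical illustration: storage consumed over four weeks protecting a
# 10 TB database, comparing weekly fulls plus daily incrementals against an
# incremental-forever scheme (one initial full, then only changed blocks).
DB_SIZE_TB = 10.0
DAILY_CHANGE_RATE = 0.05   # assume 5% of blocks change per day
WEEKS = 4

# Traditional: a full copy every week, plus an incremental on the other 6 days.
traditional = WEEKS * (DB_SIZE_TB + 6 * DB_SIZE_TB * DAILY_CHANGE_RATE)

# Incremental-forever: one initial full, then only the daily changed blocks.
incremental_forever = DB_SIZE_TB + (WEEKS * 7 - 1) * DB_SIZE_TB * DAILY_CHANGE_RATE

print(f"traditional:         {traditional:.1f} TB")   # 52.0 TB
print(f"incremental-forever: {incremental_forever:.1f} TB")  # 23.5 TB
```

Even with these modest assumptions, the incremental-forever approach stores less than half as much data, and the gap widens as the retention window grows.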
Across the board, industry analysts have been saying that the way to achieve these goals is through modern, storage-based snapshot and replication technologies. This is true, and according to Gartner, as many as 20% of all enterprises will have adopted these technologies as their primary method of data protection by the end of this year, up from 12% a couple of years ago. However, snapshots and replication are difficult to set up and manage, especially as new applications and databases are brought on-line, and require special attention to enable application-consistent recoveries.
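The reason storage-based snapshots can be created in seconds regardless of database size is that most implementations use a copy-on-write scheme: taking the snapshot records essentially nothing, and old blocks are preserved only when they are later overwritten. The toy sketch below illustrates the idea; real arrays do this at the block layer in firmware, not in Python, and the class and method names here are invented for illustration:

```python
# Minimal copy-on-write snapshot sketch (illustrative only).
class Volume:
    def __init__(self, blocks):
        self.blocks = dict(enumerate(blocks))  # block number -> data
        self.snapshots = []

    def snapshot(self):
        # Creating a snapshot stores no data at all -- just an empty
        # overlay -- which is why it completes in O(1) time regardless
        # of how large the volume is.
        snap = {}
        self.snapshots.append(snap)
        return snap

    def write(self, block_no, data):
        # Before overwriting, preserve the old block in any snapshot
        # that has not yet saved its own copy (copy-on-write).
        for snap in self.snapshots:
            if block_no not in snap:
                snap[block_no] = self.blocks[block_no]
        self.blocks[block_no] = data

    def read_snapshot(self, snap, block_no):
        # A snapshot read falls back to the live volume for blocks
        # that were never overwritten after the snapshot was taken.
        return snap.get(block_no, self.blocks[block_no])

vol = Volume(["a", "b", "c"])
snap = vol.snapshot()           # instant, no data copied
vol.write(1, "B")               # live volume changes...
print(vol.read_snapshot(snap, 1))  # ...but the snapshot still sees "b"
```

This is also why snapshots alone are not backups: they share blocks with the live volume, which is where replication to a second system comes in.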
Hitachi Data Systems offers best-of-breed local, in-system snapshot and clone technologies, as well as remote clustered, synchronous and asynchronous replication to help meet all of your database availability challenges. They are part of the DNA of our storage arrays and NAS platforms, as well as our Unified Compute Platform (UCP) converged systems.
However, the key to making these capabilities useful, agile and reliable is to automate and orchestrate them in a way that makes it very easy to create and manage policy-based database copy workflows that adhere to business-defined requirements. Hitachi Data Instance Director (HDID) provides these capabilities for Oracle databases, as well as Microsoft and SAP HANA environments, with a unique whiteboard-like user interface.
Watch this quick explanation of what HDID can do to simplify the protection and recovery of Oracle databases while meeting the needs of the current and future digital economy.
The Bottom Line
If you are being challenged because you aren’t protecting your Oracle databases and applications well enough, or because your current solution is creating a major drag on your business performance, take a look at how easy it is to set up and test an HDID policy in this recorded demo.
If you are technically inclined, you might like to check out these Oracle protection best practice reports:
- Protecting Oracle Database 12c With Hitachi Data Instance Director in a Single Site
- Protecting Oracle Database 12c With HDID in a Two-Site Environment
- Protecting Oracle Database with HDID, Veritas NetBackup and Hitachi Content Platform
And check out this solution profile to learn more about how modern data protection, based on application-consistent, storage-assisted snapshot and replication technologies, can transform your business performance by eliminating almost all of the downtime associated with traditional backup and recovery solutions.
Rich Vining is a Sr. Product Marketing Manager for Data Protection at Hitachi Data Systems and has been publishing his thoughts on data storage and data management since the mid-1990s. The contents of this blog are his own.