Today’s digital business processes are all about speed. All-flash arrays, parallel processing, faster networks, intelligent automation, real-time analytics – the list goes on and on. To support this need for speed, it has become necessary to remove the latency of writing data to disk in the database platforms that underpin these digital processes. Hence the development and rapid adoption of SAP HANA, the in-memory relational database management system.
In technology, there always seem to be trade-offs, such as price vs. performance. In the case of SAP HANA, the memory used to store and run the database and its applications is volatile: if the system loses power, all of the data held in memory is lost. That could be a problem, especially since you’re basically running your business on this platform.
To overcome this, by default SAP HANA writes an incremental snapshot of its data to disk every 5 minutes, in the background and with minimal impact on the performance of the production environment. This “savepoint” may contain both committed and “dirty” data, so SAP HANA also saves an undo log to help bring the database back to a consistent state during recovery. With this process, if the power goes off, no more than 5 minutes of new data will be lost.
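For the curious, that 5-minute interval isn’t magic; it’s a tunable SAP HANA parameter. As a sketch (parameter name and section per SAP HANA’s `global.ini` configuration; the value shown is the documented default):

```ini
; global.ini -- SAP HANA system configuration
[persistence]
; Interval between automatic savepoints, in seconds.
; Default is 300 (5 minutes); lowering it tightens the worst-case
; data-loss window at the cost of more background disk I/O.
savepoint_interval_s = 300
```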
However, there are many other bad things that can happen beyond a power failure (which should never happen in a modern data center, right?).
You may need to recover from database corruption, often caused by human error and, increasingly, by malicious agents. Or your site may be taken down by a local or regional disaster. Or anything in between. As I write this, we’re just entering hurricane season on the southeast coast of the U.S., large parts of Texas have seen record flooding recently, California is expecting another hot, fiery summer, and there have been any number of major earthquakes across the globe in the past few years.
To address these challenges, Hitachi Data Instance Director (HDID) now supports SAP HANA environments, with modern, non-disruptive, storage-based local and remote replication orchestration, using the proven technologies of the HDS Virtual Storage Platform (VSP) family and Unified Compute Platform (UCP) converged systems.
HDID uses the APIs prescribed by SAP to put the database into a backup-ready state, which commits all new in-memory data to disk. HDID then creates a scheduled or ad-hoc application-consistent, space-efficient snapshot or a full-copy clone. Once the copy is created, it can be mounted to make it available for secondary processes, including backup to tape or virtual tape.
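Under the covers, that “backup-ready state” is driven by SQL statements SAP prescribes for external storage snapshots. Here is a minimal sketch of the sequence, building the statements as strings; syntax follows the SAP HANA `BACKUP DATA ... SNAPSHOT` commands of that era, and the function names are my own illustration, not HDID’s actual code:

```python
def prepare_snapshot_sql(comment: str) -> str:
    """Ask HANA to enter a backup-ready state: flush in-memory data to
    disk and prepare the data area for an external storage snapshot."""
    return f"BACKUP DATA CREATE SNAPSHOT COMMENT '{comment}'"

def confirm_snapshot_sql(backup_id: int, external_id: str) -> str:
    """Tell HANA the storage snapshot succeeded, so it is recorded in
    the backup catalog and becomes usable for recovery."""
    return (f"BACKUP DATA CLOSE SNAPSHOT BACKUP_ID {backup_id} "
            f"SUCCESSFUL '{external_id}'")

def abandon_snapshot_sql(backup_id: int, reason: str) -> str:
    """Tell HANA the storage snapshot failed, discarding the prepared state."""
    return (f"BACKUP DATA CLOSE SNAPSHOT BACKUP_ID {backup_id} "
            f"UNSUCCESSFUL '{reason}'")
```

In a real orchestration, a tool executes the prepare statement through a HANA SQL client, triggers the array-based snapshot or clone on the storage system, and then runs the confirm (or abandon) statement so the database knows the outcome.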
HDID also coordinates synchronous and asynchronous replication to second and third sites for true business continuity and disaster recovery (BCDR) with zero data loss. HDID can also create snapshots and clones of the SAP HANA database from the remote copies for reuse by other business functions such as Dev-Test, Legal (e-discovery), Finance (period-end reporting), and more.
And all of these protection and availability processes can be linked in an easy-to-create, easy-to-manage, policy-based workflow, using HDID’s unique whiteboard-like user interface. HDID makes it fast and simple to create business-defined policies: just drag and drop source and destination elements onto the whiteboard.
The Bottom Line
If you are crying the backup blues because you aren’t protecting SAP HANA well enough, or because your current solution is creating a major drag on your business performance, take a look at how easy it is to set up and test an HDID policy in this recorded demo.
And check out this solution profile to learn more about how modern data protection, based on application-consistent, storage-assisted snapshot and replication technologies, can transform your business performance by eliminating almost all of the downtime associated with traditional backup and recovery solutions.
To request a personal demo of HDID from one of our data protection experts, please contact DP-Sales@hds.com.
Rich Vining is a Sr. Product Marketing Manager for Data Protection at Hitachi Data Systems and has been publishing his thoughts on data storage and data management since the mid-1990s. The contents of this blog are his own.