Zane Appel

Thoughts about Oracle’s New 12c Storage Optimization Strategy vs. Hitachi Dynamic Tiering

Blog Post created by Zane Appel on Apr 17, 2014

I have recently reviewed the features of Oracle’s new 12c database and the emerging Storage Optimization Strategy that Oracle is using to market the database and, by extension, its underlying storage: Exadata Storage, Pillar Axiom, and the Sun ZFS Storage Appliance. Oracle has added enhancements to 12c in support of this strategy, which is meant to meet some of the same challenges that HDS arrays solve with Hitachi Dynamic Tiering (HDT).


The new Heat Map feature is central to Oracle’s storage optimization. Heat Map tracks reads and writes of database rows and can tell you how “hot,” or active, a particular table or table partition has been recently. Using Heat Map data, the new Automatic Data Optimization (ADO) feature lets you define policies that compress data at different levels of compression, or even move data to another tablespace, which can sit on a different storage tier. By default, compression and relocation are performed during the defined database maintenance window, but they can also be triggered on demand by a Database Administrator or via script. A nice touch is that Heat Map automatically ignores database maintenance tasks, such as statistics gathering and backups, and can also be disabled at the session level, which allows it to ignore other maintenance activity as needed. Oracle uses this as a selling point against storage-based tiering solutions such as HDT. (HDT can do the same thing if you configure it to skip monitoring during regular maintenance windows, backups, and so on.)
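As a rough sketch of what this looks like in practice (table, tablespace, and retention values here are hypothetical, not from any real system), enabling Heat Map and attaching ADO policies is done in SQL along these lines:

```sql
-- Enable Heat Map tracking for the instance
ALTER SYSTEM SET HEAT_MAP = ON;

-- Heat Map can be switched off per session, e.g. around maintenance jobs
ALTER SESSION SET HEAT_MAP = OFF;

-- ADO policy: compress segments untouched for 30 days
-- (hypothetical table "orders"; 30 days is an example threshold)
ALTER TABLE orders ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED SEGMENT
  AFTER 30 DAYS OF NO MODIFICATION;
```

The policy is then evaluated and acted on during the maintenance window, or on demand via the DBMS_ILM package.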


HDT is considerably more nimble because it can relocate data as often as every 30 minutes, if needed. Oracle’s ADO approach appears to allow compression only after a minimum of one day of no modification. (User-defined criteria are also supported; for example, a policy can compress a row once its status is ‘closed’.) By default, ADO relocates data to another tier only when the current tablespace is full. It is also important to note that ADO appears to be one-way: you can compress or relocate data when it meets certain criteria, but I haven’t seen anything that would, for example, move data back to a faster tier if it suddenly becomes hot again.
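The user-defined criteria mentioned above hinge on a custom PL/SQL function that the policy calls to decide whether a segment qualifies. A hedged sketch, assuming a hypothetical "orders" table and function name (the exact function signature should be checked against Oracle’s ILM documentation):

```sql
-- Hypothetical custom condition: treat closed orders as cold.
-- The function receives the object number of the segment being evaluated.
CREATE OR REPLACE FUNCTION orders_are_closed (objn IN NUMBER)
RETURN BOOLEAN AS
BEGIN
  -- Simplified illustration only: a real check would inspect the
  -- segment's rows (e.g. all rows have status = 'CLOSED').
  RETURN TRUE;
END;
/

-- Attach the policy using the custom condition instead of a time threshold
ALTER TABLE orders ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED SEGMENT
  ON orders_are_closed;
```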


Additionally, ADO is complex to configure. It requires you to manually define tiers of storage, present those tiers as LUNs, define tablespaces on the different tiers, and then define the ADO policies. With HDT, all of this is automated: nothing in the database needs to change at all, and pages are moved automatically according to the tier management policies.
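To make the manual legwork concrete, here is a sketch of the database-side steps ADO requires (datafile path, tablespace, and table names are illustrative assumptions; carving out the slow-tier LUNs and presenting them to the host happens before any of this):

```sql
-- Create a tablespace on storage presented from the slower tier
CREATE TABLESPACE archive_tbs
  DATAFILE '/u02/slow_tier/archive01.dbf' SIZE 10G;

-- ADO tiering policy: move cold segments of "orders" to that tablespace
-- (by default, triggered when the current tablespace fills)
ALTER TABLE orders ILM ADD POLICY
  TIER TO archive_tbs;
```

With HDT, none of these objects exist; the array moves pages between tiers transparently beneath the existing tablespaces.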


ADO is also the gateway into Oracle’s Hybrid Columnar Compression (HCC). This feature has been around since 11g Release 2, and Oracle describes it as “enabling 3x to 5x reduction in storage footprint versus (other storage) for Oracle databases”. A great feature of HCC is that it can actually improve performance, because the database does not have to decompress the data in order to query it. Even though this is a database compression technology, it works only on Oracle storage. Oracle markets this as one of the major reasons to adopt the “Oracle Stack”: applications, database, operating systems (Oracle Enterprise Linux and Oracle Solaris), servers (Exadata), virtualization (Oracle Virtual Machine), and storage.
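For reference, HCC is requested through the same compression clauses, just at the columnar levels; a minimal sketch with hypothetical table names (this only succeeds when the underlying storage is Oracle storage such as Exadata):

```sql
-- HCC "warehouse" level compression on a bulk-loaded archive copy
CREATE TABLE sales_archive
  COMPRESS FOR QUERY HIGH
  AS SELECT * FROM sales;

-- The deeper "archive" levels trade query speed for footprint
ALTER TABLE sales_archive MOVE COMPRESS FOR ARCHIVE LOW;
```

An ADO policy can name these HCC levels as its compression target, which is what makes ADO the on-ramp to HCC.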


In summary, from my perspective as a DBA, HDT is still quicker to react and easier to configure than Oracle’s new features. However, Oracle now provides a tool that allows automatic optimization of storage, which is new for the database. In the marketing materials I have seen, Oracle downplays the drawbacks and ignores other tools out there, like HDT, while using the feature as a major selling point for the “Oracle Stack”. To an Oracle DBA who doesn’t know what is available in the storage world, it might look attractive.