Possible only with Hitachi Unified Compute Platform running SAP HANA.
As of this writing, Hitachi is the world-record holder for the SAP BW-AML benchmark. But this article is not about Hitachi, SAP, or records; it is about you and what this means for your business.
I have worked with Enterprise Data Warehouse (EDW) systems in the past, spanning petabytes of data and billions of records, where queries took minutes and delta loads took hours. What we have now is not just a change but a transformation in how we think about EDWs and Data Marts.
Imagine being able to query a billion records in 5.6 seconds, and to load delta updates into your enterprise data warehouse in 309 seconds. Imagine real-time, sub-second query response changing the way your Business Intelligence dashboards respond and interact. With Hitachi and SAP HANA.
The SAP BW Advanced Mixed Load (BW-AML) benchmark is designed to help customers choose the right platform for their business. It is a purpose-built workload consisting of both data loading and querying phases, running BW on HANA.
Here is how it works:
- Data Load Phase (measuring query throughput at a defined number of initial records):
  - The data flow starts with a data load from the source object into the corporate memory layer, which holds 1.3 billion records per data set. The data set stored in the source is fetched and propagated through the different layers in 25 load cycles; in other words, each load cycle processes 1/25 of the stored data set (52 million records). Throughput during this phase is measured in Advanced Navigation Steps per Hour.
- Query Phase (runtimes of complex queries and complex delta loads):
  - The KPIs measured in this phase are "Query Throughput" and "Query Runtime", corresponding to two types of queries: simple and more complex ones from a BW on HANA execution perspective. Ten complex query runtimes are captured, yielding the "Normalized mean runtime single query test" KPI. Finally, a Phase 2 delta load that loads, transforms, and aggregates 50 million records yields the "Total runtime delta load/transformation test" KPI. A small sketch of this KPI arithmetic follows below.
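To make the KPI arithmetic concrete, here is a minimal Python sketch. The exact scoring rules are defined by SAP; the per-cycle record count and the normalization shown here (mean runtime divided by record count in billions) are assumptions inferred from the description above, and all function names are invented for illustration.

```python
# Toy sketch of the BW-AML KPI arithmetic described above.
# Assumption: normalization divides mean runtime by the record
# count in billions; this is my reading, not SAP's official spec.

RECORDS_PER_DATA_SET = 1_300_000_000  # 1.3 billion records per data set
LOAD_CYCLES = 25                      # each cycle moves 1/25 of the set

def records_per_load_cycle() -> int:
    """Records propagated through the layers in one load cycle."""
    return RECORDS_PER_DATA_SET // LOAD_CYCLES  # 52,000,000

def navigation_steps_per_hour(steps_completed: int, elapsed_hours: float) -> float:
    """Phase 1 KPI: query throughput while the data load is running."""
    return steps_completed / elapsed_hours

def normalized_mean_query_runtime(runtimes_s: list[float], total_records: int) -> float:
    """Phase 2 KPI (assumed form): mean runtime of the complex
    queries, normalized to seconds per billion records."""
    mean_runtime = sum(runtimes_s) / len(runtimes_s)
    return mean_runtime / (total_records / 1_000_000_000)

print(records_per_load_cycle())  # 52000000
```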
A few details on the complexity patterns this benchmark uses to get as close to real-life scenarios as practically possible (a toy sketch of the delta-load pattern follows the list):
- Data Model:
  - Inserts / updates / deletes on transactional data during delta loads
  - Master data changes along with delta loads
  - Utilization of navigation attributes
  - Multiple currencies in source data
  - Stand-alone data loads to measure data load times
- Queries and transformations:
  - Master data lookups and non-trivial transformations
  - Randomized selection of key figures, with the set of key figures changing per navigation step
  - More sophisticated use of FEMS queries and exception aggregation
  - Currency conversion
  - Extension of all queries and navigation steps across multiple PartProviders
  - Different query patterns for InfoCube / DSO MultiProviders
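To make the delta-load pattern concrete, here is a toy Python sketch of inserts, updates, and deletes on transactional data, with a master data lookup during transformation. It is purely illustrative, with plain dictionaries standing in for BW objects; every name in it is invented.

```python
# Toy illustration of the delta-load pattern exercised by the benchmark:
# inserts (I), updates (U), and deletes (D) on transactional data, plus a
# master data lookup that enriches records during transformation.
# Not BW code; all names and structures are invented for this example.

active_table = {1001: {"amount": 250.0, "customer": "C1"},
                1002: {"amount": 80.0,  "customer": "C2"}}

master_data = {"C1": {"region": "EMEA"}, "C2": {"region": "APAC"}}

delta = [
    {"op": "I", "key": 1003, "amount": 40.0, "customer": "C1"},  # insert
    {"op": "U", "key": 1002, "amount": 95.0, "customer": "C2"},  # update
    {"op": "D", "key": 1001},                                    # delete
]

for rec in delta:
    if rec["op"] == "D":
        active_table.pop(rec["key"], None)
        continue
    # Master data lookup: enrich the record with a navigation attribute.
    region = master_data.get(rec["customer"], {}).get("region", "UNKNOWN")
    active_table[rec["key"]] = {"amount": rec["amount"],
                                "customer": rec["customer"],
                                "region": region}

print(active_table)
```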
We did a benchmark run with 4 billion records and achieved the following results (a quick back-of-the-envelope reading follows the list):
- Phase 1: Advanced Navigation Steps per Hour – 20,360
- Phase 2: Normalized mean runtime single query test (seconds per billion records) – 5.56 seconds
- Phase 2: Total runtime delta load/transformation test (seconds) – 309 seconds
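Assuming the normalization simply divides mean runtime by the record count in billions (my reading of the KPI, not SAP's official definition), a quick back-of-the-envelope calculation shows what these numbers mean in practice:

```python
# Back-of-the-envelope reading of the results above, assuming the
# normalization divides by the record count in billions (an assumption,
# not SAP's spec).

normalized_runtime_s = 5.56   # seconds per billion records (Phase 2)
data_set_billions = 4         # our benchmark run used 4 billion records
delta_records = 50_000_000    # records in the Phase 2 delta load
delta_runtime_s = 309

mean_complex_query_runtime = normalized_runtime_s * data_set_billions
delta_throughput = delta_records / delta_runtime_s

print(f"Mean complex query runtime over 4B records: ~{mean_complex_query_runtime:.1f} s")
print(f"Delta load throughput: ~{delta_throughput:,.0f} records/second")
```

Under that assumption, a complex query over the full 4-billion-record set averages roughly 22.2 seconds, and the delta load sustains roughly 162,000 records per second.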
Now, that’s a win for customers, isn’t it?
Exciting times ahead!