One of the bigger challenges of running in-memory applications in the enterprise is developing stable, reliable architectures that meet strict performance criteria. The drive toward consolidation and virtualization makes management, provisioning, and availability simpler; yet these benefits of virtualization are sometimes at odds with I/O latency-intolerant applications such as SAP HANA. It’s not uncommon for administrators (and some vendors) to be tempted by the cost savings of hypervisors even though they can undermine firm requirements for low I/O overhead. The results of choosing the wrong architecture can be catastrophic, especially in a production context: if in-memory applications are unable to guarantee persistence to stable storage, users’ data protection strategies may lack any guaranteed recovery points.
Administrators often use a model where non-production (test and development) in-memory and database systems are virtualized through traditional software hypervisor technologies. Production systems are then deployed on bare metal, where firm guarantees for proper processing and data protection patterns can be designed into the engineered system. However, this puts SAP HANA and traditional DBMS deployments at odds with customer objectives to reduce costs through techniques like consolidation.
Recent revolutions in IT platforms focus on building well-engineered, complete systems that meet strict business and user objectives. An emerging pattern in engineered systems is what I call “bi-modal architectures”: one mode designed for low latency, the other for deep capacity. In both modes, analytics are likely to be included within the platform to minimize the performance penalties of too many trips over data center fabrics.
What to do?
Well, some companies answer the call by literally building everything themselves, from the application to the hypervisor to the server and everything else in the stack. Other companies, like SAP and Hitachi, work with best-of-breed vendors to leverage the right technology for the right objective.
To this end, we have some good news to share: Hitachi and SAP are working together to allow HANA to run in a multi-tenant manner using both Hitachi’s LPAR hardware partitioning technology and VMware’s vSphere software hypervisor for production environments. Personally, I think this is game changing. Customers can mix and match hypervisor, LPAR, and bare-metal environments to minimize infrastructure requirements across a variety of scenarios. In particular, customers can leverage Hitachi LPARs for production scenarios where multi-tenancy is required but virtualized I/O cannot be tolerated (e.g. stable low-latency commitments, strict recovery point objectives, etc.). Moreover, Hitachi and SAP are part of a select set of companies that can offer both hardware- and software-virtualized multi-tenant in-memory computing solutions to the market.
To start, SAP has driven significant development on HANA over the years, and the magical engineers at Hitachi have done an excellent job of getting the most out of Intel, PCIe, Linux, hypervisors, and more. As part of our unique joint OEM agreement with SAP, typical deployments are effectively 5x faster than the competition with less memory and fewer cores, which drives significant cost savings both up front and over the long term. Beyond SAP, Hitachi has also driven enhanced performance efficiencies with hypervisor stacks: VMware vSphere, Microsoft Hyper-V, and Red Hat KVM have all been validated to run on top of our LPAR technology. These engineering outcomes come together in our Unified Compute Platform, where the compute, storage, and networking components are engineered to work together for the highest scale and lowest I/O latency. In fact, Hitachi UCP solutions offer both extremely low I/O latency for applications like SAP HANA on LPAR and advanced management and consolidation patterns with VMware vSphere. So in this bi-modal architecture world, with Hitachi and SAP, our clients are the winners. They get the ability to leverage the unique in-memory architecture of SAP HANA in a true multi-tenant solution where they can choose the right approach to multi-tenancy: hardware partitioning or software hypervisors.
Finally, I’m sure most of you have heard by now about the Continuous Cloud Infrastructure announcement we made a few weeks ago. The theme of that announcement, “Business Defined IT”, gets to the heart of the challenge enterprises face today with SAP HANA: connecting business value to IT performance and value. As you explore your architecture options for SAP HANA, I hope you’ll take a closer look at the clear business and IT value that Hitachi delivers with both hardware and software approaches to multi-tenancy.
UPDATE - 1
Note that with the recent SAP and VMware announcement, which happened the day after our release, production instances of HANA are also possible when running on vSphere 5.5. The body of this post has been amended to reflect this market reality; updates are clearly marked via careful use of strikethroughs and additions in dark orange.
UPDATE - 2
As per Sean's comments below, the post has been updated to reflect the changes in Hitachi's portfolio. Another interesting tidbit is that additional SAP applications are now supported running in Hitachi's LPAR. While not germane to the discussion in this post, they are an interesting aside. As with the other update, additions are in orange and relevant removals are struck through.