My last blog post (Flash Leadership: Redefined) spoke about the Hitachi Data Systems flash quarterly shipment results and suggested that future posts would explain WHY we believe we are seeing such rapid adoption.
Let me start with this basic idea: Hitachi has done the hard work in flash engineering to make it easier for our customers to deploy and manage flash.
Many established players have “just added SSDs” or bought new “flash silo” technologies that didn’t inherit much of the work they had done on their platforms over time. The “just add SSD” strategy is certainly better than nothing if your underlying array technology is strong enough, but industry standard approaches lead to industry standard results, not exemplary ones.
One reason vendors have gone this route is the race to “off the shelf” (i.e., commodity) componentry in hardware design. There are benefits in this direction, but following it with abandon to reduce vendor costs, and fundamentally outsourcing key parts of hardware engineering, means compromises must be made.
Hitachi has chosen a different path. We want to enhance customer agility through software like our new Storage Virtualization Operating System, but we also want to offer unique, best-of-breed hardware alongside any off-the-shelf architectures we may support or release.
This is a harder path, but we think it is worth it. In fact, we have already done the hard work to:
- develop unique flash capacity devices that bring enterprise-class levels of flash performance and capability
- go back and optimize our core storage operating system software for flash
- ensure flash is as simple to manage as any other capacity media
- build underlying hardware systems that have the architectural headroom to get the most out of flash storage
Hitachi continues to do the hard work, with over 60 related hardware and software patents granted or pending in this space already, many of which haven’t even been introduced into our shipping technology yet.
We think this matters because, with IT infrastructure, either your vendors do the hard work or you, the IT practitioners, have to do it yourselves.
What does that mean?
When Hitachi builds differentiated capabilities, our customers get superior performance, higher availability and keep the management and software tools they depend on. They do not have to introduce an entirely foreign – and often unproven – architecture into their critical computing environments to truly exploit flash.
It also means Hitachi customers can get better performance with LESS to manage. And we stand behind it with publicly available benchmarks, including Storage Performance Council (block) and SPEC.org (file). Though imperfect, these are far more transparent than what is typically offered, and why even this basic step isn’t demanded by all customers remains a mystery to me.
While we have not yet finished Storage Performance Council (SPC) testing on HUS VM with our Accelerated Flash storage (a fast-growing flash and all-flash combination for us), we were quite pleased with how Virtual Storage Platform did when we added our flash software enhancements and Hitachi Accelerated Flash (HAF) storage capacity. (Results, here.) Of course, we’ve already announced the Virtual Storage Platform G1000 system, and you can expect a significant jump in performance across all workloads.
For our HUS 100 family, we have public benchmarks with "industry standard" SSDs. Hu Yoshida talked about the results in this blog, and they highlight the strength of the underlying architecture: our "midrange" system showed response times that were only a quarter of the most logical competitor at that time. Our internal testing of HUS 150 with the newly supported HAF even surprised us, showing across-the-board performance improvements over SSDs.
There are, however, more all-flash unified storage results at SPEC.org that demonstrate the power of combining our advanced block and file controllers into unified AFA configurations with our unique HAF solution.
First there was a unified HUS VM, and I covered how its results compared to the competition here. In summary, a 2-node Hitachi NAS (HNAS) with HUS VM configuration delivered the lowest average response time ever measured in the test, and the 4-node HNAS with HUS VM configuration surpassed the brand-new EMC VNX2 system while requiring only a fraction of the file and flash hardware.
With the launch of our new unified Virtual Storage Platform G1000 with 8-node HNAS capability, we once again tested HDS all flash storage performance and saw outstanding benchmark results. (See them, here.) The unified system delivered overall performance of >1.2M NFS operations per second with an overall response time of 0.75 milliseconds.
Let’s pause on that for a second. We (like pretty much everyone else) come out at launch with “marketing benchmarks” and so-called “hero” numbers. They are unrealistic, but they are (we argue) better than nothing as a stake in the ground.
This, however, was an actual benchmark test delivering over 1.2 million IOPS with tremendously low response time.
Our competition (when they even bother to do public testing…) would need multiple times the amount of hardware we leveraged to deliver similar results. Some vendors are behind on posting performance, but from what's there today... NetApp clusters of 20 or more nodes? EMC Isilon deployments of one hundred and forty nodes? Ouch.
But, our result?
A VSP G1000, an 8-node HNAS cluster, and 128 Hitachi Accelerated Flash modules, with HALF of the possible block Virtual Storage Directors installed in the VSP G1000.
A real-world configuration with out of this world performance.
That’s what it means to work with a vendor that does the hard work in engineering. Of course, if you’d prefer the hard work of managing and powering hundred-node monstrosities, or of needing hundreds of SSDs to get what we deliver with a fraction of that, I suppose that’s your choice.
But we don’t think adopting flash should be so hard on customers, and based on the rapid adoption we are seeing, it seems the market agrees.