The 2016 Flash Memory Summit just closed after a busy week in Santa Clara. I was lucky enough to have time to drop in to parts of the event while splitting time at HDS headquarters plotting to rule the world of modern IT infrastructure.
This is NOT, however, a show review blog post. I did not spend enough time, visit enough booths or sit in enough sessions to be capable of that. And, such posts are largely redundant to the week of near-immediate, self-congratulatory industry tweets and posts we’ve all been inundated with already.
But the event did make me think about the state of our industry relative to flash storage conversations and solutions. You see, while the event was great, it was also a bit too "widget" focused for me and the conversations were heavy on the "inside baseball" stuff of flash device development. And, anyone who knows me knows baseball is not my sport...
While the event organizers are clearly trying to add more each year, what felt a bit lacking to me was the customer-centric “for what” (or “so what”) piece of the puzzle. Perhaps it’s my role or interest level, and there are certainly people at Hitachi who need to spend time arguing the relative merits of triple-level-cell NAND vs. goodness-knows-what-next memory, but that's not my hot button.
I prefer discussions about how flash consumption models are changing and how unique thinking can power flash-based solutions that are game changers, and I would have loved to hear more customer stories about how real flash deployments helped transform business or technical operations.
Given what type of event this usually is, Hitachi's presence was driven largely by our envy-of-the-industry Research and Development team, but we still tried to focus on as many bigger-picture “so what” issues as we could – and hope to do that at all times.
One thing I was quite pleased with: we had a wonderful Hitachi Data Systems customer at the event presenting on their deployment of flash and the major operational cost savings that moving to a Hitachi VSP G1000 all-flash infrastructure provided.
We were very lucky in that Dan King, Director of IT Services at Wellmark Blue Cross and Blue Shield, not only had a great story to tell about moving to an all-flash infrastructure, but was also a terrific personality and presenter who really woke up the session after a competitive vendor spent time talking about why disk drives were so important… yup, at the FLASH Memory Summit. (Someone didn't get the memo...)
The short version of Dan’s story? Spending a bit more on all-flash storage actually saved him significant money. He massively reduced the amount of time wasted managing or troubleshooting performance issues and service-level challenges with his internal customers, and since his (and everyone's) total cost of ownership equation is far more heavily weighted to the operational side than to capital costs, it was a very, very simple cost recovery model for him.
Oh, and performance? Report production improved a nearly unbelievable 2200%. Database backups were 440% faster. Daily import job performance increased 352%.
That is the power of flash: astronomically better performance while streamlining operations. Even more, that is the power of moving to a flash storage infrastructure powerful enough to consolidate multiple applications onto. Thanks, Dan, for telling your story.
But, since the Flash Memory Summit can tend to feel a bit like a flash storage geek-out session, our Research and Development team decided it was time to wow people with a VERY cool flash-based prototype that may become quite an interesting solution one day soon. In fact, before the event, Hitachi issued a press release (here) about the technology innovation that was demonstrated at the event. Now, that press release is about what you’d expect from the R&D team, but not so much what you’d expect from my blog, so I figured I would offer a different take on what is going on.
This prototype shows that the company with more big data patents than anyone (Hitachi) and more flash-storage-related patents than anyone (also Hitachi), one with a history of leveraging intelligent hardware offload engines for better performance (as in Hitachi unified storage and Hitachi flash modules), has turned its focus toward leveraging all this know-how in a unique system that opens a can of whoop-flash on data analytics performance.
(Oh, and this system alone may add as many as 13 new patents to the portfolio, further extending our lead as one of the most innovative companies around.)
The architecture leverages a specialized field-programmable gate array (FPGA) to offload database processes from overworked general-purpose processors, providing up to 100X faster analytics in a far denser package that costs far less money. A number of large customers have already expressed interest in custom proofs of concept and co-development efforts. The prototype leverages Hitachi Advanced Data Binder (HADB) technology sitting under Pentaho to show an "all Hitachi" analytics infrastructure, but it could be adapted to almost any open source database environment.
For a deeper perspective, HDS's own Walter Amsler was recorded at the booth talking up the overall coolness of this solution, as well as some of the other great things we are doing, in this video.
Now that is a cool “so what” when it comes to flash storage enhancements and how to use them to power digital business – and it is probably a bit more interesting to customers who want to co-innovate with strategic technology providers than a debate about how many bits per cell will finally kill disk drives.
Lastly, another big thing we are focused on at Hitachi is getting flash storage fully exploited within our customers’ increasingly preferred deployment and acquisition model for IT: converged infrastructure.
Our scalable family of converged infrastructure solutions is all-in when it comes to all-flash. Our all-flash VSP F series solutions are now available in our Hitachi Unified Compute Platform (UCP) solutions from UCP 2000 to UCP 6000. You'll hear a lot more from us soon about all-flash hyper-converged infrastructure solutions as well. And with a growing percentage of storage purchases industry-wide moving to converged deployments and 25% of our storage systems going out as all-flash, the time for change is now.
Regardless of whether you buy standalone servers and storage or your preferred style of converged infrastructure, you NEED to understand how all your IT infrastructure deployments will help you get to an all-flash infrastructure over time. Mark Adams of HDS Product Marketing did a great job at the show talking about this need, and how it's driven by transformative applications and environments where massive information flow and high performance will disrupt the way we work and live, in his presentation at the Flash Memory Summit, here.
So, while Flash Memory Summit was a great show and Hitachi will likely be involved for the foreseeable future in one way or another, the component-level conversations and the complexities of NAND geometry and direction will always be just a little less interesting to me than how customers are using flash to change their business and cope with digital transformation, how we can apply it as part of a new system architecture, and how we can simplify the transition to all-flash for our customers.
That’s my perspective anyway, and I’m guessing there may be a few others that look at the world this way as well.