How High-End Storage Went from Solo Stardom to Fronting A Mega Rock Band
For years, high-end storage (arrays designed for maximum resiliency and scaling) was considered the rockstar of the storage industry. There was no problem it couldn’t solve. Exponential data growth? Use high-end storage. Massive performance requirements? Use high-end storage. Want your boss to look good? Yep, high-end storage.
Today, though, high-end storage has lost some of its shine. Industry pundits are quick to say that smaller all-flash arrays can meet high performance needs and that cloud can serve as an alternative for storing bulk data.
So is this the end for high-end storage? And if it is, why are leading IT organizations still buying it?
If all you have is a hammer…
Let’s start by acknowledging that high-end storage clocked massive growth for years because it was THE solution for meeting enterprise performance and capacity requirements.
And it did this well, allowing IT teams to sleep soundly at night while operations zipped along. But as the range of storage choices expanded, IT leaders began implementing a mix of offerings to more optimally meet SLAs and cost points. That split how budget dollars were spent, with all-flash and cloud cutting into high-end storage sales.
In addition, the resiliency of high-end storage meant that IT organizations could run these arrays longer before doing refreshes, further reducing new high-end storage sales. Capacity sales? Sure! New controllers? I can wait a little longer…
With that in mind, you can see why some might claim “high-end storage is dead.” But if that were true, why aren’t incumbents with cloud and all-flash offerings in their lineup killing their high-end storage arrays and focusing on converting all their customers over to all-flash arrays and cloud?
The reason is that high-end storage still serves an important function in IT strategies and can add significant value, if deployed in the proper use cases. It’s just that the target is smaller now (vs. being the side of a barn).
Like anything else, you have to use the right tool for the job. Choose the right solution and you deliver a positive ROI as well as higher end user satisfaction. Make the wrong choice and you may miss SLAs, achieve a lower ROI and upset end-users.
So where does high-end storage make sense? In conversations with customer teams I consistently hear that the use cases have evolved from ‘everywhere’ to those where uptime and predictability are key:
- Revenue generating workloads that affect corporate profits
- Workloads that run 24x7x365 and can tolerate minimal downtime, if any
- SAP HANA / Oracle analytics (where flash memory is most critical on servers)
- Cloud based XaaS and IoT service production data (not lower priority analysis data)
- Mainframe operations (not sexy, but where larger companies process their business data)
The reasons for using high-end storage here are what you would probably expect:
Resiliency and Uptime
High-end storage platforms have architectures designed to eliminate single points of failure that compromise data access due to unplanned events (component and environmental failures) as well as planned events (upgrades and data migrations).
As an example, online vendors and cloud service providers choose the VSP G1000 to manage their systems of record because it is built on an architecture refined over 20 years to prevent downtime. At the hardware level, the Hi-Star grid architecture prevents component failures from compromising data access. This is combined with intelligence in SVOS to prevent both unplanned outages and disruption from planned events (e.g. predictive fault analysis and bypass, metro-clustering, 3DC replication, storage virtualization and non-disruptive migrations).
A quick sidebar to demonstrate why this matters. A cloud architect I know told me about an IoT thermostat he purchased last year. He loved it, until the day he couldn’t access it. Tech support swore the issue was on his side. Twenty-four hours later they called to apologize. The cloud service that managed connectivity to his device had failed, causing his outage. Oops.
Simplicity at Scale
One of the most challenging aspects for any IT department is managing growth. Provisioning resources, deploying protection policies and minimizing IT waste (e.g. unused storage capacity) gets harder with every new system you implement. Having a system with advanced management tools and long-term scaling can be the difference between success and a budget miss - or a 2 AM freak-out because no data protection policy was in place. If you’ve been in IT long enough, you know the mantra: beware systems that spread like rabbits.
Looking again at the G1000, many enterprise data centers purchase it for its long-term scaling (up to 6.9PB raw), high performance using flash with hardware-accelerated compression, and advanced tools (Automation Director, Data Instance Director) that simplify provisioning, budget management and data protection.
Predictable Performance at Scale
Another factor that goes into high-end storage design is the ability to deliver predictable performance - even as the array fills up. While mid-range systems and many all-flash arrays may see performance degrade significantly as capacity (or the number of connected hosts) increases, a high-end storage system can be designed to better manage loads over time by using scalable front-end and back-end controller modules.
For instance, the VSP G1000 does not use a commodity x86 server for the controller. Instead, it is designed with scalable front-end host controllers and back-end storage controllers to keep IOPS high and latency low. By scaling front-end controllers, you can balance host connections and replication tasks to prevent latency issues. This is especially important for DR operations that you want to complete quickly to protect data. Similarly, back-end controllers can be scaled to minimize the need for daisy-chaining storage shelves, which adds latency to storage IOPS and degrades the performance of even the best all-flash array.
As you can imagine, this does narrow where high-end storage is used. But these are important areas, and when resiliency is critical, you still can’t beat high-end storage. Especially if you want advanced data services that startups are just now developing, or a 100% data availability guarantee like you get with the VSP G1000.
Rockstar to Rock Band
The reality check here for IT leaders and vendors is that the days of a single product solving all problems are behind us (at least for now). To be successful, an IT strategy needs to leverage multiple offerings.
High-end storage, all-flash arrays, hyperconverged infrastructure appliances and cloud solutions all deliver unique values depending on the problem you need to solve. When combined properly, IT organizations can meet the challenges ahead and be ready for whatever new workload or business goal comes their way. But when you try to substitute one for the other, you may start losing sleep and miss on key business requirements - like uptime - and deliver a strategy that is out of tune.
Ok, so that title may not be relevant to everyone. Basically, it means that you need to be careful what you say now, because it may come back to vex you later.
Today, the IT industry has given AFAs ‘rockstar’ status, and many vendors offering only AFAs are pitching that they can solve every IT challenge. Sound familiar?
It’s an exciting message and one we all wish were true. Seriously, how great would it be if all R&D could be focused on one thing! But the reality is that AFAs are part of a broader IT strategy, and a new wave of high-performance technology is already coming. That means AFAs are going to have to share budget space with new products. And for vendors that only offer an AFA… hitting massive year-over-year growth will be hard.
As Ferris Bueller said, “life moves pretty fast.” To stay ahead of the curve you need a band that can cover all the beats.