
Tiering: The Technology AFA Vendors Hate, But Need

Blog post by Nathan Moffitt on Feb 28, 2017

Hey. I’ve got a secret for you. You know that tiering thing the all-flash array (AFA) vendors have been telling you is so horrible? Turns out it might not be so horrible after all. In fact, tiering may be the most important item AFA vendors have on their roadmap.


Shocking, right? How could a technology so maligned as complex and unnecessary in ‘modern architectures’ be something that all-flash vendors are racing to embrace? Well, it has to do with a few things IT leaders have on their short list to investigate and implement:


  • Public / hybrid cloud
  • Large capacity SSDs
  • NVMe / PCIe Flash


These technologies break apart the fragile ‘truth’ AFA vendors have been pushing: that a single type of storage – SSD flash – will handle all of your storage needs.




The most obvious place where this message breaks is cost. An SSD-based AFA with data reduction can lower storage costs, but it cannot match the capital and operational savings of cloud. That's why almost every company is looking at cloud for data protection, development and even long-term retention of data (see detail in the 451 Group chart). For long-term data retention, businesses are hoping cloud can help them:


  • Buy less storage (they lease space at a predictable, set cost)
  • Pay lower data center costs (eliminating power and cooling costs that grow every year)
  • Outsource management and maintenance costs


There are several ways to do this, but cloud tiering is one of the easiest to implement. If an array has built-in cloud tiering, like the HDS Virtual Storage Platform (VSP) and SVOS, you can transparently move data to a cloud resource without massive changes to application workflows or end user experiences.
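To make the idea concrete, here is a minimal sketch of an age-based cloud-tiering policy. This is an illustration only, not how VSP/SVOS actually implements tiering; the 90-day threshold, tier names and sample data are all assumptions:

```python
from datetime import datetime, timedelta

# Hypothetical policy: data untouched for 90+ days moves to the cloud tier.
CLOUD_AGE_THRESHOLD = timedelta(days=90)

def pick_tier(last_access: datetime, now: datetime) -> str:
    """Return 'cloud' for cold data, 'flash' for everything else."""
    return "cloud" if now - last_access >= CLOUD_AGE_THRESHOLD else "flash"

now = datetime(2017, 2, 28)
dataset = {
    "q4_backup.tar": datetime(2016, 6, 1),   # cold for months -> cloud
    "orders.db":     datetime(2017, 2, 27),  # hot working set  -> flash
}
placement = {name: pick_tier(ts, now) for name, ts in dataset.items()}
```

The point of doing this in the array rather than in each application is exactly what the paragraph above describes: the policy runs in one place, and applications keep reading and writing the same paths while cold data quietly lands in cheaper storage.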


As a result, expect almost every AFA to support some level of cloud tiering in coming years.


NOTE: For the purposes of this blog I’ve kept things very brief. There are a lot of considerations that go into tiering. If you want to discuss – let us know.



Let’s say you aren’t into the whole cloud thing because your data is sensitive or recall requests happen too frequently. There are two flash-specific trends that make tiering INSIDE an AFA important.


The first is related to SSD size and IOPS.


Like HDD and tape manufacturers before them, SSD manufacturers are constantly pushing to deliver bigger, denser SSDs. These larger-capacity SSDs enable you to retain more data in a smaller footprint, but they don’t necessarily boost data transfer speeds. As a result, IOPS / GB drops.


Why does this matter? Having more capacity means you can put bigger applications (or more applications) on a system and use less data center space. But as more applications share capacity from a drive (or a single application puts a larger amount of data on the drive), there is contention and IOPS / GB becomes a factor in overall performance. If this sounds like the return of HDD issues, well… it is.
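Some rough numbers show how quickly the ratio erodes. The drive figures below are illustrative assumptions, not any vendor's spec sheet; the pattern to notice is that per-drive IOPS stays roughly flat while capacity keeps multiplying:

```python
# Illustrative assumption: random-read IOPS per SSD stays roughly flat
# across generations while capacity keeps growing.
FLAT_DRIVE_IOPS = 90_000

capacities_gb = [800, 3_800, 15_300]  # successive (hypothetical) drive sizes
iops_per_gb = {c: FLAT_DRIVE_IOPS / c for c in capacities_gb}
# 800 GB -> 112.5 IOPS/GB; 15.3 TB -> ~5.9 IOPS/GB: roughly a 19x drop
# in performance density as the drive gets bigger.
```

So even though the big drive is "fast" in absolute terms, each gigabyte of data sitting on it gets a much smaller slice of the drive's performance.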


Think of it this way: say you have a kitchen that delivers food across one conveyor belt. Then you triple the size of the kitchen. You can make a lot more food, but the speed at which it leaves is the same. Things back up as more people (workloads) request more data.


This is why many AFA vendors are considering tiers of flash: small, high-speed flash drives for near-term storage, backed by larger but ‘slower’ SSDs.


Adding to the probability that you’ll see flash-to-flash tiering is NVMe.


For brevity, I won’t dig into NVMe here, but the two-second version is as follows: NVMe is a new protocol stack designed for high-speed data transfers. Combined with high-speed device connectivity (PCIe vs. SAS), it delivers gonzo-fast speeds.


That’s the good news. The bad news is that NVMe isn’t quite ready for massive enterprise deployment. Scalability, cost and high-availability limitations will slow NVMe’s displacement of ‘traditional SSDs’ for now, but you should expect NVMe to (wait for it) be implemented as a tier of storage. That tier might be:


  • Server side, for high-speed analytics
  • Storage side, as a read cache for high-speed read access
  • Storage side, as a small high-speed ingest pool / visualization pool (working-set data that will either be discarded or migrated to an enterprise array for safekeeping)


No matter how that tier is implemented, the hard truth remains: tiering is the future of flash, because SAS-based flash is slow (compared to NVMe) and getting slower (lower IOPS / GB as capacities increase).




The net here is that tiering, like most things, is coming back. For mature storage vendors with robust software tiering, this represents an opportunity to help customers embrace new technologies – and do it faster than AFA vendors with no background (or code) for handling tiering. For AFA vendors without tiering… Well, let’s just say that I’d bet they’re working on code now. And new marketing pitches.