NVMe 101 – What’s Coming to the World of Flash?

Blog post created by Mark Adams on Aug 28, 2017

There were two hot topics at the annual Flash Memory Summit held in Santa Clara, CA earlier this month.  The first was the unfortunate fire that kept the Exhibit Hall closed for the entire show. And the second was the coming wave of NVMe adoption in flash storage products.  Discussion about NVMe was included in nearly every keynote and session presentation that I attended. This blog is the first in a multi-part series that will address the ways NVMe is expected to change storage, how customers will benefit, things that must be understood by potential buyers and Hitachi’s plans to bring products to market that will utilize NVMe.

What is NVMe?

NVMe is an open-standards protocol for digital communications between servers and non-volatile memory storage.  For many of us, non-volatile memory storage means solid state drives or Hitachi Accelerated Flash FMDs.  The NVMe protocol is designed to transport signals over a PCIe bus or a storage fabric.  Much more on the emerging fabrics for NVMe will come in a later blog.


NVMe is expected to be the successor to SCSI-based protocols (SAS and SATA) for flash media.  That’s because it was designed for flash, while SCSI was developed for hard drives.  The command set is leaner, and it supports a nearly unlimited queue depth that takes advantage of the parallel nature of flash drives (a maximum 64K queue depth for up to 64K separate queues).  Submission and completion queues are paired for greater efficiency. See figure 1.

[Figure image: NVMe command queueing]

Figure 1 – Source: NVM Express organization
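As a rough illustration of the paired-queue model in figure 1, the sketch below models one submission/completion queue pair in Python.  This is a hypothetical simplification for this blog only (the class name, dictionary fields and default depth are invented); it omits the real doorbell registers, interrupts and ring-buffer mechanics of the actual NVMe specification.

```python
from collections import deque

MAX_QUEUE_DEPTH = 65536  # NVMe allows up to 64K entries per queue
MAX_QUEUE_PAIRS = 65536  # and up to 64K separate queue pairs

class QueuePair:
    """Toy model of one NVMe submission/completion queue pair."""

    def __init__(self, depth=16):
        if depth > MAX_QUEUE_DEPTH:
            raise ValueError("depth exceeds NVMe per-queue limit")
        self.depth = depth
        self.sq = deque()  # submission queue: host -> device
        self.cq = deque()  # completion queue: device -> host

    def submit(self, command_id, opcode):
        """Host places a command in the submission queue."""
        if len(self.sq) >= self.depth:
            raise RuntimeError("submission queue full")
        self.sq.append({"cid": command_id, "opcode": opcode})

    def device_process(self):
        """Device drains the SQ, posting one completion entry per command."""
        while self.sq:
            cmd = self.sq.popleft()
            self.cq.append({"cid": cmd["cid"], "status": "success"})

    def reap_completions(self):
        """Host consumes completion entries, matched to commands by CID."""
        done = list(self.cq)
        self.cq.clear()
        return done
```

Because each of the (up to) 64K queue pairs operates independently, many CPU cores can each drive their own pair in parallel, which is where NVMe’s advantage over the single-queue SCSI model comes from.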


The protocol streamlines communications by removing the need for an I/O controller to sit between the server CPU and the flash drives.  Storage controllers are freed up to perform tasks that don’t involve managing I/O, and onboard DRAM no longer needs to hold application data.  NVMe is also ideal for the next generation of storage known as Storage Class Memory (SCM).  SCM, which promises to be many times faster than today’s flash drives, is unlikely to use SCSI protocols, as NVMe is much better suited.


While storage has seen new protocols developed with great promise and hype (FCoE anyone?), NVMe is not brand new.  It’s been deployed in what’s known as “server-side flash” for several years now. Basically, many servers support the installation of an NVMe flash card option on a PCIe bus within the server.  This has been used for applications that require extremely fast response times.  One example is Flash Trading, a type of High-Frequency Trading, where even a few microseconds of delay can mean the difference between a great trade and a mediocre one.  However, server-side flash has a limited set of use cases due to its relatively small capacity and inability to share storage resources across a network for greater efficiencies and better data protection.


NVMe is widely supported in nearly all operating systems, and vendors across the entire ecosystem of external storage are now working to make end-to-end NVMe adoption possible.  This includes the makers of drivers, host bus adapters, network adapters, flash drives and storage controllers.  There’s a maturation process underway in developing all the pieces and making them work in the manner that customers expect for their enterprise applications.  Many of these development challenges will be solved in the short term.  But some fundamental differences will remain between how NVMe storage operates and the operations of the traditional SAN systems that we’ve become very accustomed to.  These differences will be felt in how data services such as snapshots, replication, data reduction and RAID protection are delivered. PCIe bus limitations will also impact how scalability will work. Careful planning on how and where to use NVMe is going to be crucial because there will be trade-offs between NVMe and SCSI-based storage.  This topic of planning for NVMe will also be covered in detail in a later blog.


What are the advantages of NVMe?

NVMe is all about performance.  Depending on which set of benchmark comparisons you look at, all-flash arrays (AFAs) that have NVMe can achieve many times more random read and write IOPS (2 to 3x) and far more bandwidth capacity for sequential reads and writes (2 to 2.5x) than current generation AFAs.  NVMe also has far lower latency than SAS (up to 6x less), so the wait time to complete an operation is much shorter.  The latency advantage is something that nearly every user will benefit from. Not many users will ever need millions of IOPS, but most will enjoy blazing-fast response times that stay consistent even as their system capacity fills up.


In the right architecture, NVMe performance has greater linear scalability.  If you need more performance, then add a drive.  It’s also capable of supporting more VM density. And more applications can be consolidated on a single system due to its higher performance ceiling.  But the right architecture and design are required to take advantage of these scalability and consolidation advantages.  Today's AFAs with dual controllers or limited scale-out will quickly saturate with just a few NVMe drives installed.
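The saturation point described above can be sketched with a toy model.  All of the numbers here are hypothetical, chosen only to illustrate the shape of the curve: aggregate IOPS grows linearly with drive count until a fixed controller ceiling caps it.

```python
def effective_iops(num_drives, drive_iops=500_000, controller_ceiling=1_500_000):
    """Aggregate IOPS grows linearly with drive count until the
    controller complex saturates.  The per-drive and ceiling figures
    are hypothetical, for illustration only."""
    return min(num_drives * drive_iops, controller_ceiling)
```

With these example numbers, the second drive still doubles throughput, but every drive beyond the third adds nothing, which is why an architecture with more controller headroom (or scale-out) is needed to keep NVMe scaling linear.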


Finally, NVMe drives will consume less power per device than SAS drives, thanks to advanced power management that offers more states than are available to SAS flash drives.


Which use cases will benefit the most?

Any application that requires high levels of performance will benefit from NVMe storage. The primary use cases for the initial deployments of NVMe are expected to be:

  1. High performance computing – Examples include projects like climate science research, drug interaction simulations and Bitcoin mining, all of which require demanding levels of computational resources with access to data at very low latencies.
  2. Real-time analytics – An increasing number of environments require real-time analysis of data. Fraud detection, for example, works by looking for irregularities across a group of transactions.  Real-time analytics is also the basis of future artificial intelligence and machine learning applications that are coming from IoT deployments.  These applications will benefit from the high read IOPS that NVMe storage can deliver.
  3. Large-scale transactional processing – Online retailers, large financial institutions and governmental service agencies may need to process thousands of transactions per second.  In these write-intensive environments, NVMe storage will become the preferred solution.


What will be the deployment architectures for external storage?

  • Software defined storage (SDS) – Some customers, looking to reduce their costs, will want to use a proven storage operating system and management interface to run their commodity hardware.  The commodity hardware can include NVMe as long as the storage OS supports it.
  • All-flash arrays (AFA) –  Other customers may need to improve their performance to levels beyond what they are getting from their AFA today.  We hear from users of competitive AFAs that their performance isn’t consistent and latency may spike during periods of heavy workloads or when capacity starts to fill up. NVMe will resolve these issues and still allow customers to utilize many of the data services and management workflows that they rely on.  In AFAs, NVMe might be deployed in the “back-end” (over PCIe between the storage controller and the flash drives), “front-end” (over a storage fabric) or end-to-end (from the server to the flash drives).  However, data services, SAN overhead and data protection that are common in AFAs will take away from their ability to realize the full performance improvements that NVMe is capable of.


  • Hyperconverged infrastructure (HCI) – For virtualized and cloud-scale applications that need more performance, it makes sense to redesign the nodes to utilize NVMe over PCIe for communications between the servers and the flash drives. This is the fastest way to deploy and make use of NVMe for customers looking to accelerate their virtualized workloads in a production environment. The speed and nature of setting up HCI in a customer environment allows organizations to accelerate the deployment of their projects from weeks to hours.  And scaling nodes to add more capacity and compute resources can be accomplished without impacting ongoing operations.



  • Rack-scale flash – This is very dense external flash storage that can be shared by multiple application servers.  Servers and storage are connected via very high-speed Ethernet which can run at 100Gbps speeds today and with even faster transport speeds coming soon.  Unlike AFAs, rack-scale flash deployments move most of the data services (snapshots, replication, compression, etc.) to the host so the storage performs as fast as possible. Rack-scale flash will have the highest possible performance and the best price/performance ratio.




What are Hitachi’s plans?

Hitachi views NVMe as an important technology for delivering maximum performance and the highest value to our customers. Our intent is to take a multi-pronged approach to implementing NVMe technologies that covers the broadest set of workloads and use cases, giving our customers the best solution choices for their needs.


Hitachi’s first NVMe product to be offered is our hyperconverged, Unified Compute Platform, the UCP HC. Recently, we announced support for NVMe caching on the UCP HC. This allows application workloads to be accelerated without a significant investment by customers. Using NVMe as a caching layer within scale-out HCI environments allows customers to make use of the highest performance media distributed across the cluster. This is beneficial for workloads that have ultra-low latency requirements. NVMe persistent storage will follow shortly. Our expectation is that growth here will occur steadily through 2018 as pricing on NVMe starts to reach the level of current generation SSDs. 


We believe that starting our introduction of NVMe with UCP HC allows our customers to leverage NVMe for improved application performance and do so by starting small with the ability to quickly and easily expand as needed.  This avoids the need for a high CAPEX investment in the technology right up front.


The next phase of our NVMe evolution will be to deliver offerings aimed at the applications best suited for NVMe – extremely high-performance, large-scale analytics and transactional operations. We’ll cover why these applications are well suited to leverage larger NVMe data stores in a future blog, but suffice to say this is where we see tremendous opportunity and alignment with the larger Hitachi analytics portfolio.


Support for NVMe in our AFAs will come at a later date when price, performance and maturity of NVMe stabilize. In our view, other AFA vendors that have recently announced NVMe products have not fully considered how their designs must change to achieve low latency and high performance while still delivering enterprise data services and non-stop availability. This is particularly important as many customers rely on their AFAs to store their business records. More information about our future direction can be shared under NDA by our Hitachi sales reps.


Read more in this series:

Is NVMe Killing Shared Storage?

NVMe and Me: NVMe Adoption Strategies

NVMe and Data Protection: Time to Rethink Strategies

NVMe and Data Protection: Is Synchronous Replication Dead?

How NVMe is Changing Networking (with Brocade)

How NVMe is Changing Data Center Networking: Part 2

Hitachi Vantara Storage Roadmap Thoughts