When users report a slow application, what is your first inclination as to the suspect...
You: "What did you change?"
You: "I don't see anything wrong. I'm opening a ticket with the storage vendor."
At least at a really simple level, that's what I often see. More often than not the problem isn't the storage but a combination of bottlenecks, and lately it's increasingly the host...
1. Slow Drain Hosts stalling the fabric (more about that later)
2. Hosts with massive LUN consolidation
3. Increasingly mixed workloads on the same LUN
4. NPIV and other host side resource sharing
5. Outdated OS handling of storage (items 2-4 above, plus more).
Now of course item 4 above plays a role in the three items before it. At the heart of it, we have an OS storage model that is too often geared for either:
1. Desktop storage, which until recently comprised the lion's share of connected storage, or
2. Small LUNs dedicated to one OS and one workload.
Most OS storage stacks weren't designed to parcel out storage to multiple VMs, nor to run against the virtual LUN and virtual adapter that the hypervisor provides. Too often the OS allocates resources on a per-LUN basis, which is the enemy of larger LUNs, consolidated LUNs, and mixed workloads.
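That per-LUN resource model is visible directly in Linux sysfs: every block device gets its own request queue with its own depth, tuned independently of whatever else shares the array. A minimal sketch (device names are environment-specific, and the loop simply prints nothing on a box with no matching devices):

```shell
#!/bin/sh
# Walk every block device the kernel knows about and print its per-LUN
# request-queue depth -- one queue per LUN, each with its own tuning.
count=0
for q in /sys/block/*/queue/nr_requests; do
    [ -e "$q" ] || continue
    dev=$(basename "$(dirname "$(dirname "$q")")")
    printf '%s: nr_requests=%s\n' "$dev" "$(cat "$q")"
    count=$((count + 1))
done
echo "inspected $count block device queue(s)"
```

Consolidate a thousand small LUNs into one big one and you also consolidate a thousand of those independent queues into a single queue, which every workload on that LUN now contends for.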
Over the last 10 years OS vendors have made huge leaps in improving the HBA storage stack, but the prevalent model, the one they tuned for and handled well, was lots of (smallish) LUNs. Now, with virtual environments, massive data growth, and falling storage prices, nearly everyone is pushing toward much larger LUNs, often concentrating what would have been 10->100->1000 LUNs into one...
Looking deep into the mists of my crystal ball, my prediction is that we will see major changes to the internal logic of how OSes handle storage over the next set of releases.
So I was pleased to see one potentially large change scheduled to appear in the 3.13 Linux kernel that may bring big improvements to #2, #3, and #5.
The Multi-queue Block Layer (blk-mq, http://kernel.dk/blk-mq.pdf) is a major overhaul of the middle of the OS storage stack. While it is mainly targeted at faster local storage (read: SSDs in the host), it opens the door to big improvements in how the OS handles storage in general.
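On a kernel with blk-mq, a device driven through the new path exposes its hardware dispatch queues under /sys/block/&lt;dev&gt;/mq; devices still on the legacy request layer have no such directory. (In the 3.13 timeframe only a handful of drivers had been converted, so expect most devices to show the legacy path.) A rough sketch of checking which path each device is on:

```shell
#!/bin/sh
# Report, for each block device, whether it uses the new multi-queue
# (blk-mq) path, which exposes hardware queues in sysfs, or the
# legacy single-queue request layer.
for dev in /sys/block/*; do
    [ -d "$dev" ] || continue
    name=$(basename "$dev")
    if [ -d "$dev/mq" ]; then
        hwq=$(ls "$dev/mq" | wc -l)
        echo "$name: blk-mq with $hwq hardware queue(s)"
    else
        echo "$name: legacy single-queue path"
    fi
done
echo "scan complete"
```

The design replaces the single per-device request queue (and its single lock) with per-CPU software queues feeding one or more hardware dispatch queues, which is what removes the lock contention on consolidated LUNs.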
Looking at this change, you can see where one day we might have smarter QoS and CoS within the host, treating I/O to the same LUN differently. Wouldn't it be nice if I/O from two VMs on the same datastore could be handled differently rather than simply fair-queued? Where today two VMs on the same LUN impact each other, tomorrow we might be able to prioritize a production database's log I/O over a development host's backups.
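We aren't fully there yet, but a coarse per-process knob has existed for years: the ionice(1) I/O scheduling class, honored by the CFQ scheduler. A hedged sketch of demoting a hypothetical backup job to the idle class so it yields the LUN to everything else ('true' stands in for the real backup command):

```shell
#!/bin/sh
# Launch a stand-in backup command in the idle I/O scheduling class
# (class 3), so the scheduler services it only when the LUN is
# otherwise idle. Requires a scheduler that honors I/O classes.
if command -v ionice >/dev/null 2>&1; then
    if ionice -c 3 true; then
        status="backup demoted to idle I/O class"
    else
        status="could not change I/O class on this system"
    fi
else
    status="ionice not installed"
fi
echo "$status"
```

This is per-process on one host, though; the blk-mq groundwork is what could let that kind of differentiation reach down to per-queue, per-workload I/O on a shared datastore.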