Today, many of our users want to sift through case studies on performance to prove that "this technology is better than that one". To be fair, so do we! I am frequently asked to provide the sales guys with "proof" of our superior competitiveness.
Why? Well, if we can help you reduce the cost of a transaction, it really helps improve the bottom line of your business. Take Exadata, for example, the biggest beast today. Most of you will be using Oracle (we certainly do), and if you believe the graphs and the brochures, it dramatically increases the velocity at which you can process data, thereby reducing the cost per transaction. Yet, as many of us know, there are "lies, damned lies and statistics"*. And given a chance, we can prove performance without resorting to (generic) statistics.
When trying to create a level playing field, the standards bodies do their best to define identical means of comparison between systems. SPC-1, TPC-C, JetStress, etc. are all generic tests that are great in spirit. However, they leave a lot of "wiggle room" for the people performing the test to be flexible in their application of the rules. For example, in order to achieve maximum throughput on JetStress, did a certain NAS provider short-stroke the drives? Did they load them up with data across the whole platter, or leave large parts of it empty, thereby returning numbers far higher than a production system could ever achieve? Or consider a well-known TP platform provider building out a configuration for SPC benchmarking that was far from a real configuration, one you would never run in production, so as to confuse the market into thinking it could deliver sustainable performance way beyond its real production capability.
Are they just engineering a pass on a test? Claiming performance where there really is none?
The answer lies in the often difficult question of what, exactly, is being measured and valued. Occasionally, there is a really good, fair study: one actually performed according to the specific application criteria of that customer. Recently I saw this happen. The customer, who had been buying Exadata, listened to our plea to try UCP for Oracle in their environment, and we were pretty sure we'd give them a great return.
They needed the environment to behave in a particular way, so they ran a capacity test, a performance and throughput test, and a test running differing block sizes against differing numbers of servers. Because these tests were not generic, and because they reflected the actual production requirements of their systems, they found that they would save 20 Oracle licenses per device over the 3-5 year life span of each device. And they have dozens of them...
What is ironic to me is that the cost of the hardware is immaterial; it's far less than the saving in licenses! In this case, UCP for Oracle has paid for itself the moment it goes into production!
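To make that break-even arithmetic concrete, here is a minimal sketch of the kind of calculation behind the claim. The only figure taken from the customer test is the 20 licenses saved per device; every price, device count, and life-span value below is a hypothetical placeholder (the real numbers are proprietary), so only the structure of the calculation is meaningful.

```python
# Hypothetical break-even sketch: Oracle license savings vs. hardware cost.
# All figures except "20 licenses saved per device" are invented placeholders.

def license_savings(licenses_saved_per_device: int,
                    annual_cost_per_license: float,
                    device_count: int,
                    lifespan_years: int) -> float:
    """Total license cost avoided across the fleet over its life span."""
    return (licenses_saved_per_device * annual_cost_per_license
            * device_count * lifespan_years)

def net_benefit(savings: float, hardware_cost: float) -> float:
    """Savings remaining once the hardware has paid for itself."""
    return savings - hardware_cost

savings = license_savings(licenses_saved_per_device=20,      # from the test
                          annual_cost_per_license=10_000.0,  # hypothetical
                          device_count=12,                   # "dozens" - hypothetical
                          lifespan_years=4)                  # mid-point of 3-5 years
net = net_benefit(savings, hardware_cost=1_000_000.0)        # hypothetical
print(f"License savings: ${savings:,.0f}; net after hardware: ${net:,.0f}")
```

With even these conservative placeholder prices, the license savings dwarf a seven-figure hardware outlay, which is the point of the anecdote: the per-transaction cost argument is dominated by software licensing, not by the boxes.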
The trick is to set up the test to reflect how the system needs to perform in your production environment, with all of those variables, so the calculation actually holds. You have to look at the test and ask yourself: "does this test behave the same way as my applications would behave?" If it doesn't, you can't use it, and you should do a proof of concept yourself. You can come to HDS to prove things out with you or for you, at HDS or on your premises.
In this particular instance, we gave them a UCP and held their hands while they ran the test, proving conclusively to them that there are significant savings. Of course, the problem for our sales guys is that we can only use this data anecdotally, as it is proprietary, for now at least.
*attributed to Benjamin Disraeli circa 1875