
Performance Benchmarks - Differences with/without Tiering?

Question asked by Paul Hutchings on Oct 12, 2014

We have a new HUS110 with tiering licensed and 2 shelves:

 

  • 20x 10K SAS configured as 5x 2+2 RAID10 (Tier 1)
  • 10x 7.2K SAS configured as 1x 8+2 RAID6 (Tier 2)

 

Our hosts run vSphere 5.5 U2 and are direct-connected over FC (QLogic QLE2562s, if it matters).

 

The HBAs are on the default path policy of Round Robin.
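
In case it helps, a quick way to confirm the path policy per LUN is something like the following (a rough sketch in Python; it assumes esxcli is on the PATH, e.g. run from the ESXi shell, and that device properties appear as indented "Key: Value" lines under each device name):

    import subprocess

    # Rough sketch: list NMP devices and print each LUN's Path Selection Policy,
    # to confirm Round Robin (VMW_PSP_RR) really is in effect on every HUS LUN.
    # Assumes `esxcli` is on the PATH (e.g. run from the ESXi shell) and that
    # properties appear as indented "Key: Value" lines under each device name.
    output = subprocess.check_output(
        ["esxcli", "storage", "nmp", "device", "list"],
        universal_newlines=True,
    )

    device = None
    for line in output.splitlines():
        text = line.strip()
        if text and not line.startswith(" "):      # un-indented lines are device names (naa.*)
            device = text
        elif text.startswith("Path Selection Policy:"):
            print("{0}: {1}".format(device, text.split(":", 1)[1].strip()))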

 

Initially I did some testing with just the 10K pool, and on an IOMeter run of large-block 100% sequential read IO I was seeing around 1,500 MB/s.
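
As a rough sanity check on that number, assuming something like 75 MB/s of sustained sequential throughput per 10K spindle and roughly 800 MB/s of usable bandwidth per 8 Gb FC port (both assumptions on my part, not measured figures):

    # Back-of-envelope ceilings for the untiered 10K pool (assumed figures, not measured)
    drives = 20          # 10K SAS spindles behind the Tier 1 pool
    mb_per_drive = 75    # assumed sustained sequential MB/s per 10K drive
    fc_ports = 2         # a QLE2562 is a dual-port 8 Gb FC HBA
    mb_per_port = 800    # approximate usable MB/s per 8 Gb FC link

    print("disk-side ceiling:  {0} MB/s".format(drives * mb_per_drive))   # ~1500 MB/s
    print("host-link ceiling:  {0} MB/s".format(fc_ports * mb_per_port))  # ~1600 MB/s

So ~1,500 MB/s is about where I'd expect either the spindles or the two 8 Gb links to top out.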

 

Then I enabled tiering, added the 7.2K tier, and re-ran the benchmarks.

 

With tiering enabled and both tiers in the pool, I'm seeing around 600 MB/s on the same IOMeter job.

 

My initial thought was that, since the pool is tiered, reads must be coming from the smaller tier of slower drives. Looking in SNM2, though, I can see that all of the data in the pool has migrated to the top tier, so I don't see why performance should be any lower than with the untiered 10K pool, given that's where the data resides.

 

We've only had the array a week or so, so I'm still learning my way around, and I know benchmarks such as 100% sequential large-block IO don't mean much in the real world. But I don't understand what's happening here and would like to, because we need to look at migrating production onto this array and I don't want to start until I know I won't need to do anything disruptive.
