
AMS2100 - strange performance?

Discussion created by Legacy HDS Forums on Mar 13, 2012
Latest reply on Mar 16, 2012 by Legacy HDS Forums

Originally posted by: przemolb

I am benchmarking our AMS2100 (4 GB cache per controller; 40 SAS drives) with vdbench, a tool I know well and have used for many years.
I generate an 8k OLTP-style workload, varying the read percentage (30, 50, 70) and the seek percentage (30, 50, 80, 100).
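For context, all 12 read/seek combinations can be driven from a single vdbench parameter file along these lines (a sketch only; the LUN path, thread count, and run lengths are my assumptions, not the settings actually used):

    * Hedged vdbench sketch: lun path, threads, elapsed/interval are assumed.
    sd=sd1,lun=/dev/rdsk/c2t50060E80104DE0C0d0s2,threads=16
    wd=wd1,sd=sd1,xfersize=8k
    rd=oltp,wd=wd1,iorate=max,elapsed=300,interval=5,forrdpct=(30,50,70),forseekpct=(30,50,80,100)

The forrdpct/forseekpct loops expand into the 3 x 4 = 12 runs reported below.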
The first benchmark ran against a single RAID 1+0 (8D+8D) group; the second against 6 x RAID 5 (5D+1P) groups striped together at the OS level with Veritas Volume Manager (a sketch of a possible layout command follows the table). Row N of one column corresponds to row N of the other, one row per read/seek combination. Results (IOPS):

    Run   RAID 1+0 (8D+8D)   6 x RAID 5 (5D+1P), OS-striped
      1        1920.70           1605.03
      2        1859.22           1516.10
      3        1796.47           1519.20
      4        1776.85           1565.54
      5        1583.19           1479.62
      6        1491.57           1360.92
      7        1487.66           1313.08
      8        1496.45           1326.37
      9        1605.31           1635.24
     10        1526.39           1465.18
     11        1529.37           1389.65
     12        1561.08           1393.20
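For reference, the OS-level stripe over the six RAID-5 LUNs would look something like the following in VxVM (hypothetical disk group, volume name, size, and stripe unit; the post does not give the actual layout):

    # Hypothetical VxVM layout; names, size, and stripe unit are assumptions.
    vxdisksetup -i c2t50060E80104DE0C0d0         # repeat for each RAID-5 LUN
    vxdg init amsdg ams01=c2t50060E80104DE0C0d0  # then vxdg -g amsdg adddisk for the rest
    vxassist -g amsdg make benchvol 200g layout=stripe ncol=6 stripeunit=64k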
iostat -xnvC 1 gives the following output:
...
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  213.0  491.9 1703.7 3935.4  0.0  1.2    0.0    1.7   0 112 c2
   44.0   92.0  351.9  735.9  0.0  0.2    0.0    1.6   0  19 c2t50060E80104DE0C0d2
   47.0   77.0  375.9  615.9  0.0  0.2    0.0    1.7   0  19 c2t50060E80104DE0C0d1
   35.0  103.0  280.0  823.9  0.0  0.3    0.0    1.9   0  25 c2t50060E80104DE0C0d0
   20.0   78.0  160.0  623.9  0.0  0.1    0.0    1.3   0  11 c2t50060E80104DE0C0d5
   33.0   56.0  264.0  447.9  0.0  0.2    0.0    1.9   0  16 c2t50060E80104DE0C0d4
   34.0   86.0  272.0  687.9  0.0  0.2    0.0    2.0   0  22 c2t50060E80104DE0C0d3
  221.0  509.9 1767.7 4079.4  0.0  1.7    0.0    2.3   0 148 c3
   37.0   98.0  296.0  783.9  0.0  0.2    0.0    1.1   0  15 c3t50060E80104DE0C8d2
   41.0   91.0  327.9  727.9  0.0  0.2    0.0    1.3   0  16 c3t50060E80104DE0C8d1
   23.0   61.0  184.0  487.9  0.0  0.3    0.0    3.2   0  25 c3t50060E80104DE0C8d0
   38.0   99.0  304.0  791.9  0.0  0.3    0.0    2.4   0  27 c3t50060E80104DE0C8d5
   39.0   86.0  312.0  687.9  0.0  0.3    0.0    2.3   0  28 c3t50060E80104DE0C8d4
   43.0   75.0  343.9  599.9  0.0  0.4    0.0    3.8   0  37 c3t50060E80104DE0C8d3
...

Even if we compare the workload that favors RAID 5 (70% reads), the IOPS from the single RG (8D+8D) are slightly better (1605.31) than from the 6 x RG (5D+1P) (1598.27).
What do you think about these results? I expected much more from the 6 x RG (5D+1P) ...
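For what it's worth, the classic write-penalty arithmetic backs up that expectation (a rough model, ignoring cache and controller effects): at 70% reads each host I/O costs roughly 0.7 + 0.3 x 2 = 1.3 back-end ops on RAID 1+0 versus 0.7 + 0.3 x 4 = 1.9 on RAID 5. So the 16 spindles behind the mirror support about 16/1.3 = 12.3 drive-equivalents of host I/O, against about 36/1.9 = 18.9 for the 36 spindles behind the six RAID-5 groups, roughly a 50% advantage for the striped RAID-5 layout on paper.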
