
Limits on HDS 9585

Discussion created by Legacy HDS Forums on Sep 19, 2007
Latest reply on Feb 15, 2008 by Legacy HDS Forums

Originally posted by: jesseb



Hi all,

A question about our HDS 9585:

What is the largest LUN one should create, and are there downsides to creating larger LUNs?

Here is the situation: we currently have five 3.2 TB cluster file systems built on top of 6+1 RAID5 groups.

The RAID groups are 1.6 TB, mostly split into 800 GB LUNs (some into smaller ones) and then reassembled with the volume manager on the host (Linux). Due to some unplanned node sprawl we now have far more hosts in our Veritas cluster than originally anticipated, and we have started seeing SCSI errors on the hosts.

We are fairly certain these are the OS responding to QUEUE FULL conditions from the array, since our (LUNs x hosts x HBA queue depth) calculation puts us well past the 512-command queue depth per port on the 9585.
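For what it's worth, here is roughly the back-of-the-envelope math behind that claim. The host count, LUNs-per-port figure, and per-LUN HBA queue depth below are illustrative placeholders, not our exact configuration:

```python
# Rough worst-case check of array port command queue usage.
# All input numbers are illustrative placeholders, not our exact config.

ARRAY_PORT_QUEUE_DEPTH = 512   # per-port command queue limit on the 9585

luns_per_port = 20             # LUNs presented through one array port (assumed)
hosts = 8                      # cluster nodes logged into that port (assumed)
hba_queue_depth = 32           # per-LUN queue depth setting on each HBA (assumed)

# Worst case: every host can have hba_queue_depth commands outstanding
# against every LUN it sees on this port at the same time.
worst_case_outstanding = luns_per_port * hosts * hba_queue_depth

print(f"worst-case outstanding commands per port: {worst_case_outstanding}")
print(f"array port limit:                         {ARRAY_PORT_QUEUE_DEPTH}")

if worst_case_outstanding > ARRAY_PORT_QUEUE_DEPTH:
    over = worst_case_outstanding / ARRAY_PORT_QUEUE_DEPTH
    print(f"oversubscribed by roughly {over:.0f}x -> QUEUE FULL is plausible")
```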

As a first step toward fixing this, we are reclaiming an extra port that had been used by our NetApp gFiler and putting it to work for host access instead, and we are considering lowering the queue depth setting on our HBAs. A better long-term fix (we think) would be reformatting our RAID groups into larger LUNs: fewer LUNs mapped per port on the array should fix the queue exhaustion problem, but will it create other problems?
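To size those two knobs we have been turning the same formula around. Again, the host count and LUN count below are placeholders rather than our exact numbers, and this is worst-case arithmetic only:

```python
# Turn the formula around: given a host count, what LUN count or HBA
# queue depth keeps the worst case under the 512-per-port limit?
# Numbers are placeholders, not our exact config.

ARRAY_PORT_QUEUE_DEPTH = 512
hosts = 8                      # cluster nodes per array port (assumed)

# Option 1: keep the HBA queue depth, reduce LUNs per port (bigger LUNs).
hba_queue_depth = 32
max_luns_per_port = ARRAY_PORT_QUEUE_DEPTH // (hosts * hba_queue_depth)
print(f"at queue depth {hba_queue_depth}: at most {max_luns_per_port} LUNs per port")

# Option 2: keep the LUN count, lower the HBA queue depth instead.
luns_per_port = 20
max_queue_depth = ARRAY_PORT_QUEUE_DEPTH // (hosts * luns_per_port)
print(f"with {luns_per_port} LUNs per port: queue depth must be <= {max_queue_depth}")
```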

We have also noticed that our read/write ratio is lower than expected (about 8:1), so we are also considering RAID10 instead of RAID5.
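Part of the reasoning on RAID10 is the usual back-end write penalty (4 disk I/Os per random write on RAID5 vs 2 on RAID10). With an 8:1 read:write mix it works out roughly like this; the host IOPS figure is purely illustrative:

```python
# Compare back-end disk I/Os for RAID5 vs RAID10 under an 8:1 read:write mix.
# Uses the standard write penalties (RAID5: 4 disk I/Os per random write,
# RAID10: 2). The host IOPS number is illustrative, not measured.

host_iops = 900
reads = host_iops * 8 / 9      # 8 reads for every 1 write
writes = host_iops * 1 / 9

backend_raid5 = reads + writes * 4    # read-modify-write parity update
backend_raid10 = reads + writes * 2   # write to both mirror halves

print(f"RAID5  back-end IOPS: {backend_raid5:.0f}")
print(f"RAID10 back-end IOPS: {backend_raid10:.0f}")
```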

Any thoughts would be appreciated.
regards, Jesse
