
HNAS File System Expansion - Stripe Sets

Discussion created by Steve Wilde Employee on Apr 11, 2014
Latest reply on Jul 18, 2014 by Michael Ratner



All numbers are examples.


I have a customer who ran into an issue after creating a 100 TB storage pool. They created a single 10 TB file system. The file system chunks were automatically allocated across all eight underlying stripe sets that make up the storage pool (the SDs come from straight RAID LDEVs, not DP LDEVs). Some time later the file system was manually expanded by 10 TB. The new chunks were allocated to only one (or possibly two) stripe sets rather than all eight, which caused a significant performance issue.
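To make the imbalance concrete, here is a minimal sketch of the scenario above. It is not the actual HNAS allocator — the stripe-set names, 1 TB chunk size, and both allocation functions are assumptions for illustration — it only contrasts the even creation-time spread with an expansion that draws every new chunk from a single stripe set.

```python
# Illustrative model only (NOT the real HNAS chunk allocator).
# Assumptions: eight stripe sets named ss0..ss7, 1 TB chunks.
from collections import Counter

STRIPE_SETS = [f"ss{i}" for i in range(8)]  # eight stripe sets in the pool

def allocate_round_robin(n_chunks, stripe_sets):
    """Spread chunks evenly across all stripe sets (creation-time behavior)."""
    return [stripe_sets[i % len(stripe_sets)] for i in range(n_chunks)]

def allocate_single_set(n_chunks, stripe_set):
    """Allocate every chunk from one stripe set (the observed expansion behavior)."""
    return [stripe_set] * n_chunks

# 10 TB file system at creation: chunks land on all eight stripe sets.
layout = allocate_round_robin(10, STRIPE_SETS)
# 10 TB manual expansion: all new chunks come from one stripe set.
layout += allocate_single_set(10, STRIPE_SETS[0])

counts = Counter(layout)
print(counts)  # ss0 now holds 12 of the 20 chunks, so I/O to the new space hits one set
```

The skew is the point: after the expansion, half the file system's capacity sits on one stripe set, so writes into the new space hammer that set's spindles while the other seven sit comparatively idle.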


The recommendation I received was to create file systems at their maximum desired size, which in effect negates the ability to grow file systems. I also heard that the issue is avoided if you let the system grow file systems via auto-expansion rather than expanding them manually. A third recommendation was to use DP pools instead of straight RAID groups. I have nothing against that approach, but it doesn't really address the chunk allocation issue; it only mitigates its effect.
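One plausible reason auto-expansion would behave better is that it grows the file system in many small increments, and a chunk-at-a-time allocator that simply prefers the stripe set with the most free space will naturally rotate across sets. The sketch below is an assumption about such a greedy policy, not documented HNAS behavior; the function name, stripe-set names, and the equal 11 TB free-space figures are all hypothetical.

```python
# Illustrative greedy allocator (assumed policy, NOT documented HNAS behavior):
# each increment goes to the stripe set with the most free space.
import heapq

def greedy_allocate(n_chunks, free_tb):
    """Place each 1 TB chunk on the stripe set with the most free space."""
    heap = [(-free, name) for name, free in free_tb.items()]
    heapq.heapify(heap)
    placements = []
    for _ in range(n_chunks):
        neg_free, name = heapq.heappop(heap)          # fullest-free set first
        placements.append(name)
        heapq.heappush(heap, (neg_free + 1, name))    # 1 TB now consumed there
    return placements

# Equal free space on all eight stripe sets after the initial creation.
free = {f"ss{i}": 11 for i in range(8)}
placements = greedy_allocate(10, free)
print(placements)  # rotates: ss0, ss1, ..., ss7, ss0, ss1
```

Under this assumed policy, ten small increments land on ten (rotating) stripe-set slots, whereas one large 10 TB request satisfied from wherever contiguous space happens to be can concentrate on a single set — which would match both the observed manual-expansion behavior and the claim that auto-expansion bypasses it.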


1) Is this chunk allocation behavior normal?

2) If it is normal, is there a way to modify the behavior so that chunks are allocated to all underlying stripe sets when manually creating file systems?

3) Are there any other suggestions or thoughts?


Thanks for the thoughts.



Message was edited by: Michael Ratner