Network Attached Storage


How to increase HNAS performance using SSD disks?

Sergio Garcia posted 10-20-2021 23:07
Hello. I have a VSP G400 Unified with the NAS module. I have one pool for HNAS built from 2 parity groups of 6D+2P @ 10 TB NL-SAS (HDP). This pool is performing slowly and the customer has asked us to improve performance. We proposed adding 2 × 6D+2P @ 1.9 TB SSD (20 TiB) to convert the HDP pool to an HDT pool, but our professional services engineer said it is not possible because the "stripeset is 8 volumes", so the minimum capacity we could grow by is 53 TiB, i.e. 4 × 6D+2P @ 1.9 TB SSD (which is too expensive in SSD and would probably become a concession). Can someone please confirm whether there is a way to configure the new pool using 2 PG SSD + 2 PG NL-SAS?

I am sharing a screenshot of the HNAS configuration:

Thank you for your help.
Sergio Garcia
Patrick Allaire

Dear VSP G400 Unified user,


File system fragmentation over time is typical; this change usually taxes an HDD storage pool as I/O evolves from mainly sequential at initial provisioning to a more random pattern.

Assuming performance statistics confirm the system is bottlenecked on random I/O, adding an SSD tier may improve overall performance, particularly for metadata.

Since your question referenced an HDP-to-HDT pool change (seemingly a tiered file system approach via expanding a stripeset), I thought the explanation below would be helpful before moving beyond HDP best practice for HNAS.

It is possible to upgrade a thin provisioned HDP pool to a tiered (HDT) pool, but this option may not provide the expected benefits!

For OLTP/database workloads, it is typical to see higher I/O density on the subset of HDP pages that store metadata, indexes, or logs. A tiered pool uses access statistics to promote the most active pages to the higher-performance storage tier. This back-end optimization works well because write I/O performs an in-place overwrite (change-modify) of the data, so the same pages stay hot.

 

But with NAS, overwrite I/O appends the change-modify result to the next available block, by design of the NAS pool, which renders the HDP page access profile useless for promoting heavily used data. Consequently, HDT pools are not recommended with NAS; best practice is to use a tiered file system approach instead.
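To make the point above concrete, here is a toy Python simulation (not HNAS code; the page counts, skew, and tier size are illustrative assumptions). It applies the same skewed write workload under two placement policies: in-place overwrite, where a hot logical block always lands on the same back-end page, and append-on-write, where every overwrite lands on the next free page. It then measures how much of the I/O a 5% "promoted" tier would actually capture.

```python
import random

random.seed(42)
PAGES = 1000       # back-end HDP pages (illustrative)
IOS = 100_000      # write I/Os; 90% aimed at a hot 5% of the logical blocks

def hot_block():
    # 90/10 skew: most writes hit logical blocks 0-49 (5% of 1000)
    return random.randrange(50) if random.random() < 0.9 else random.randrange(1000)

# In-place overwrite (OLTP-style): logical block N always maps to page N,
# so hot blocks accumulate heat on the same pages.
inplace = [0] * PAGES
for _ in range(IOS):
    inplace[hot_block()] += 1

# Append-on-write (NAS-style): same workload, but each overwrite is
# appended to the next available page, so placement ignores the skew.
append = [0] * PAGES
cursor = 0
for _ in range(IOS):
    hot_block()                     # workload skew exists but is invisible
    append[cursor % PAGES] += 1
    cursor += 1

def top_tier_share(counters, tier_frac=0.05):
    # Fraction of total I/O captured if the hottest 5% of pages are promoted.
    hottest = sorted(counters, reverse=True)[: int(len(counters) * tier_frac)]
    return sum(hottest) / sum(counters)

print(f"in-place : {top_tier_share(inplace):.0%} of I/O lands on promoted pages")
print(f"append   : {top_tier_share(append):.0%} of I/O lands on promoted pages")
```

With in-place overwrite, promoting the hottest 5% of pages captures roughly 90% of the write traffic; with append-on-write the heat is smeared evenly, so the same promotion captures only about 5%, which is why HDT page statistics add little value for NAS pools.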

 

That said, for cold archive data (i.e. low change rate, mainly read I/Os), an HDT pool is likely a valid solution. There are several possible approaches:
      • Add the SSDs to the existing HDP pool and enable HDT/Active Flash.
        • If thin-provisioned DP-Vols exist and the SDs still have room after adding the capacity, there is no need to expand the stripeset.
        • If the DP-Vols are not thin provisioned, simply expand the stripeset by the same capacity as the original SDs, using the same quantity (8).
      • Add the SSDs to a different HDP pool, then expand the span and create a tiered file system; this requires downtime to convert from non-tiered to tiered.
      • A combination of the two methods: add the SSDs to the HDT/Active Flash pool, pin DP-Vols to the SSD tier, and convert to a tiered file system.
      • To determine whether the tiered file system use case would be beneficial, you can run "fs-analyze-data-usage" on the file system.


While Hitachi best practices associated with new stripesets are intended for optimal performance, they are not applicable when converting an untiered storage pool to a tiered storage pool (i.e. using the span-tier command), so you can experiment with a much smaller amount of flash capacity. For optimal results, consider a mirrored SSD parity group instead of RAID-6; see the best practices for tiered file systems for further details.

 

Looking forward to hearing which option(s) you elect.


Patrick Allaire