 Gx00 NAS and HDT

Conrad Lombard posted 07-31-2019 01:20

We are currently looking at beefing up the performance of our Gx00 fleet to provide fast Commvault backups to a NAS share (built-in modules). Trawling through the partner portal, there seems to be a lot of documentation and guides on how to build a NAS tiered pool and work with HDP, but no guides on laying out SDs and tags on HDT pools.

Part of our strategy is to boost the SSD component to 400 TB, with the remaining 600 TB on SAS. Obviously we would lose too much space if we confined SSD to T0 in the NAS.

Can someone point me to a best practices guide for using SDs on HDT? Or how do I work out the tags for each SD and let HDT figure out the rest?

Thanks!

Conrad


#HitachiVirtualStoragePlatformVSP
#HitachiUnifiedStorageHUS
#HitachiNetworkAttachedStorageNAS
#FlashStorage
Albert Hagopian

Thanks to Salesforce, I'm able to triangulate you to a recent escalation and can see that you've created a tiered filesystem and have pinned DP-Vols from tier 1 of the HDP pool to T0 of the TFS. And the HDT pool is 3-tier; I'm not sure I've seen a three-tier HDT pool using HNAS as a Commvault backup target. In the last year we've had some great success stories with simple R10/NL-SAS HDP pools in CV backup environments (but that's just FYI). The span layout is often a contributing factor in HNAS performance because of the multiple stripeset expansions of 4 SDs. Some of this can be mitigated by using higher queue depths (tags) from the HNAS (man sd-set).
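
To make the queue-depth point concrete, here is a minimal Little's Law sketch of how many IOPS a single SD can sustain at a given number of outstanding tags; the latency and tag values are illustrative assumptions, not measurements from this environment.

```python
# Little's Law: concurrency = throughput x latency, so the IOPS one SD can
# sustain is roughly (outstanding tags) / (average service time).
def sustainable_iops(tags: int, latency_ms: float) -> float:
    """Rough upper bound on IOPS one SD can drive with `tags` outstanding I/Os."""
    return tags / (latency_ms / 1000.0)

# Illustrative numbers only: 10 ms average service time on the backing storage.
for tags in (8, 16, 32):
    print(f"{tags:>2} tags -> ~{sustainable_iops(tags, 10.0):,.0f} IOPS per SD")

# With 4 SDs per stripeset, aggregate parallelism scales with tags x SD count,
# which is why deeper queues can hide some of the stripeset expansion penalty.
```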

There are no best practices (for HNAS) when using HDT, particularly when using a 3-tier pool.

Are your plans to eliminate a tier in the storage?

Why did you decide on a 3-tier HDT pool in the first place?

That's probably more questions asked than succinct answers given.

Please engage your local account team so they can formulate additional Q&A and thoughts for internal consumption.


Rainer Ferchland

Hello, I hope I understood your question.

We built a pool on the storage side containing 3 tiers.

Let the storage side do the work. You can create tiering schedules and policies, and use "Active Flash".

Peng Yu

I have a unified HNAS G600 environment that is using a 3-tier pool. So far it looks like HDT can handle the performance, doesn't it?

- SSD tier: 98% of space used, handling about 52% of the IOPS.

- SAS tier: 90% of space used, handling about 13% of the IOPS.

- NL-SAS tier: 56% of space used, handling about 31% of the IOPS.

Maybe we need more space on the SAS tier so it can handle more of the workload.


Albert Hagopian

If space is running out on T0, you'd likely want more there as well.

It would be interesting to "look" at the HNAS (performance info report) and see how the NAS is responding to the array. It's great to hear that a 3-tier HDT is going well for your workload.


Greg Loose

Colin,

Look at the Frequency Distribution Graph, which should be available from the Tier Properties window. This shows how much capacity produces which level of workload and is a good indicator of how big each tier should be (and how many tiers you should have).

You could also look at the tier performance utilization (i.e. PG % busy) to get an idea of the actual effect on each tier.
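
As a concrete illustration of reading that graph, here is a minimal Python sketch (not a Hitachi tool; the capacities, IOPH values, and tier cut-offs are assumptions for illustration) that turns a page-level IOPH distribution into suggested per-tier capacities:

```python
# Hypothetical frequency-distribution buckets: (capacity_gb, ioph) pairs, i.e.
# how much pool capacity sits at a given IO-per-hour rate. Values are made up.
buckets = [(0.5 * 1024, 15), (17.5 * 1024, 4), (60 * 1024, 0.5)]

# Illustrative cut-offs: >=10 IOPH suits flash, 1-10 IOPH suits SAS, <1 NL-SAS.
FLASH_MIN_IOPH, SAS_MIN_IOPH = 10, 1

tier_capacity_gb = {"flash": 0.0, "sas": 0.0, "nl-sas": 0.0}
for capacity_gb, ioph in buckets:
    if ioph >= FLASH_MIN_IOPH:
        tier_capacity_gb["flash"] += capacity_gb
    elif ioph >= SAS_MIN_IOPH:
        tier_capacity_gb["sas"] += capacity_gb
    else:
        tier_capacity_gb["nl-sas"] += capacity_gb

for tier, gb in tier_capacity_gb.items():
    print(f"{tier:>7}: ~{gb / 1024:.1f} TB of capacity matches this tier's workload")
```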

Greg Loose

By design, T0 should always be full (unless the other tiers are already empty and there is not even enough data to fill T0, or unless tier policies are in use to artificially limit how much data from each volume is allowed in T0). Data is not prevented from moving up to a higher tier just because it has low access, but it will be held down to a lower tier if other data with higher access has already filled the higher tier.
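
As a simplified sketch of that fill-from-the-top behaviour (a toy model, not Hitachi's actual relocation algorithm; the tier names and capacities below are made up), placement can be thought of as ranking pages by IOPH and filling the highest tier first:

```python
# Toy model of HDT-style placement: sort pages hottest-first and pour them into
# tiers in order, so T0 stays full whenever there is enough data to fill it.
def place_pages(pages, tier_capacities_gb):
    """pages: list of (page_id, capacity_gb, ioph); tiers listed hottest-first."""
    placement = {tier: [] for tier in tier_capacities_gb}
    free = dict(tier_capacities_gb)
    for page_id, capacity_gb, ioph in sorted(pages, key=lambda p: p[2], reverse=True):
        for tier in tier_capacities_gb:
            if free[tier] >= capacity_gb:
                placement[tier].append(page_id)
                free[tier] -= capacity_gb
                break
    return placement

# Even the modestly accessed page "b" lands in T0 because T0 is filled first;
# "c" and "d" are only held down because T0 has run out of space.
demo = [("a", 10, 50), ("b", 10, 5), ("c", 10, 2), ("d", 10, 0.1)]
print(place_pages(demo, {"T0": 20, "T1": 20, "T2": 40}))
```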

Peng Yu

Actually, the HNAS is not busy, but the response time was bad before we had the SSD and SAS tiers, due to an initial design problem. NL-SAS was the nightmare. We could see disk latency of around 10-15 ms from the HNAS side.

Later on, it was suggested to add SSD on top. A year later, it looked like nothing had improved. I would have preferred more SAS rather than SSD if I had been here at that time.

Since we decided to put three parity groups of SAS in as the middle tier, we have seen the improvement. We now see fewer disk latency readings over 10 ms than in the past.


Albert Hagopian

>>NL-SAS was the nightmare.

Indeed, we've had several customers add a higher tier expecting immediate results, which is not often the case (at least with an HNAS workload). Very happy to see that things are better after adding more SAS to the middle tier; unfortunately, that's really all I can discuss without looking deep into the bowels of the HNAS performance data and triangulating to the storage subsystem.

Peng Yu

From the Tier Properties window, it shows that the space we already have in each tier is enough, doesn't it?


From the parity group utilization view, nothing looks busy, but I believe IO response time will be high as soon as data needs to be retrieved from the NL-SAS parity groups, since most of the IO is random reads.


The IO response times look similar in the HNAS disk latency stats and in the performance monitoring of the V-VOLs.


Greg Loose

There is not quite enough resolution to see the detail, but for ballpark numbers: the frequency distribution plot (mostly looking at the blue line) shows that you have less than 1/2 TB of pages in the pool with an IOPH (IO per hour, weighted average) above 10, which could benefit significantly from being on flash. The next 17.5 TB of capacity has an IOPH in the range of 1-10 and is appropriate for SAS (but would perform better on flash), and the rest has an IOPH of 1 or less and is appropriate for NL-SAS (but would perform better on SAS or flash).

And if you look at just the T2 line, you can see that most of the capacity on it is below 1 IOPH. In fact it even looks like T1 is big enough that some ~1 IOPH pages rank high enough to get in T1.

Now, it could be that even though individual pages in a tier are not getting heavily accessed (which would raise their IOPH and get them promoted to a higher tier), a wide access range is still keeping the tier busy, just not in any specific pages; that's why you also have to look at the PG utilization. And if low-IOPH pages are not accessed frequently, but when they are the accesses are bunched together in a spike, you will feel it that much more when they sit on a lower tier.

So it does look like your plan to grow T2 is probably appropriate, because you are still seeing response times that are too high (for your preference) for some data, and it sounds like that is a higher priority than saving money on cheap NL-SAS. And although performance would be better if you expanded T1 flash, it is not worth the ROI versus SAS. You already have T1 big enough (probably bigger than the cost justifies), but there is justification (ROI) for having some flash.