Hi,
I would like to ask the Community to share their experiences, if anyone has used the HDT feature in their environment. I am going to test HDT, but before I do I would like to know what people have experienced so far. The feedback points I am looking for:
- Response time degradation during page relocation?
- Monitoring HDT, and the challenges of costing it in a shared environment?
- Impact, if any, on running applications while pages are being relocated?
- With 42 MB chunk movement, what savings and performance benefits were found? I would say an even smaller relocation chunk would have been beneficial, accommodating more applications with less flash usage (see the rough sketch after this list).
- Policy configuration: individual-application policies or policies grouped by I/O profile, and the relocation cycle used?
- Any tuning capabilities used to change HDT behaviour midway?
- SATA pool write-verify causing a performance dip (every write involves a read to confirm the write happened)?
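To make my point about chunk size concrete, here is a rough back-of-the-envelope sketch (my own toy model, not anything from the HDT documentation) of how the relocation page size affects flash consumption when hot data is scattered. The hot-spot count and sizes are invented numbers purely for illustration, and the model simply assumes each hot spot lands in a different page.

# Toy model only (no HDS tooling involved): assumes hot data shows up as
# scattered hot spots, each smaller than a relocation page and each landing
# in a separate page, so every hot spot drags one whole page into flash.
MB = 1024 * 1024
GB = 1024 * MB

hot_spots = 2000          # hypothetical number of scattered hot areas
hot_spot_size = 8 * MB    # each hot area is smaller than any page size below

for page_size in (42 * MB, 256 * MB, 1024 * MB):
    flash_used = hot_spots * page_size          # one whole page per hot spot
    truly_hot = hot_spots * hot_spot_size       # data that is actually hot
    print(f"page {page_size // MB:>4} MB: flash used {flash_used / GB:7.1f} GB "
          f"for {truly_hot / GB:5.1f} GB of genuinely hot data")

The numbers are made up, but the shape of the result is the point: the larger the relocation chunk, the more cold data rides along into flash with each hot spot.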
I can Google it and there are something like 1000 pages of PDFs telling me all about it, but I wanted feedback from real production environments, from people who have actually used it and worked with it over a longer period, so the information is more practical than theoretical.
Even feedback on only a few of these points is fine, but I would appreciate a response.
Cheers
RB
Hello, Rahul.
I have worked fairly extensively with HDT in our training environment and have written technical courses on HDT. Are your applications really going to be that sensitive to I/O response times? With HDS storage, the I/O response largely comes out of the storage system's cache, so there is no or negligible impact on I/O performance with HDT. It is easy to configure and manage and is very flexible! You can change the pool configuration while the pool is in full operation. Here is where you need to be a bit careful, though: adding volumes to a pool, and especially shrinking a pool, can have an impact on performance. So while the pool continues to operate, it is wise to schedule physical pool changes carefully.
Pool monitoring is easy and robust, and is supported through both the GUI and the CLI. You can certainly configure proactive threshold warnings.
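As a rough illustration of the kind of proactive threshold check you could script on top of whatever reporting interface you use (the pool names, capacities, and threshold values below are made-up placeholders, not HDT defaults or actual CLI output):

# Minimal sketch of a proactive pool-usage check. In practice the pool
# figures would come from the storage CLI or management software; here they
# are hard-coded placeholders, and the thresholds are example values only.
WARNING_PCT = 70     # example early-warning threshold
DEPLETION_PCT = 80   # example "act now" threshold

pools = {
    "HDT_Pool_01": {"capacity_gb": 50_000, "used_gb": 38_500},
    "HDT_Pool_02": {"capacity_gb": 20_000, "used_gb": 9_200},
}

for name, p in pools.items():
    used_pct = 100.0 * p["used_gb"] / p["capacity_gb"]
    if used_pct >= DEPLETION_PCT:
        status = "expand pool / rebalance now"
    elif used_pct >= WARNING_PCT:
        status = "warning - plan capacity"
    else:
        status = "ok"
    print(f"{name}: {used_pct:5.1f}% used -> {status}")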
Let's see, Hitachi's choice of a 42 MB page size for migration is actually much smaller than some of the competition, where the relocation chunk size is 1 GB or more. There is no flexibility or choice on the 42 MB page size. Hitachi tends to make "balanced" choices in its internal architecture design, and in general these seem to work quite well for the majority of applications. (Of course, there are always exceptions.)
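To put the granularity in perspective, here is some quick arithmetic (just an illustration of page counts, not a statement about how HDT handles its internal metadata):

# Pages per terabyte at different relocation chunk sizes. Finer pages give
# finer relocation granularity, but mean more pages for the system to track.
# The 1 GB figure is simply a comparison point for competition-sized chunks.
MB = 1024 * 1024
TB = 1024 * 1024 * MB

for page_size_mb in (42, 1024):
    pages_per_tb = TB // (page_size_mb * MB)
    print(f"{page_size_mb:>4} MB pages -> about {pages_per_tb:,} pages per TiB")

Roughly 25,000 pages per TiB at 42 MB versus about 1,000 at 1 GB, which is the kind of balance between granularity and tracking overhead I mean.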
There is a lot of granular configuration and migration-policy management at the individual volume level. However, Hitachi's intent is to let HDT's own logic manage the environment! For many skilled, long-time administrators who are used to fine-grained control, handing this over to the system's logic is sometimes the most difficult challenge!
You have control over the SATA write-verify configuration, but it is an installation attribute of the physical RAID group in the storage system. It is a good idea to make sure it is set correctly for your needs when the HDDs are installed, as it is rather difficult (and disruptive to the RAID group) to change later.
I will look around to see if I can locate some more specific performance data that could be shared with you.
In the meantime, HDT, try it! I think you'll like it!
Janet Hutchison
HDS Academy
janet.hutchison@hds.com