I know there has been some discussion about this in the past, but this is the internet, and everyone loves another 'go around'. Take the opportunity to add your 5¢ worth to the topic.
I've never been able to get a clear idea of how big people think pools should be (the best answer I've received was "How much data do you want to recover?"). There may well be a physical limit (although I suspect none of us want to be hitting it!), but we have adopted a lower pool count with larger pools, some up to 180TB, to minimize management complexity. What approach have you taken, and why?
I know there is already a poll for HDP/HDT pool count (take a look at the content from Steven Ruby). We have gone down the smaller-count, larger-size path, with the aim of staying below a limit of 8 HDT pools in any sub-system. So what's the pool count per VSP at your site, and why have you chosen it?
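As a back-of-the-envelope sketch of the sizing approach above: given a sub-system's usable capacity and a chosen maximum pool size, how many pools result, and does that stay under the 8-pool limit mentioned? The capacity figures and helper names here are illustrative only, not taken from any real array.

```python
import math

def pools_needed(total_tb: float, max_pool_tb: float = 180.0) -> int:
    """Minimum number of pools to hold total_tb at up to max_pool_tb each."""
    return math.ceil(total_tb / max_pool_tb)

def within_limit(total_tb: float, max_pool_tb: float = 180.0, limit: int = 8) -> bool:
    """Does the resulting pool count stay within the per-sub-system limit?"""
    return pools_needed(total_tb, max_pool_tb) <= limit

# 1000 TB of usable capacity at up to 180 TB per pool:
print(pools_needed(1000))   # 6 pools
print(within_limit(1000))   # True - comfortably under the 8-pool limit
```

The same arithmetic run in reverse shows the trade-off: a hard cap of 8 pools at 180TB each tops out around 1440TB per sub-system before either the pool size or the pool count has to give.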
Once more, with a view to reducing management overhead, we have adopted a mixed-workload approach. Storage is allocated from pools as required, and we then aim to 'tune the pool' by adjusting the tiers to maintain the desired performance; this fits well with a lower pool count.
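The 'tune the pool' idea above could be sketched roughly as follows: compare each pool's observed response time against a target and flag whether tier capacity should grow or shrink. Everything here is invented for illustration (pool names, tier layout, the 5 ms target, the thresholds); real HDT tuning would work from the array's own monitoring data.

```python
from dataclasses import dataclass

@dataclass
class Pool:
    name: str
    tier1_tb: float        # e.g. flash (hypothetical layout)
    tier2_tb: float        # e.g. SAS
    tier3_tb: float        # e.g. NL-SAS
    avg_response_ms: float # observed average response time

def suggest_tuning(pool: Pool, target_ms: float = 5.0) -> str:
    """Coarse tuning suggestion for a mixed-workload pool (illustrative only)."""
    if pool.avg_response_ms > target_ms:
        return f"{pool.name}: above {target_ms} ms target - consider adding tier-1 capacity"
    if pool.avg_response_ms < target_ms * 0.5 and pool.tier1_tb > 0:
        return f"{pool.name}: well under target - some tier-1 capacity could be reclaimed"
    return f"{pool.name}: within target - no change"

pools = [
    Pool("POOL01", tier1_tb=20, tier2_tb=80, tier3_tb=80, avg_response_ms=7.2),
    Pool("POOL02", tier1_tb=10, tier2_tb=60, tier3_tb=110, avg_response_ms=2.1),
]
for p in pools:
    print(suggest_tuning(p))
```

The point of the sketch is that with fewer, larger mixed pools the tuning loop runs over a handful of objects rather than dozens, which is where the management saving comes from.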
How are you using your pools - application specific, workload specific, gold/silver/bronze? What trade-offs between management, performance and risk do you consider?
My first swerves around spinning but makes tracks within disk.
My second froze out solid but lies down in state.
My third spools from tape but isn't mounted in drive.
My fourth dazzles in flash but was forgotten in memory.
Whatever you do, I will only get bigger.