HP 3PAR: AO Update…Sorta

I wish there were an awesome update that I’ve just been too preoccupied to post, but it’s more of a “well. . . .” After talking with HP/3PAR folks a couple of months back and re-architecting things again, our setup is running pretty well in a tiered config, but the caveats in the prior post remain. Furthermore, there are a few stipulations that I think HP/3PAR should share with customers, or that customers should weigh themselves, before buying into the tiered concept.

  1. Critical mass of each media type: Think of it like failover capacity (in my case, vSphere clusters). If I have only two or three hosts in my cluster, I have to leave 33–50% of capacity free on each to handle the loss of one host. But with five hosts I only have to leave 20% free, and with ten hosts, just 10%. Tiered media works the same way, though it feels uber wasteful unless you have a ton of stale/archive data. Our config included only 24 near-line SATA disks (and our tiered upgrade to our existing array had only 16). While that adds 45TB+ of capacity, realistically those disks can only handle between 1,000 and 2,000 IOPS (see the first sketch after this list). Tiering (AO) considers these things, but it seems a little underqualified when it comes to virtual environments. Random seeks are the enemy of SATA, and when AO throws tiny chunks of hundreds of VMs onto only two dozen SATA disks (then subtract RAID/parity), things can get bad fast. I’ve found this to be especially true with OS files: Windows leaves quite a few of them alone after boot…so AO moves them down. Now run some maintenance and reboot those boxes–ouch! The nutshell is that media like SATA/NL have a critical-mass quantity (in my opinion) and should be sold in sets of at least 32 disks, or maybe even 64. By scaling out to that (and ignoring the awesomely wasted capacity it gives you), you can safely tier even often-stale-yet-sometimes-hot data (like those OS files) and survive. Of course, at that point/qty, SAS/FC might be better :).
  2. Disk space usage alerts: This one is more of an annoyance, but if you use AO, especially with SSD/flash, you’ll find that you either have to waste chunks of raw storage or suffer through alerts whenever AO fills a storage type past 85%. For FC/SAS, I like to keep at least 15% free anyway, so that’s not really an issue, and with NL/SATA we’d crash from latency long before we ever filled those disks. SSD/flash, though, is pricey, so I like to use as much of it as is safely possible, which means we get e-mail alerts daily if I let AO have its way with the storage. So…I’ve cut back my allocations so that AO can only max out SSD at about 80% (see the second sketch after this list). That’s some pretty expensively wasted I/O…or a lot of irrelevant alerts.
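
To put rough numbers on the critical-mass point in item 1, here’s a back-of-envelope sketch in Python. The per-spindle IOPS figure is an assumed ballpark for a 7.2k NL/SATA disk, not an HP/3PAR-published number:

```python
# Back-of-envelope math behind item 1. The spindle constant is an
# assumption for illustration, not a vendor-published figure.

NL_IOPS_PER_SPINDLE = 75  # assumed random IOPS for one 7.2k NL/SATA spindle

def failover_reserve(hosts: int) -> float:
    """Fraction of capacity each host must keep free to absorb one host loss."""
    return 1.0 / hosts

def nl_tier_raw_iops(spindles: int) -> int:
    """Rough random-IOPS ceiling for an NL tier, before RAID/parity overhead."""
    return spindles * NL_IOPS_PER_SPINDLE

for hosts in (3, 5, 10):
    print(f"{hosts:>2} hosts -> keep {failover_reserve(hosts):.0%} free per host")

for spindles in (16, 24, 32, 64):
    print(f"{spindles:>2} NL spindles -> ~{nl_tier_raw_iops(spindles):,} random IOPS (pre-RAID)")
```

At 24 spindles that works out to roughly 1,800 random IOPS before parity overhead, which is why a few hundred rebooting VMs landing on the NL tier hurts; at 32 or 64 spindles there’s enough aggregate headroom to absorb the occasional hot spell.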
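
And a quick sanity check on the alert math in item 2. The tier size here is hypothetical; the 85% alert line and the ~80% AO ceiling are the figures from item 2 above:

```python
# What capping AO at ~80% of SSD costs versus running up to the 85%
# alert threshold. The tier size is hypothetical.

SSD_TIER_TB = 4.0        # hypothetical usable SSD tier size
ALERT_THRESHOLD = 0.85   # space-usage level that triggers the daily e-mails
AO_CEILING = 0.80        # where I cap AO's SSD allocation to keep alerts quiet

print(f"Usable at the alert line : {SSD_TIER_TB * ALERT_THRESHOLD:.2f} TB")
print(f"Usable at the AO ceiling : {SSD_TIER_TB * AO_CEILING:.2f} TB")
print(f"Stranded to stay quiet   : {SSD_TIER_TB * (ALERT_THRESHOLD - AO_CEILING):.2f} TB "
      f"({ALERT_THRESHOLD - AO_CEILING:.0%} of the tier)")
```

Five percent of the tier isn’t much on paper, but on the most expensive (and fastest) media in the box, it stings.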

All that said, the array is solid, and with a conservative AO config (less NL than you might want, and less SSD than you wish you could quietly use), you’ll be in great shape and back to easy, mostly hands-off management. I have high hopes for the next version of AO and System Reporter, but for now, they are just that–hopes.
