This week it was finally time to put our old EMC Avamar backup/DR grids out to pasture, and while I had removed most of the configurations from them already, I still needed to sanitize the disks. Unfortunately, a quick search of support.emc.com and Google revealed that “securedelete” operations on Avamar grids require EMC Professional Services engagements. Huh? I want to throw the thing away, not spend more money on it…
A few folks offered up re-initializing the RAID volumes on the disks as one way to prepare for decommissioning. That's definitely one option. Another is to wipe the data from within the OS, which has much the same result but provides a level of assurance that the PowerEdge RAID Controller doesn't give (unless your PERC can be configured for repeated passes of random data).
Totally a side note: when I started this, I had the misconception that this method would preserve the OS and allow a second-hand user to redeploy it without returning to the EMC mothership. As you’ll note below, one of the paths we wipe is the location of /home and the rest of the OS. :x
Under the covers, Avamar is a stripped-down SUSE Linux (as of Avamar 7.1), so that provided the starting point. The tool I chose, and now have running across 10 storage nodes and 30 PuTTY windows, is "shred".
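Before pointing shred at real disks, a quick run against a scratch file shows what the flags used below actually do (the temp file is just a stand-in for a device; nothing about the invocation changes except the target path):

```shell
# Safe demo on a scratch file (a stand-in for a real block device).
# -v prints per-pass progress; -n 3 makes three passes of random data.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=4 status=none
shred -v -n 3 "$f"    # progress lines ("pass 1/3 (random)...") go to stderr
rm -f "$f"
```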
Shred is as simple as it sounds: it overwrites the target disk with as many passes as you tell it to. So for Avamar, how many disks is that?
The answer, at least for the Avamar data itself (the backups, in particular), is three per node. In our setup, those are:

/dev/sda3
/dev/sdb1
/dev/sdc1
I verified this on each storage node and found that those three devices were always the same, mounted at /data01, /data02, and /data03, respectively.
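To double-check the device-to-mountpoint mapping on your own nodes before shredding anything, df will show the backing device for each mount (the /dataNN paths here are the ones from our grid; adjust if yours differ):

```shell
# Print the backing device for each Avamar data partition.
# Mount points are from our grid; substitute your own if they differ.
for m in /data01 /data02 /data03; do
  df -P "$m" | tail -n 1 | awk '{print $1, "->", $6}'
done
```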
Considering that each of those disks represents glorious near-line slowness, it's in your best interest (and Father Time's) to duplicate that PuTTY session for each disk and start shredding them in parallel.
sudo shred -v -n3 /dev/sda3
sudo shred -v -n3 /dev/sdb1
sudo shred -v -n3 /dev/sdc1
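If you'd rather not babysit three PuTTY windows per node, the same three commands can be wrapped in a small helper that backgrounds each pass and captures the -v progress in a log file. This is a sketch of my own; the function name and log location are arbitrary choices, not anything Avamar ships:

```shell
# Start a background shred (three random passes) for each listed device,
# logging the -v progress to /tmp, then block until every pass finishes.
# Run as root on a storage node.
shred_all() {
  for dev in "$@"; do
    nohup shred -v -n3 "$dev" > "/tmp/shred-$(basename "$dev").log" 2>&1 &
  done
  wait    # returns once all passes on this node are done
}
```

Usage on one of our storage nodes would then look like `shred_all /dev/sda3 /dev/sdb1 /dev/sdc1`, and `tail` on the /tmp logs shows how far along each disk is.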
With each of those running in a separate session, you can move on and repeat the process on each of the other storage nodes in the grid. Based on the performance I've observed so far, each pass will take about five days to complete (the shot of one node below is after about 27 hours of progress). Hopefully your environment can get by with three or fewer passes if you choose this method; otherwise, you'll have a long road of Avamar residency ahead of you.