How To Sanitize An EMC Avamar Backup Grid

This week it was finally time to put our old EMC Avamar backup/DR grids out to pasture, and while I had removed most of the configurations from them already, I still needed to sanitize the disks. Unfortunately, a quick search of support.emc.com and Google revealed that “securedelete” operations on Avamar grids require EMC Professional Services engagements. Huh? I want to throw the thing away, not spend more money on it…

A few folks offered up re-initializing the RAID volumes on the disks as one way to prepare for decommissioning. That’s definitely one option. Another is to wipe the data from within the OS, which achieves much the same result but gives you a level of detailed assurance that the PowerEdge RAID Controller doesn’t (unless your PERC can be configured for repeated passes of random data).

Totally a side note: when I started this, I had the misconception that this method would preserve the OS and allow a second-hand user to redeploy it without returning to the EMC mothership. As you’ll see below, one of the partitions we wipe holds /home and the rest of the OS. :x

Under the covers, Avamar is stripped-down Linux (2.6.32.59-0.17-default GNU/Linux, as of Avamar 7.1), so standard Linux wiping tools provided the starting point. The one I chose, and now have running across 10 storage nodes and 30 PuTTY windows, is “shred”.

Shred is as simple as it sounds. It overwrites the target disk as many times as you tell it to. So for Avamar, how many disks is that?

[Screenshot: df output from an Avamar storage node]

The answer, at least for the Avamar data itself (backups, in particular), is three per node. In our setup, those are:

  • /dev/sda3
  • /dev/sdb1
  • /dev/sdc1

I verified this on each storage node and found that those three devices were always the same, mounted to /data01, /data02, and /data03, respectively.
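If you want to double-check that mapping on your own nodes before pointing shred at anything, a quick filter on the mounted data partitions shows which devices back them. This is just a sketch, so adjust the pattern if your grid mounts its data elsewhere:

df -h | grep /data0

The device in the left-hand column of each line is what you’ll be feeding to shred.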

Considering that each of those disks represents glorious near-line slowness, it’s in your best interest (and Father Time’s) to duplicate that PuTTY session for each disk and start shredding them in parallel.

sudo shred -v -n3 /dev/sda3
sudo shred -v -n3 /dev/sdb1
sudo shred -v -n3 /dev/sdc1
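One caveat with the window-per-disk approach: if a PuTTY session drops, its shred dies with it. An alternative sketch, assuming screen is available on your Avamar nodes (not a given on a stripped-down image, so check first) and using arbitrary session names, is to launch each pass detached and only reattach when you want to peek at progress:

sudo screen -dmS wipe_sda3 shred -v -n3 /dev/sda3
sudo screen -dmS wipe_sdb1 shred -v -n3 /dev/sdb1
sudo screen -dmS wipe_sdc1 shred -v -n3 /dev/sdc1

To check on one later, reattach with sudo screen -r wipe_sda3 and detach again with Ctrl-A, D.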

With each of those running in a separate session, you can move on and repeat the process on each of the other storage nodes in the grid. Based on the performance I’ve observed so far, it looks like each pass will take about five days to complete (the shot of one node below is after roughly 27 hours of progress). Hopefully your environment can get by with three or fewer passes if you choose this method; otherwise, you’ll have a long Avamar residency ahead of you.

[Screenshot: shred verbose progress on one storage node, roughly 27 hours in]
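If multi-day passes are more residency than you can stomach, shred can also do fewer random passes and finish with a pass of zeros so the disks at least read back blank. Whether a single random pass satisfies your disposal policy is a call for your security team, not this sketch:

sudo shred -v -n1 -z /dev/sda3

The -z simply tacks a final zero-fill pass onto whatever -n count you choose.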

2 Comments

  1. Deven said:

Hey Chris, I have a question. When you wiped the OS directory, how were you still able to continue shredding? Wouldn’t wiping the OS halt everything else?

    June 6, 2016
    • Chris said:

The shred is actually running in memory, so it allows you to cut the branch out from under you. It’s the last thing you’ll do, as you’ll be in a ghost directory when it completes (like the train station in The Matrix ;).

      June 6, 2016
