How To Sanitize An EMC Avamar Backup Grid

This week it was finally time to put our old EMC Avamar backup/DR grids out to pasture, and while I had already removed most of the configuration from them, I still needed to sanitize the disks. Unfortunately, a quick Google search revealed that “securedelete” operations on Avamar grids require an EMC Professional Services engagement. Huh? I want to throw the thing away, not spend more money on it…

A few folks offered up re-initializing the RAID volumes as one way to prepare for decommissioning. That’s definitely one option. Another is to wipe the data from within the OS, which achieves much the same result but provides a degree of verifiable assurance that the PowerEdge RAID Controller doesn’t give (unless your PERC can be configured for repeated passes of random data).

Totally a side note: when I started this, I was under the misconception that this method would preserve the OS and allow a second-hand user to redeploy the grid without returning to the EMC mothership. As you’ll note below, one of the partitions we wipe holds /home and the rest of the OS. :x

Under the covers, Avamar is a stripped-down Linux (GNU/Linux, as of Avamar 7.1), so that provided the starting point. The tool I chose, and that I now have running across 10 storage nodes and 30 PuTTY windows, is “shred”.

Shred is as simple as it sounds: it overwrites the target disk with as many passes as you want. So for Avamar, how many disks is that?
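As a minimal illustration of the invocation (demonstrated here on a scratch file so it’s safe to try anywhere; on the grid you point it at a device node instead):

```shell
# Create a 64 KB scratch file to stand in for a disk partition.
dd if=/dev/zero of=scratch.bin bs=1K count=64 2>/dev/null

# -v: print progress for each pass, -n 3: three passes of random data.
# shred overwrites the target in place; it does not delete or shrink it.
shred -v -n 3 scratch.bin
```

After this runs, scratch.bin is the same size but full of random data instead of zeros.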


The answer, at least for the Avamar data itself (backups, in particular), is three per node. In our setup, those are:

  • /dev/sda3
  • /dev/sdb1
  • /dev/sdc1

I verified on each storage node, but found that those three were always the same and mounted to /data01, /data02, and /data03, respectively.
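One quick way to do that verification yourself (a sketch; the /data01–/data03 mount points are what this environment used, and yours may differ) is to read the device-to-mountpoint mapping straight out of /proc/mounts on each node before shredding anything:

```shell
# Print each mounted /dataNN partition and the block device behind it.
# On a storage node this should emit lines like "/dev/sda3 /data01";
# run it on every node and confirm the device names before wiping.
awk '$2 ~ /^\/data[0-9]+$/ {print $1, $2}' /proc/mounts
```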

Considering that each of those disks represents glorious near-line slowness, it’s in the best interest of yourself and Father Time to duplicate that PuTTY session for each disk and start shredding them in parallel.

sudo shred -v -n3 /dev/sda3
sudo shred -v -n3 /dev/sdb1
sudo shred -v -n3 /dev/sdc1
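If you’d rather keep everything in one session than juggle a PuTTY window per disk, the same three shreds can be backgrounded and awaited. A sketch, demonstrated on throwaway files so it’s safe to try; on the grid you would substitute the real device nodes:

```shell
# Stand-in targets; on an Avamar storage node these would be
# /dev/sda3, /dev/sdb1, and /dev/sdc1.
targets="demo1.img demo2.img demo3.img"

# Create small dummy files to wipe.
for t in $targets; do
    dd if=/dev/zero of="$t" bs=1M count=2 2>/dev/null
done

# Launch one shred per target in the background so all three
# grind away in parallel, then wait for every pass to finish.
for t in $targets; do
    shred -v -n3 "$t" &
done
wait
```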

With each of those running in a separate session, you can move on and repeat the process on each of the other storage nodes in the grid. Based on my observed performance so far, it looks like each pass will take five days to complete (the shot of one node below is after about 27 hours of progress). Hopefully your environment can get by with three or fewer passes, if you choose this method. Otherwise, you’ll have a long road of Avamar residency ahead.



  1. Deven said:

    Hey Chris, I have a question. When you wiped the OS directory how were you still able to continue shredding? Wouldn’t wiping the OS halt everything else?

    June 6, 2016
    • Chris said:

The shred is actually running in memory, so it allows you to cut the branch out from under yourself. It’s the last thing you’ll do, as you’ll be in a ghost directory when it completes (like the train station in The Matrix ;).

      June 6, 2016
  2. Sean said:

    What if the goal is to keep the OS and just delete all the backups stored in the grid? You want the system to remain operational so someone else can use it to back up and restore stuff; you just don’t want your backups to be recoverable from the system.

    I assume just using the GUI, searching for every backup, and deleting it isn’t good enough, because I have seen blogs on recovering deleted backups from Avamar.

    This would be preferable if you were donating it, for example, to your local school district.

    April 2, 2017
    • Chris said:

      Good thoughts, Sean. Mine was going back on lease, so operability wasn’t key, but the time required and assured security were. You’d probably have to find an EMC field guide on returning it to factory defaults to achieve your goal.

      April 2, 2017
