Migrating VMs between vSphere Datacenters

As part of a project consolidating mission-critical services, I am moving a few VMs between vSphere / vCenter datacenters. The key word here is “datacenters”: they are managed by different vCenter servers operating in linked mode. Because of this setup, the migration isn’t a simple cluster & storage vMotion.

Here’s the process I am following. I hope it helps; if you use another method, feel free to comment.

1. Enable SSH on an ESXi host in the source and destination clusters; on the source host, also open outbound SSH on the host firewall

  • In vSphere Client, go to the “Configuration” tab on each host
  • Under “Software” on the left side of the right pane, select “Security Profile”
  • In the top right under “Services”, click “Properties…”
  • Scroll down to “SSH” and click “Options…”
  • Select “Start and stop manually”, then click “Start” and return to the Security Profile page
  • On the source ESXi host, also click “Properties…” under “Firewall”
  • In Firewall Properties, check “SSH Client” and click “OK”
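
The same changes can be made from the host console or DCUI shell instead of the vSphere Client, if you prefer. A sketch using ESXi 5.x-era commands; verify the syntax against your build:

```shell
# Start the SSH service on the host (equivalent to the Security Profile steps)
vim-cmd hostsvc/start_ssh

# On the source host only: allow outbound SSH through the host firewall
esxcli network firewall ruleset set --ruleset-id sshClient --enabled true
```

These are host-local commands, so they only apply to the ESXi host you run them on; repeat on each host as needed.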


2. Create appropriate directories for the VM on the destination datastore(s)

  • In vSphere Client, browse the destination datastore(s)
  • Create a new directory with the name of the VM
  • If secondary disk(s) will be in other datastores, create directories there, too


3. Shut down the VM being migrated


4. SSH with your favorite client to the source ESXi host (I use PuTTY); optionally, open an additional session per extra VM disk


5. Secure Copy (scp) the files inside the source datastore(s) to the destination host/datastore(s)

  • Example command: scp /vmfs/volumes/datastore1/server1/*.* [email protected]:/vmfs/volumes/datastore2/server1/
  • If the VM disks are in multiple datastores, use the additional SSH sessions to run simultaneous copies, maximizing bandwidth and minimizing time (assuming one copy doesn’t fill the pipe)
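
scp relies on TCP checksums only, so after a long transfer it can be reassuring to compare checksums on both ends (md5sum is available in the ESXi shell). A minimal local sketch of the idea, using throwaway /tmp paths rather than real datastore paths:

```shell
#!/bin/sh
# Demo of post-copy verification; on real hosts you would run md5sum
# against the copied -flat.vmdk files on the source and destination.
src=/tmp/demo-src-flat.vmdk
dst=/tmp/demo-dst-flat.vmdk
echo "pretend disk contents" > "$src"
cp "$src" "$dst"                      # stand-in for the scp transfer
if [ "$(md5sum < "$src")" = "$(md5sum < "$dst")" ]; then
  echo "checksums match"
fi
```

Checksumming a multi-gigabyte flat disk takes a while, so you may only want to bother for the disks you care most about.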


6. Once the file transfer(s) are complete, edit the VM’s .vmx file to change the disk location(s)

  • SSH to the destination host
  • Change directory to the VM’s primary disk directory (e.g. cd /vmfs/volumes/datastore2/server1)
  • Note/copy the datastore identifier (the real name of the datastore; it looks like “123ab456-c7de89e0-12fab3c4567d”)
  • Edit the VM’s .vmx file using vi (e.g. vi server1.vmx)
  • Find the disk(s) and replace the old datastore identifier with the one noted above (in vi, press “i” to insert text, delete the old identifier, and type/paste the new one)
  • Save the .vmx file (in VI, type “:wq”)
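
If several disks need retargeting, a sed one-liner can make the substitution instead of editing each line in vi. A sketch with made-up identifiers and a /tmp stand-in for the .vmx file; the real identifiers come from your datastores, and it’s worth keeping a backup before editing:

```shell
#!/bin/sh
# Made-up identifiers for illustration only.
old="123ab456-c7de89e0-12fab3c4567d"   # source datastore identifier
new="456de789-f0ab12c3-45de6f7890ab"   # destination datastore identifier
vmx="/tmp/server1.vmx"

# Stand-in .vmx line; on the host this file already exists in the VM directory.
printf 'scsi0:0.fileName = "/vmfs/volumes/%s/server1/server1.vmdk"\n' "$old" > "$vmx"

cp "$vmx" "$vmx.bak"                   # keep a backup before editing
sed -i "s/$old/$new/g" "$vmx"
grep "$new" "$vmx"                     # confirm the swap took effect
```

The busybox sed on ESXi supports -i for in-place edits; on other systems the flag behaves slightly differently (notably BSD/macOS sed), so test on a copy first.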


7. In vSphere Client, inventory the VM

  • Browse the destination datastore and navigate to the VM’s directory
  • Find the .vmx file, right-click it, and click “Add to Inventory”
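
Since you already have an SSH session on the destination host, the VM can also be registered from the shell rather than the datastore browser. A sketch; the path below is an example:

```shell
# Register the VM with the host's inventory; prints the new VM id on success.
vim-cmd solo/registervm /vmfs/volumes/datastore2/server1/server1.vmx
```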


8. Change the VM’s network to the appropriate standard or distributed port group


9. Boot the VM and change its IP address to one in the destination network (unless it’s on the same layer 2 network)


10. Update any relevant DNS, firewall, etc records/rules


11. Remember to stop those services and close the firewall from Step 1!



  1. Phil said:

    Thanks, bookmarked.

    I’m trying to shift over a terabyte of templates between data centres without using intermediate storage and after trying a number of different methods yours has worked out as being the most practical.

    Just crossing my fingers that I don’t get too much competition for bandwidth!

    April 9, 2015
  2. Chris said:

    Glad it’s helped! I hear that on the bandwidth. It wouldn’t be pretty, but if it came down to it, you could try rate limiting at the source perimeter from your management interface (i.e. vmk0) IP address. Hopefully you can just let the healthy competition for the pipe fight it out until your templates reach their destination.

    April 10, 2015
  3. atlantonius said:

    Something that should probably be mentioned is that if your VM was thin-provisioned before, after the scp it will be thick-provisioned. You can change it back to thin-provisioned by cloning the disk with “vmkfstools -i <source>.vmdk -d thin <destination>.vmdk”, which creates a new thin-provisioned vmdk file.

    Also, IMHO, it makes more sense to create the new VM on the destination ESXi host first. This creates the directory and all the files for you, once you’ve started it at least once. After that, scp the vmdk over and replace what’s there.

    April 19, 2017
  4. Daniel Petcher said:

    Way back in the old days of slow links, I learned that it is often significantly faster to pull a large file than it is to push the same file. The reason is that the “verify phase” of writing the file happens locally, rather than over the slow link. Consider logging into the Destination server and initiating the SCP operation from that side of the link.

    June 12, 2018
