Clone a machine with dd


Sometimes you want to test software on a machine before installing it on that particular machine. You could use snapshots, or clone the system. For big upgrades such as Debian dist-upgrades, I prefer to use a cloned system to test and see what challenges lie ahead.


These notes are set up as work notes/draft, so the sequence of actions may not be overly logical. Yet they might be useful to someone.

1. Copy disk to a new server using dd


Using dd on a live system can result in corrupt image files. While a system is running, data changes: if data is written to the filesystem while dd is copying that region, the image ends up with a partial write, and restoring from such an image may lead to unexpected results.


It's a good idea to unmount the drive before copying the data. However, this is not always possible. Consider going into single user mode (init 1) and unmounting the partitions you want to dd, or remounting them read-only. Be careful: after "init 1" your networking might go down, preventing you from connecting to the SSH service.

For instance, to unmount or remount readonly:

umount /dev/<part>
mount -o remount,ro /dev/<part>

There are other ways to clone/copy data, such as tar, cpio, rsync, ... But for these, you need to perform additional steps. Here we'll use dd to do the job.
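As an illustration of the file-level alternative, a tar pipe copies a directory tree without an intermediate archive file. This is only a local sketch with temporary directories standing in for real filesystems; on real systems you would run it as root to preserve ownership.

```shell
# File-level copy with a tar pipe (illustrative paths, not a real system).
src=$(mktemp -d)
dst=$(mktemp -d)
echo "hello" > "$src/test.txt"
mkdir "$src/etc"
echo "config" > "$src/etc/app.conf"

# -p keeps permissions; the pipe avoids writing an archive to disk
tar -C "$src" -cpf - . | tar -C "$dst" -xpf -

diff -r "$src" "$dst" && echo "copies match"
```

Unlike dd, this copies files rather than the filesystem itself, so on a real target you would still need to create and format partitions and reinstall the bootloader.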

1.1 From the local machine

We want to connect to the source machine via an SSH key pair. Generate one (here named id_user_rsa):

ssh-keygen -t rsa -f ~/.ssh/id_user_rsa

Copy the public key to the server:

ssh-copy-id user@server

Adjust the sshd config on the server:

vi /etc/ssh/sshd_config
GatewayPorts yes

If you use passwords, check the sshd_config file on any system you want to connect to:

vi /etc/ssh/sshd_config
PasswordAuthentication yes

Restart sshd:

systemctl restart ssh

Set the correct key in ssh:

vi ~/.ssh/config
Host server
    Hostname serverip
    Port 22
    User <user>
    IdentityFile /home/user/.ssh/id_user_rsa

Test if a login via key works:

ssh server
ssh user@server

If this works, disconnect and we'll make the connection with a reverse tunnel:

ssh -R 8899:<localip>:22 user@server


8899: port number you choose to be opened on the server to connect to the local machine
22: ssh port on the local machine
<localip>: ip of the local machine

If you haven't set GatewayPorts yes as described above, you can chain a second, gateway-enabled forward on the server instead:

ssh -R 8899:<localip>:22 user@server

ssh -g -L 8900:localhost:8899 user@localhost

1.2 Remote server machine

From the remote server, we should now be able to connect to the local machine via the reverse ssh tunnel. Connect to the local reverse tunnel:

ssh -p 8899 user@localhost


8899: the port opened on the server side by the first ssh command from the local machine
user: a user account on the local machine

See the disk layout to know which partitions/disks you need to copy. Use cfdisk / parted / fdisk. In this example, we are going to clone a VMware machine:

Model: VMware Virtual disk (scsi)
    Disk /dev/sda: 37,6GB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags:

    Number  Start   End     Size    Type      File system  Flags
     1      1049kB  256MB   255MB   primary   ext2         boot
     2      257MB   21,5GB  21,2GB  extended
     5      257MB   21,5GB  21,2GB  logical                lvm
     3      21,5GB  37,6GB  16,1GB  primary                lvm

Execute the copy command, in this case dd and send the output from dd to the local machine. In other words, we'll send the disk data over ssh to the local machine for storage. It's a good idea to run these commands from a screen session. Read up on screen as it's a very handy tool:

screen -S copydisk

If the source system is in single user mode and/or the partitions are unmounted or in read-only mode, start the dd command. You should be able to connect back to the local machine to store the data:

dd if=/dev/sda1 conv=noerror,sync bs=64K status=progress | ssh -p 8899 user@localhost dd of=/storage/boot.img

The result may look like this:

497664+0 records in
497664+0 records out
254803968 bytes (255 MB) copied, 101,864 s, 2,5 MB/s
497664+0 records in
497664+0 records out
254803968 bytes (255 MB, 243 MiB) copied, 98,5462 s, 2,6 MB/s
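To gain confidence that the transfer was complete, you can compare checksums of the source and the image. A minimal local rehearsal, with temporary files standing in for /dev/sda1 and the image file; on the real systems you would run sha256sum against the device on the server and against the .img file locally.

```shell
# A small random file stands in for the source partition.
disk=$(mktemp)
dd if=/dev/urandom of="$disk" bs=64K count=4 2>/dev/null

# "Image" it the same way the transfer does.
img=$(mktemp)
dd if="$disk" of="$img" conv=noerror,sync bs=64K 2>/dev/null

src_sum=$(sha256sum "$disk" | cut -d' ' -f1)
img_sum=$(sha256sum "$img" | cut -d' ' -f1)
[ "$src_sum" = "$img_sum" ] && echo "image verified"
```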

After you've copied the disks, we'll try to use that data to create a VirtualBox machine. It might be useful to test the upgrade process, test software, do a P2V conversion, or any other scenario. This method should also work to create a VM from a physical machine.

If you want progress reporting (via pv) and want to gzip and split the data:

dd conv=noerror,sync bs=32768 if=/dev/sda | pv -c | gzip | ssh -p 8899 user@localhost "split -b 1048m -d - backup-`hostname -s`.img.gz"

Or if you don't want to split the files:

dd conv=noerror,sync bs=32768 if=/dev/sda | pv -c | gzip | ssh -p 8899 user@localhost dd of=/storage/boot.img
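The restore path for the split variant is not shown above: concatenate the numbered pieces in order, then gunzip. A local roundtrip sketch, with a small random file standing in for the disk and a 100k split size instead of 1048m:

```shell
# Work in a scratch directory; disk.bin stands in for /dev/sda.
work=$(mktemp -d); cd "$work"
dd if=/dev/urandom of=disk.bin bs=32768 count=8 2>/dev/null

# Forward path: dd | gzip | split into numbered pieces
dd if=disk.bin bs=32768 2>/dev/null | gzip | split -b 100k -d - backup.img.gz

# Restore path: concatenate the pieces in order, then decompress
cat backup.img.gz* | gunzip > restored.bin
cmp disk.bin restored.bin && echo "roundtrip OK"
```

The numeric suffixes from split -d (backup.img.gz00, 01, ...) sort correctly, so a plain glob reassembles them in order.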

Depending on your use case, you can transfer the data of a partition or a whole disk.

1.3 Verify the data

After the data is transferred to the local machine, you can mount the img files to verify them. As root, create a directory and mount the images:

cd /mnt
mkdir diskverify
mount -o loop disk.img /mnt/diskverify
ls -la /mnt/diskverify

If the partition you try to mount is an LVM partition, mounting it will need additional work. Skip the next lines if the partitions aren't LVM partitions. To find the UUID and other info, use the file command:

file disk.img

Next, we will attach the image to a loopback device:

losetup /dev/loop0 disk.img

Let's see what we get on the physical devices:

pvs

Load the volume group and the logical volumes:

vgchange -ay <volumegroup> (you can leave the volume group out to activate all)

Mount the volumes:

cd /mnt
mkdir lv_root
mount -o ro /dev/vg0/root lv_root

After checking the volumes, unmount the logical volume, remove the volumegroup and remove the loopback device:

umount /mnt/lv_root
vgchange -a n <volumegroup>
losetup -d /dev/loop0

1.4. Copying to a new disk or server

It's now possible to transfer these images to a new server.

After creating the machine, you need to create the same partitions, or, if you're transferring a full disk, make a disk of the same size or bigger.

Short steps:

  • Create a new Virtual machine
  • Start the new system from a rescue cd such as systemrescuecd
  • Partition the harddisk with cfdisk, fdisk, parted, ...
  • Adjust the settings of the ssh server in the rescue environment.

If these steps don't work, it might be necessary to do a basic install.

Next, copy the data over from your local system to the new disk, which is accessible via the rescue cd/live cd's SSH:

dd if=disk.img bs=64K status=progress | ssh root@newserver "dd of=/dev/sda"
or as user
dd if=disk.img bs=64K status=progress | ssh -t user@newserver "dd of=/dev/sda"

After copying, also verify the disk/partitions.


If you have copied partitions instead of the full disk, your new machine won't boot. Stay in the rescue/live environment and fix the grub2 install.

If you have restored the partitions under the same partition numbers, you can first save the MBR info from the source server and install it on the new disk.


If you want the full MBR (partition table and boot code), use 512 bytes: 446 bytes of master boot code, 64 bytes of partition table and a 2-byte boot signature. If you only want the boot code, use the first 446 bytes.
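The byte math can be rehearsed on a dummy file standing in for the disk:

```shell
# A 1 MiB random file stands in for /dev/sda; carve the MBR regions with dd.
disk=$(mktemp)
dd if=/dev/urandom of="$disk" bs=1M count=1 2>/dev/null

# Full MBR: 446 bytes boot code + 64 bytes partition table + 2-byte signature
dd if="$disk" of=mbr-full.img bs=512 count=1 2>/dev/null
# Boot code only: the first 446 bytes
dd if="$disk" of=mbr-boot.img bs=446 count=1 2>/dev/null

wc -c mbr-full.img mbr-boot.img
```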

Copy the MBR. On this particular server, there was a banner displayed when logging in. I disabled it for this operation (look for a Banner statement in your sshd config):

ssh user@server "sudo dd if=/dev/sda bs=512 count=1 2>/dev/null" | dd of=/storage/mbr.img

Restore the MBR on the new server from the rescue cd:

dd if=mbr.img bs=512 count=1 | ssh root@rescue-ip "dd of=/dev/sda"

Reboot the rescue system, and try to boot from the first hard disk.

If this fails, we'll need to reinstall grub. Or you could do a grub-install from the rescue cd.

On the rescue system:

mkdir /mnt/new
lsblk # check if the new /dev/sda is there

Get the MBR on the source server, then fetch it via sftp from the rescue system. On the source server, we'll back up the MBR and the extended partition entries:

# Backup MBR
dd if=/dev/sda bs=512 count=1 of=mbr.img

# Backup entries of the extended partitions
sfdisk -d /dev/sda > backup-sda.disk

Get the files on the local system:

sftp user@server
get mbr.img
get backup-sda.disk

Write mbr and extended partitions to the new server from the local system:

dd if=mbr.img bs=512 count=1 | ssh root@rescue-ip "dd of=/dev/sda"

sftp root@rescue-ip
put backup-sda.disk

On the rescue system:

sfdisk /dev/sda < backup-sda.disk

If the extended partitions aren't created, make them manually via cfdisk/fdisk/... and write the changes to disk. An example of a disk with an extended partition:

                                              Disk: /dev/sda
                            Size: 35 GiB, 37580963840 bytes, 73400320 sectors
                                    Label: dos, identifier: x

Device            Boot                Start            End        Sectors        Size       Id Type
/dev/sda1         *                    2048         499711         497664        243M       83 Linux
/dev/sda2                            501758       41940991       41439234       19,8G        5 Extended
└─/dev/sda5                          501760       41940991       41439232       19,8G       8e Linux LVM
/dev/sda3                          41940992       73400319       31459328         15G       8e Linux LVM

Now that the disk layout is similar to the source system, we copy the data to the related partitions. From the local system where the partition data is stored:

dd if=part1.img bs=64K status=progress | ssh root@rescuecd "dd of=/dev/sda1"
dd if=part3.img bs=64K status=progress | ssh root@rescuecd "dd of=/dev/sda3"
dd if=part5.img bs=64K status=progress | ssh root@rescuecd "dd of=/dev/sda5"

After this operation, try to reboot from the new system again. Still doesn't boot...

1.5. Make the system bootable


If you dd a whole disk, it should boot just fine. If you worked with partitions, you will need some additional steps.

The rescue cd has probably loaded the volume group and LVM partitions already. If this is not the case, attach the device or image containing the LVM partitions. Let's see what we get on the physical devices:

pvs

Load the volume group and the logical volumes:

vgchange -ay <volumegroup> (you can leave the volume group out to activate all)

In this case, there were LVM partitions for root, home, var, tmp and swap. Make a mountpoint "new" in /mnt (the copied root filesystem already contains the boot, var, tmp and home directories):

mkdir -p /mnt/new

Mount the volumes:

cd /mnt
mount /dev/vg0/root new

cd new
mount /dev/sda1 boot

mount /dev/vg0/var var
mount /dev/vg0/tmp tmp
mount /dev/vg0/home home

Bind dev, proc and sys:

mount --bind /dev /mnt/new/dev
mount --bind /proc /mnt/new/proc
mount --bind /sys /mnt/new/sys

Chroot into the mounted partitions:

chroot /mnt/new /bin/bash

Fix grub:

grub-install /dev/sda

Exit the chroot, umount the partitions, detach the cdrom and reboot:

umount dev proc sys
umount var tmp home boot
cd ..
umount new


Before rebooting, check the filesystem.

Since the filesystems are unmounted, now is the time:

e2fsck /dev/vg0/root

1.6. Filesystem

If the receiving disk is larger, you may want to expand the filesystem to use the extra disk space. To find out if this is the case, execute this command on the new system and compare:

df -h

If the sizes don't match, and you want to expand the filesystem:

e2fsck -f /dev/<part>
resize2fs /dev/<part>

1.7. Other considerations

If this is a new server, you'll need to prepare the disk to boot (grub, MBR) and you will need a swap partition as well.

1.7.1. Save disk space

To save space, you can compress data produced by dd with gzip:

dd if=/dev/sda | gzip -c > /storage/image.img.gz

You can restore your disk with:

gunzip -c /storage/image.img.gz | dd of=/dev/sda
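A quick roundtrip rehearsal of the compress/restore pair, with temporary files standing in for /dev/sda and /storage:

```shell
# A small random file stands in for the disk.
disk=$(mktemp)
dd if=/dev/urandom of="$disk" bs=64K count=2 2>/dev/null

# Compress while imaging ...
dd if="$disk" 2>/dev/null | gzip -c > image.img.gz
# ... and restore
restored=$(mktemp)
gunzip -c image.img.gz | dd of="$restored" 2>/dev/null

cmp "$disk" "$restored" && echo "restore matches"
```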

To save even more space, defragment the drive/partition you wish to clone beforehand (if appropriate), then zero-out all the remaining unused space, making it easier for gzip to compress. The zeroing can be done in several ways.

  1. By creating a big file of zeroes
  • Create the file

    mkdir /mnt/sda
    mount /dev/sda /mnt/sda
    dd if=/dev/zero of=/mnt/sda/zero
    sync

  • Wait a bit, dd will eventually fail with a "disk full" message, then:

    rm /mnt/sda/zero
    umount /mnt/sda
    dd if=/dev/sda | gzip -c > /storage/image.img.gz
  2. By installing a tool to do this.
  • sfill:
    • apt-get install secure-delete
    • sfill -f -v -llz /mnt/<mounted_partition>
  • zerofree:
    • apt-get install zerofree
    • zerofree /dev/<part>
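The effect of zeroing is easy to demonstrate: a run of zeros compresses to almost nothing, while already-random data barely shrinks. A sketch with two 4 MiB files:

```shell
# Two equally sized files: one all zeros, one random.
zeros=$(mktemp)
random=$(mktemp)
dd if=/dev/zero    of="$zeros"  bs=1M count=4 2>/dev/null
dd if=/dev/urandom of="$random" bs=1M count=4 2>/dev/null

gzip -c "$zeros"  > zeros.gz
gzip -c "$random" > random.gz
ls -l zeros.gz random.gz
```

zeros.gz ends up a few kilobytes while random.gz stays close to 4 MiB, which is why zeroed free space makes the compressed dd image so much smaller.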

1.7.2. dd progress

If status=progress isn't supported, there is another way to report progress: get the dd process to print its statistics by sending it a SIGUSR1 signal with the kill command:

dd if=/dev/sda of=/storage/image.img &
kill -USR1 <processid>
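GNU dd installs a SIGUSR1 handler that prints the current statistics to stderr and carries on copying (BSD dd uses SIGINFO instead). A self-contained demo that keeps dd busy reading from a FIFO long enough to signal it; the paths are temporary stand-ins:

```shell
fifo=$(mktemp -u)
mkfifo "$fifo"
# A writer that keeps the pipe open for two seconds
( printf 'some data'; sleep 2 ) > "$fifo" &

dd if="$fifo" of=/dev/null 2>dd.log &
ddpid=$!
sleep 1
kill -USR1 "$ddpid"   # dd logs its current stats and keeps copying
wait "$ddpid"

grep -c 'records in' dd.log   # one report from USR1, one final report
```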

1.7.3. Pull image to local from server

To pull the image from the server to the local machine:

ssh user@server "sudo dd if=/dev/sdax conv=noerror,sync bs=64K status=progress" | dd of=/storage/test.img

This command didn't work because of the password prompt from sudo. Adjust the sudo settings on the server by adding a command alias and a NOPASSWD rule to allow dd without a password:

Cmnd_Alias DD = /bin/dd
<user> ALL = (root) NOPASSWD: DD

After correcting the sudoers file, and removing the status=progress option, we can run this command:

ssh user@server "sudo dd if=/dev/sdax conv=noerror,sync bs=64K" | dd of=/storage/test.img

See progress (the loop stops once dd exits):

while kill -USR1 $pid 2>/dev/null; do sleep 10; done