"dd" on Google image search :)

"dd" on Google image search :)

In one of my previous articles I wrote about how to use a couple of fancy tools included by default on Ubuntu to clone and restore a broken, NTFS-based Windows HDD:

http://www.pwrusr.com/system-administration/how-to-make-an-ntfs-image-of-a-faulty-windows-hdd-with-ntfsclone-and-ubuntu

Today I will show you how I performed a similar operation relying on a CentOS 6.2 live CD and its standard "default" tools.

The cloning procedure.

In this scenario I booted the faulty notebook off the CentOS 6.2 live CD and transferred an image of the faulty HDD to an SMB network share.

I found out the HP laptop's BIOS was accessible by pressing the "F10" key. Although I was able to correctly set up the BIOS boot sequence (CD first), the system was still unable to boot.

So, after mucking around a little bit more, I also discovered the Boot Menu key was “F9”.

This allowed my HP laptop to boot off the CentOS 6.2 live CD.

Then, on a terminal, I typed the following:

sudo su
mkdir /mnt/tmp-smb-share
mount -t cifs //nas01/temp /mnt/tmp-smb-share -o username=rwuser
dd if=/dev/sda bs=16M conv=noerror,sync | gzip -c > /mnt/tmp-smb-share/faulty-hp-hdd-raw-img.zip
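Before pointing dd at a real disk, the pipeline can be rehearsed safely. The sketch below is my own illustration (it uses throwaway temp files instead of the article's actual device and share paths): it runs the same dd | gzip shape and verifies that decompressing gives back the exact original bytes.

```shell
#!/bin/sh
# Miniature dry run of the dd | gzip pipeline, on scratch files
# instead of /dev/sda (all paths here are throwaway temp files).
set -e
src=$(mktemp)    # stand-in for the faulty disk
img=$(mktemp)    # stand-in for the image file on the SMB share

# Fabricate 4 MB of source data.
dd if=/dev/urandom of="$src" bs=1M count=4 2>/dev/null

# Same shape as the real command: raw blocks in, compressed stream out.
dd if="$src" bs=1M conv=noerror,sync 2>/dev/null | gzip -c > "$img"

# Round-trip check: decompressing must yield byte-identical data.
gunzip -c "$img" | cmp -s - "$src" && echo "round-trip OK"

rm -f "$src" "$img"
```

On the real hardware, "noerror" is what keeps dd going past unreadable sectors on the faulty drive, while "sync" pads those failed reads with zeros so the image keeps its alignment.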

The first two commands are pretty self-explanatory.

The third command, "mount -t cifs //[..] -o username=", mounts a remote SMB share hosted by a NAS.

Since the space on my NAS is limited, I chose to compress the dd image by piping the output to "gzip -c" (the fourth command).

The advantage of doing so is easily explained.

We all know dd performs a raw copy of whatever you use as input (“if=/dev/sda”).

Say you have a 750GB hard disk drive with a total space usage of 126GB. The remaining ~600GB of unallocated space contains either zeroes or leftover, "ignored" data from deleted files.

Since dd has no knowledge of the underlying filesystem whatsoever, launching the traditional dd command with an "of=[..]" target would have required ~750GB of space on the NAS.

A clever way to counter this “limitation” is to pipe the dd output to gzip.

This way, the remaining ~600GB of unallocated space compresses extremely well (at least wherever it is actually zeroed).
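How much that space really shrinks depends on its contents: zeroes collapse to almost nothing under gzip, while leftover data from deleted files is closer to random and barely compresses at all. A quick, self-contained illustration of the difference (my own demonstration, not part of the original procedure):

```shell
#!/bin/sh
# Compare how gzip handles zeroed blocks versus random blocks.
# (Sizes are arbitrary; the ratio is what matters.)
set -e
zeroed=$(dd if=/dev/zero    bs=1M count=8 2>/dev/null | gzip -c | wc -c)
random=$(dd if=/dev/urandom bs=1M count=8 2>/dev/null | gzip -c | wc -c)
echo "8 MB of zeros  compressed to ${zeroed} bytes"
echo "8 MB of random compressed to ${random} bytes"
```

A common preparation trick on a healthy disk is to fill the free space with one big zeroed file and then delete it before imaging, so that stale data compresses too; on a failing drive, though, the extra writes may not be worth the risk.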

Supposedly, you’d end up with a compressed raw image consuming ~127GB of NAS space (while the real uncompressed size would still be ~750GB).

The restore procedure.

The restore procedure involves replacing the faulty HDD with a new one, booting the system off the same CentOS 6.2 live CD, and running the following in a terminal:

sudo su
mkdir /mnt/tmp-smb-share
mount -t cifs //nas01/temp /mnt/tmp-smb-share -o username=rwuser
gunzip -c /mnt/tmp-smb-share/faulty-hp-hdd-raw-img.zip | dd of=/dev/sda bs=16M

The first three commands are the same as before.

The fourth command decompresses the image off the NAS and pipes the output straight onto the new HDD. Note that "conv=noerror,sync" must NOT be used on this side: when dd reads from a pipe it often gets partial blocks, and "sync" would pad each of them with zeros, corrupting the restored image.
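After the restore, it is worth confirming that what landed on the disk matches the image. The sketch below rehearses the check on scratch files (my own addition; on real hardware you would compare the decompressed image against /dev/sda, limiting the read, e.g. with head -c, to the image's uncompressed size when the new disk is bigger):

```shell
#!/bin/sh
# Post-restore sanity check, rehearsed on scratch files:
# "$disk" stands in for the restored /dev/sda, "$img" for the NAS image.
set -e
disk=$(mktemp)
img=$(mktemp)
dd if=/dev/urandom of="$disk" bs=1M count=2 2>/dev/null   # pretend restored disk
gzip -c "$disk" > "$img"                                  # pretend compressed image

# Checksums of the decompressed image and of the "disk" must match.
img_sum=$(gunzip -c "$img" | md5sum | cut -d' ' -f1)
disk_sum=$(md5sum "$disk" | cut -d' ' -f1)
[ "$img_sum" = "$disk_sum" ] && echo "restore verified"

rm -f "$disk" "$img"
```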

Please note: there should be no downsides as long as the new HDD is at least as big as the original (say you replace a faulty 750GB drive with a new 1TB one).

In this case, you'll end up with unallocated space beyond the ~750GB boundary.

To “extend” the underlying filesystem to use the unallocated area, it might just be a matter of running gparted (from Linux) or diskpart (from Windows).

SRC:

http://serverfault.com/questions/4906/using-dd-for-disk-cloning

Senior Professional Network and Computer Systems Engineer during work hours and father when home.

Andrea strives to deliver outstanding customer service and heaps of love towards his family.

In this Ad-sponsored space, Andrea shares his quest for "ultimate" IT knowledge, meticulously brought to you in an easy to read format.
