?XavierMessersmith (via web): I'd back up a 5 GB partition using something like this:

 #!/bin/sh
 # Image the partition in five 1 GiB (32 x 32M) chunks, compressing each one
 dd if=/dev/hda1 of=./XtFe1 bs=32M count=32
 bzip2 -v ./XtFe1
 dd if=/dev/hda1 of=./XtFe2 bs=32M count=32 skip=32
 bzip2 -v ./XtFe2
 dd if=/dev/hda1 of=./XtFe3 bs=32M count=32 skip=64
 bzip2 -v ./XtFe3
 dd if=/dev/hda1 of=./XtFe4 bs=32M count=32 skip=96
 bzip2 -v ./XtFe4
 dd if=/dev/hda1 of=./XtFe5 bs=32M count=32 skip=128
 bzip2 -v ./XtFe5

The reasoning for this is that a file-level backup of Windows 2000 is next to worthless, and keeping the image in moderately sized chunks avoids file size limits on various filesystems. (It's also nice to keep the backups small, hence the ["BZ2ing"].)

It's generally recommended to pretreat the partition by filling its free space with zeroes, e.g. by writing files from 'dd if=/dev/zero' and deleting them again, since runs of zeroes compress far better than leftover deleted data (?BackupAndRecoveryDemo).
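
For example, something along these lines (a sketch only; the mount point /mnt/target is an assumption, substitute wherever the partition is actually mounted):

 # Fill the partition's free space with zeroes so it compresses well,
 # then delete the file again. dd stops by itself when the disk is full.
 dd if=/dev/zero of=/mnt/target/zerofill bs=1M
 sync
 rm /mnt/target/zerofill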

Is there any way to further automate/improve this process?


Not having the source partition and destination filename hardcoded, and having an estimate of the expected time, would be nice features. So would being able to pass the chunk size to the script as an argument!
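
Something along these lines would do it (an untested sketch; the 5 GiB partition size is still hardcoded, and GNU dd can be made to print its progress by sending it SIGUSR1, but a proper time estimate would take more work):

 #!/bin/sh
 # Usage: ./backup.sh <partition> <output-prefix> <chunk-size-in-MiB>
 # e.g.:  ./backup.sh /dev/hda1 XtFe 1024
 DEV=$1
 PREFIX=$2
 CHUNK=$3                  # chunk size in MiB
 SIZE=5120                 # total size to image, in MiB
 i=1
 off=0
 while [ $off -lt $SIZE ]; do
     dd if=$DEV of=./$PREFIX$i bs=1M count=$CHUNK skip=$off
     bzip2 -v ./$PREFIX$i
     off=`expr $off + $CHUNK`
     i=`expr $i + 1`
 done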

For bzip2-compressed sets (the bzipping is quite optional; an uncompressed set would need a different restore command) I've found:

 bzcat part1.bz2 part2.bz2 part3.bz2 part4.bz2 part5.bz2 | dd of=/dev/hda1

actually works! But is using bzcat like this safe?
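
One way to check is to compare a checksum of the reassembled stream against the partition itself before relying on the set (a sketch; this assumes the full 5 GiB, i.e. 160 blocks of 32M, was imaged):

 bzcat part1.bz2 part2.bz2 part3.bz2 part4.bz2 part5.bz2 | md5sum
 dd if=/dev/hda1 bs=32M count=160 | md5sum
 # the two sums should match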

See ?ShellScripting101 for more shell script help.

A good tip: use lzop to compress the data, because it is much faster than gzip or bzip2, with something like

 dd if=/dev/hda1 | lzop > data
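
Restoring is then just the reverse (a sketch, assuming the image was written to the file "data" as above):

 lzop -dc data | dd of=/dev/hda1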

Also worth looking into netcat for backing up to a separate system over the network (for example, when backing up a single-partition system from ["Knoppix"]).
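
A rough sketch of the netcat approach (host name and port are made up, and some netcat variants want "nc -l 9000" without the -p):

 # On the machine receiving the backup:
 nc -l -p 9000 > hda1.img.lzo

 # On the machine being backed up, e.g. booted from Knoppix:
 dd if=/dev/hda1 | lzop | nc backuphost 9000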