?XavierMessersmith (via web): I'd back up a 5 GB partition using something like this:

 #!/bin/sh
 dd if=/dev/hda1 of=./XtFe1 bs=32M count=32
 bzip2 -v ./XtFe1
 dd if=/dev/hda1 of=./XtFe2 bs=32M count=32 skip=32
 bzip2 -v ./XtFe2
 dd if=/dev/hda1 of=./XtFe3 bs=32M count=32 skip=64
 bzip2 -v ./XtFe3
 dd if=/dev/hda1 of=./XtFe4 bs=32M count=32 skip=96
 bzip2 -v ./XtFe4
 dd if=/dev/hda1 of=./XtFe5 bs=32M count=32 skip=128
 bzip2 -v ./XtFe5

The reasoning for this is that a file-level backup of Windows 2000 is nearly worthless, and keeping the image in moderate-sized chunks avoids file-size limits on various filesystems. (It is also nice to keep the backups small, hence the optional compression with bzip2.)

Is there any way to further automate/improve this process?


Not having the source partition and destination filename hardcoded, and getting an estimate of the expected run time, would be nice features, as would being able to pass the chunk size to the script as an argument.
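
A minimal sketch of what such a parameterized script could look like (the argument handling, variable names, and defaults here are illustrative, not a tested script):

 #!/bin/sh
 # Usage: backup.sh <device> <prefix> <chunks> [chunk-size-in-MB]
 # e.g.:  backup.sh /dev/hda1 XtFe 5 1024
 DEV=$1
 PREFIX=$2
 CHUNKS=$3
 SIZEMB=${4:-1024}                      # default to 1 GB chunks
 i=0
 while [ $i -lt $CHUNKS ]; do
     # skip over the chunks already copied, then read one chunk
     dd if=$DEV of=./$PREFIX$i bs=1M count=$SIZEMB skip=`expr $i \* $SIZEMB`
     bzip2 -v ./$PREFIX$i
     i=`expr $i + 1`
 done

Time estimation would presumably mean timing the first chunk and multiplying by the number of chunks remaining.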

It's generally recommended to pretreat the partition first by filling up its free space with files generated with 'dd if=/dev/zero', for the sake of compression efficiency: zeroed blocks compress down to almost nothing (?BackupAndRecoveryDemo).
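
For example, with the partition mounted read-write at /mnt/target (the mount point is just an assumption here), the free space could be zero-filled and the dummy file removed again with something like:

 dd if=/dev/zero of=/mnt/target/zerofill bs=1M   # runs until the filesystem is full
 rm /mnt/target/zerofill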

As for the tools: use lzop to compress the data, because it is far faster than gzip or bzip2, with something like:

 dd if=/dev/hda1 | lzop > data
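
Restoring such an image is presumably just the reverse (assuming the compressed stream was saved to a file named 'data' as above):

 lzop -dc data | dd of=/dev/hda1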

For bzip2-compressed sets, the following works to restore the partition:

 bzcat part1.bz2 part2.bz2 part3.bz2 part4.bz2 part5.bz2 | dd of=/dev/hda1

It's also worth looking into netcat for backing up to a separate system over the network (for example, when backing up a single-partition system from Knoppix).
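
A rough sketch of the netcat idea, with the host name and port as placeholders (note that the listen syntax differs between netcat variants):

 # on the machine receiving the backup:
 nc -l -p 7000 > hda1.img.bz2
 # on the machine being backed up, e.g. booted from Knoppix:
 dd if=/dev/hda1 bs=1M | bzip2 -c | nc backuphost 7000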