Do not use Edit(GUI) button.
[[TableOfContents(4)]]
Copyright 2007, 2008 Osamu Aoki, GPL. (Please agree to GPL, GPL2, and any version of the GPL which is compatible with the DFSG if you update any part of this wiki page.)
I welcome your contributions to update the wiki pages. You must follow these rules:
Do not use Edit(GUI) button of MoinMoin.
- You can update anytime for:
- grammar errors
- spelling errors
- moved URL location
- package name transition adjustment (emacs23 etc.)
- clearly broken script.
- Before updating real contents:
Read "[http://wiki.debian.org/DebianReference/Test Guide for contributing to Debian Reference]".
Data management
Sharing, copying, and archiving
The security of the data and its controlled sharing have several aspects:
- the creation of data archive,
- the remote storage access,
- the duplication,
- the tracking of the modification history,
- the facilitation of data sharing,
- the prevention of unauthorized file access, and
- the detection of unauthorized file modification.
These can be realized by using some combination of:
- the archive and compression tools,
- the copy and synchronization tools,
- the network file system,
- the removable storage media,
- the secure shell,
- the authentication system,
- the version control system tools, and
- hash and cryptographic encryption tools.
Archive and compression tools
Here is a summary of the archive and compression tools available on the Debian system:
List of archive and compression tools:
|| package || popcon || comment || extension ||
|| tar || 29915 || tar: the standard archiver (de facto) || .tar ||
|| cpio || 15940 || cpio: Unix System V style archiver, use with find command || .cpio ||
|| binutils || 15167 || ar: archiver for the creation of static libraries || .ar ||
|| fastjar || 2307 || fastjar: archiver for Java (zip like) || .jar ||
|| pax || 530 || pax: new POSIX standard archiver, compromise between tar and cpio || .pax ||
|| afio || 308 || afio: extended cpio with per-file compression etc. || .afio ||
|| gzip || 38002 || The GNU compression utility, LZ77 compression (de facto) || .gz ||
|| bzip2 || 25807 || The bzip2 compression utility, Burrows-Wheeler block-sorting compression || .bz2 ||
|| zip ||  || InfoZIP: DOS archive and compression tool || .zip ||
|| unzip ||  || InfoZIP: DOS unarchive and decompression tool || .zip ||
The gzipped .tar archive sometimes uses the file extension .tgz.
The cp, scp, and tar commands may have some limitations for special files. The cpio and afio commands are the most versatile.
The cpio and afio commands are designed to be used with the find command and other commands, and are suitable for creating backup scripts since the file selection part of the script can be tested independently.
afio compresses each file in the archive individually. This makes afio archives much more robust against file corruption than globally compressed tar or cpio archives, and makes afio the best archive engine for backup scripts.
The internal structure of OpenOffice data files is that of a .jar file.
Copy and synchronization tools
Here is a summary of the simple copy and backup tools available on the Debian system:
List of copy and synchronization tools:
|| package || popcon || tool || function ||
|| coreutils || 37945 || GNU cp || Locally copy files and directories ("-a" for recursive). ||
|| openssh-client || 29037 || scp || Remotely copy files and directories (client). "-r" for recursive. ||
|| openssh-server || 22918 || sshd || Remotely copy files and directories (remote server). ||
|| rsync || 6383 || Rsync || 1-way remote synchronization and backup. ||
|| unison || 634 || Unison || 2-way remote synchronization and backup. ||
|| pdumpfs || 51 || pdumpfs || Daily local backup using hardlinks, similar to Plan9's dumpfs. ||
Execution of the bkup script mentioned in @{@acopyscriptforthedatabackup@}@ with the "-gl" option under cron(8) should provide functionality very similar to that of pdumpfs for the static data archive.
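For example, a user crontab(1) entry such as the following runs such a daily hardlink snapshot (a sketch only; the data directory is hypothetical, and bkup is assumed to be installed in /usr/local/bin/ as described in @{@acopyscriptforthedatabackup@}@):
0 3 * * * cd /home/osamu/static-data && /usr/local/bin/bkup -gl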
Version control system (VCS) tools in @{@listofversioncontrolsystemtools@}@ can function as the multi-way copy and synchronization tools.
Idioms for the archive
Here are several ways to archive and unarchive the entire contents of the directory /source.
With GNU tar:
$ tar cvzf archive.tar.gz /source
$ tar xvzf archive.tar.gz
With cpio:
$ find /source -xdev -print0 | cpio -ov --null > archive.cpio; gzip archive.cpio
$ zcat archive.cpio.gz | cpio -i
With afio:
$ find /source -xdev -print0 | afio -ovZ0 archive.afio
$ afio -ivZ archive.afio
Idioms for the copy
Here are several ways to copy the entire contents of the directory
from /source to /dest , and
from /source at local to /dest at user@host.dom.
With GNU cp and openSSH scp:
# cp -a /source /dest
# scp -pr /source user@host.dom:/dest
With GNU tar:
# (cd /source && tar cf - . ) | (cd /dest && tar xvfp - )
# (cd /source && tar cf - . ) | ssh user@host.dom '(cd /dest && tar xvfp - )'
With cpio:
# cd /source; find . -print0 | cpio -pvdm --null --sparse /dest
With afio:
# cd /source; find . -print0 | afio -pv0a /dest
The scp command can even copy files between remote hosts:
# scp -pr user1@host1.dom:/source user2@host2.dom:/dest
Backup and recovery
We all know that computers fail sometimes and that human errors cause system and data damage. Backup and recovery operations are an essential part of successful system administration. All possible failure modes will hit you some day.
There are 3 key factors which determine the actual backup and recovery policy:
- Knowing what to back up and recover.
 - Data files directly created by you: data in ~/
 - Data files created by applications used by you: data in /var/ (except /var/cache/, /var/run/, and /var/tmp/).
 - System configuration files: data in /etc/
 - Local software: data in /usr/local/ or /opt/
 - System installation information: a memo in plain text on key steps (partition, ...).
 - Proven set of data: experimenting with recovery operations in advance.
- Knowing how to back up and recover.
 - Secure storage of data: protection from overwrite and system failure.
 - Frequent backup: scheduled backup.
 - Redundant backup: data mirroring.
 - Foolproof process: easy single command backup.
- Assessing risks and costs involved.
 - Failure modes and their probability.
 - Value of data when lost.
 - Required resources for backup: human, hardware, software, ...
As for secure storage of data, data should be stored at least on different disk partitions, and preferably on different disks and machines, to withstand filesystem corruption. Important data are best stored on write-once media such as CD/DVD-R to prevent overwrite accidents. (See @{@thebinarydata@}@ for how to write to the storage media from the shell command line. The GNOME desktop GUI environment gives you easy access via the menu: "Places->CD/DVD Creator".)
You may wish to stop some application daemons such as MTA (see @{@mta@}@) while backing up data.
You should take extra care with the backup and restoration of identity related data files such as /etc/ssh/ssh_host_dsa_key, /etc/ssh/ssh_host_rsa_key, $HOME/.gnupg/*, $HOME/.ssh/*, /etc/passwd, /etc/shadow, /etc/fetchmailrc, popularity-contest.conf, /etc/ppp/pap-secrets, and /etc/exim4/passwd.client. Some of these data cannot be regenerated by entering the same input string to the system.
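For example, a minimal sketch for archiving such identity related files into an archive readable only by root (the output path and file list are illustrative; adjust them to your system):
# umask 077
# tar czvf /var/backups/identity-$(date --utc +%Y%m%d).tar.gz \
    /etc/ssh/ssh_host_*_key /etc/passwd /etc/shadow \
    /home/*/.gnupg /home/*/.ssh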
If you run a cron job as a user process, you need to restart it after the system restoration. See @{@scheduletasksregularly@}@ for cron(8) and crontab(1).
An archive script for the system backup
For a personal Debian desktop system running the unstable suite, I do not find much reason to back up the whole system. I only back up important data to CD/DVD. Here is an archive script for such a backup.
# Copyright (C) 2007 Osamu Aoki <osamu@debian.org>, Public Domain
DATE=$(date --utc +"%Y%m%d-%H%M")
[ -d /var/backups ] || mkdir -p /var/backups
dpkg --get-selections \* >/var/backups/dpkg-selections.list
# Use the dpkg --set-selections command for recovery.
# debconf selection backup.
debconf-get-selections > /var/backups/debconf-selections
# Use debconf-set-selections for recovery.
# define files to back up and feed their names to the stdin of afio
find /etc /home /var/lib/dpkg /var/backups /var/lib/cvs \
-xdev \
-type d \( -name 'Cache' -o -name 'Mail' -o -name 'public_html' \) -prune -o \
-print0 | afio -Z -0 -o BU$DATE.afio
touch /last-backup.stamp
This is meant to be a command example.
- Please read this script carefully.
- Edit this script to your needs before you execute this.
- This is meant to be executed as root.
- Add directories to back up to "find ..." if you have important data elsewhere (www, mail, subversion, ...).
- Instead of "find ...", use "find ... -cnewer /last-backup.stamp ..." to narrow the backup down to a differential backup (see the sketch after this list).
- The backup file may be saved to a remote host using scp or rsync.
- I use Gnome desktop GUI for creating and writing CD/DVD image. (See @{@shellscriptexamplewithzenity@}@ for extra redundancy.)
- Keep it simple!
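Here is a minimal sketch of the differential variant mentioned above (the directory list is shortened for illustration and the prune options of the full script are omitted):
DATE=$(date --utc +"%Y%m%d-%H%M")
find /etc /home -xdev -cnewer /last-backup.stamp -print0 | afio -Z -0 -o BUdiff$DATE.afio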
A copy script for the data backup
For the set of data under a directory tree, the copy with "cp -a" provides the normal backup.
For the set of large non-overwritten static data under a directory tree such as the data under the /var/cache/apt/packages/ directory, hardlinks with "cp -al" provide an alternative to the normal backup with efficient use of the disk space.
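For example, a minimal sketch of such a hardlinked snapshot (the directory names are hypothetical):
$ cd /backup
$ cp -al data data.$(date --utc +%Y%m%d)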
Here is a copy script, which I named bkup, for the data backup. This script copies all (non-VCS) files under the current directory to a dated directory in the parent directory or on a remote host.
# Copyright (C) 2007-2008 Osamu Aoki <osamu@debian.org>, Public Domain
function fdot(){ find . -type d \( -iname ".?*" -o -iname "CVS" \) -prune -o -print0;}
function fall(){ find . -print0;}
function mkdircd(){ mkdir -p "$1";chmod 700 "$1";cd "$1">/dev/null;}
FIND="fdot";OPT="-a";MODE="CPIOP";HOST="localhost";EXTP="$(hostname -f)"
BKUP="$(basename $(pwd)).bkup";TIME="$(date +%Y%m%d-%H%M%S)";BU="$BKUP/$TIME"
while getopts gcCsStrlLaAxe:h:T f; do case $f in
g) MODE="GNUCP";; # cp (GNU)
c) MODE="CPIOP";; # cpio -p
C) MODE="CPIOI";; # cpio -i
s) MODE="CPIOSSH";; # cpio/ssh
S) MODE="AFIOSSH";; # afio/ssh
t) MODE="TARSSH";; # tar/ssh
r) MODE="RSYNCSSH";; # rsync/ssh
l) OPT="-alv";; # hardlink (GNU cp)
L) OPT="-av";; # copy (GNU cp)
a) FIND="fall";; # find all
A) FIND="fdot";; # find non CVS/ .???/
x) set -x;; # trace
e) EXTP="${OPTARG}";; # hostname -f
h) HOST="${OPTARG}";; # user@remotehost.example.com
T) MODE="TEST";; # test find mode
\?) echo "use -x for trace."
esac; done
shift $(expr $OPTIND - 1)
if [ $# -gt 0 ]; then
for x in $@; do cp $OPT $x $x.$TIME; done
elif [ $MODE = GNUCP ]; then
mkdir -p "../$BU";chmod 700 "../$BU";cp $OPT . "../$BU/"
elif [ $MODE = CPIOP ]; then
mkdir -p "../$BU";chmod 700 "../$BU"
$FIND|cpio --null --sparse -pvd ../$BU
elif [ $MODE = CPIOI ]; then
$FIND|cpio -ov --null | ( mkdircd "../$BU"&&cpio -i )
elif [ $MODE = CPIOSSH ]; then
$FIND|cpio -ov --null|ssh -C $HOST "( mkdircd \"$EXTP/$BU\"&&cpio -i )"
elif [ $MODE = AFIOSSH ]; then
$FIND|afio -ov -0 -|ssh -C $HOST "( mkdircd \"$EXTP/$BU\"&&afio -i - )"
elif [ $MODE = TARSSH ]; then
(tar cvf - . )|ssh -C $HOST "( mkdircd \"$EXTP/$BU\"&& tar xvfp - )"
elif [ $MODE = RSYNCSSH ]; then
rsync -rlpt ./ "${HOST}:${EXTP}-${BKUP}-${TIME}"
else
echo "Any other idea to backup?"
$FIND |xargs -0 -n 1 echo
fi
This is meant to be a command example. Please read the script and test it by yourself.
I keep this bkup in my /usr/local/bin/ directory. I issue bkup command without any option in the working directory whenever I need a temporary snapshot backup.
For making snapshot history of a source tree, it is easier and space efficient to use git(7) (see @{@git@}@).
Removable mass storage device
When sharing data with another system via a removable mass storage device, you should format it with a common [http://en.wikipedia.org/wiki/File_system filesystem] supported by both systems. Here are some hints.
List of the filesystem to choose for the removable storage device with the typical usage scenario:
|| filesystem || typical usage scenario ||
|| FAT12 || Cross platform sharing of data on the floppy disk. (<=32MiB) ||
|| FAT16 || Cross platform sharing of data on the small harddisk like device. (<=2GiB) ||
|| FAT32 || Cross platform sharing of data on the large harddisk like device. (<=8TiB, supported by newer than MS Windows95 OSR2) ||
|| [http://en.wikipedia.org/wiki/Iso9660 ISO9660] || Cross platform sharing of static data on CD-R and DVD+/-R ||
|| UDF || Incremental data writing on CD-R and DVD+/-R (new) ||
|| [http://en.wikipedia.org/wiki/Minix_file_system MINIX filesystem] || Space efficient unix file data storage on the floppy disk. ||
|| [http://en.wikipedia.org/wiki/Ext2 ext2 filesystem] || Sharing of data on the harddisk like device with older Linux systems. ||
|| [http://en.wikipedia.org/wiki/Ext3 ext3 filesystem] || Sharing of data on the harddisk like device with current Linux systems. (Journaling file system) ||
The removable harddisk like device may be:
- USB/Firewire connected harddisk,
- USB connected flash memory card,
- USB flash memory stick, or
- USB connected digital camera.
The FAT filesystem is supported by almost all modern operating systems and is quite useful for the data exchange purpose via the harddisk like media.
When formatting a harddisk like device for cross platform sharing of data with the FAT filesystem, the following are the safe steps (see the sketch after this list):
- Partitioning the harddisk like device with the fdisk, cfdisk or parted command into a single primary partition and marking it as:
 - type "6" for FAT16, or
 - type "c" for FAT32 (LBA).
- Formatting the primary partition with the mkfs.vfat command:
 - with just its device name, e.g. "/dev/sda1", for FAT16, or
 - with the explicit option and its device name, e.g. "-F 32 /dev/sda1", for FAT32.
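Here is a sketch of these steps for FAT32 (here /dev/sdb is only an example device name; double check the actual device with "fdisk -l" before writing to it, and the "-n" volume label is optional). Create a single primary partition of type "c" in fdisk, then format it:
# fdisk /dev/sdb
# mkfs.vfat -F 32 -n SHARE /dev/sdb1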
When using the FAT or ISO9660 filesystem for sharing data, the following are safe practices (see the sketch after this list):
- Archiving files into an archive file first using the tar(1), cpio(1), or afio(1) command to retain the long filenames, the symbolic links, the original unix file permissions and the owner information.
- Splitting the archive file into chunks of less than 2 GiB with the split(1) command to protect it from the file size limitation.
- Encrypting the archive file to secure its contents from unauthorized access.
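Here is a sketch combining these three steps with tar(1), gpg(1) symmetric encryption, and split(1) (the file names are illustrative):
$ tar czvf - /source | gpg -c -o - | split -b 2000m - data.tar.gz.gpg.
$ cat data.tar.gz.gpg.* | gpg -d | tar xzvf -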
For the FAT filesystem, by its design, the maximum file size is (2^32 - 1) bytes = (4GiB - 1 byte). For some applications on older 32 bit OSs, the maximum file size was even smaller: (2^31 - 1) bytes = (2GiB - 1 byte). Debian does not suffer from the latter problem.
Microsoft itself does not recommend using FAT for drives or partitions of over 200 MB. Microsoft highlights its shortcomings, such as inefficient disk space usage, in "[http://support.microsoft.com/kb/100108/EN-US/ Overview of FAT, HPFS, and NTFS File Systems]". Of course, for Linux, we should normally use the ext3 filesystem.
For more on filesystems and accessing filesystems, please read "[http://tldp.org/HOWTO/Filesystems-HOWTO.html Filesystems HOWTO]".
Sharing data via network
When sharing data with another system via the network, you should use a common service. Here are some hints.
List of the network services to choose with the typical usage scenario:
|| network service || typical usage scenario ||
|| [http://en.wikipedia.org/wiki/Server_Message_Block SMB/CIFS] network mounted filesystem with [http://en.wikipedia.org/wiki/Samba_(software) Samba] || Sharing files with "Microsoft Windows Network". See smb.conf(5) and [http://www.samba.org/samba/docs/man/Samba-HOWTO-Collection/ The Official Samba 3.2.x HOWTO and Reference Guide] or the samba-doc package. ||
|| [http://en.wikipedia.org/wiki/Network_File_System_(protocol) NFS] network mounted filesystem with the Linux kernel || Sharing files with "Unix/Linux Network". See exports(5) and [http://tldp.org/HOWTO/NFS-HOWTO/index.html Linux NFS-HOWTO]. ||
|| [http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol HTTP] service || Sharing files between the web server/client. ||
|| [http://en.wikipedia.org/wiki/Https HTTPS] service || Sharing files between the web server/client with encrypted Secure Sockets Layer (SSL) or [http://en.wikipedia.org/wiki/Transport_Layer_Security Transport Layer Security] (TLS). ||
|| [http://en.wikipedia.org/wiki/File_Transfer_Protocol FTP] service || Sharing files between the FTP server/client. ||
Although the above network mounted filesystems are quite convenient for sharing data in a secure environment, they use non-encrypted communication and must be run behind the firewall on a secured network.
Archive media
When choosing [http://en.wikipedia.org/wiki/Computer_data_storage computer data storage media] for an important data archive, you should be careful about their limitations. For small personal data backups, I use CD-R and DVD-R from brand name companies and store them in a cool, dry, clean environment. (Tape archive media seem to be popular for professional use.)
[http://en.wikipedia.org/wiki/Safe A fire-resistant safe] is usually meant for paper documents. Most computer data storage media have less temperature tolerance than paper. I usually rely on multiple secure encrypted copies stored in multiple secure locations.
Optimistic storage life of archive media seen on the net (mostly from vendor info):
- 100+ years : acid free paper with ink
- 100 years : optical storage (CD/DVD, CD/DVD-R)
- 30 years : magnetic storage (tape, floppy)
- 20 years : phase change optical storage (CD-RW)
These do not count on the mechanical failures due to handling etc.
Optimistic write cycle of archive media seen on the net (mostly from vendor info):
- 250,000+ cycles : Harddisk drive
- 10,000+ cycles : Flash memory
- 1,000 cycles : CD/DVD-RW
- 1 cycle : CD/DVD-R, paper
Figures of storage life and write cycles here should not be used for decisions on any critical data storage. Please consult the specific product information provided by the manufacturer.
Since CD/DVD-R and paper have only 1 write cycle, they inherently prevent accidental data loss by overwriting. This is an advantage!
If you need fast and frequent backup of a large amount of data, a harddisk on a remote host linked by a fast network connection may be the only realistic option.
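For example, a minimal sketch of such a network backup with rsync(1) over SSH (the host name and paths are hypothetical):
$ rsync -av --delete ~/data/ user@backuphost.example.com:backup/data/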
The binary data
Make the disk image file
The disk image file, disk.img, of an unmounted device, e.g., the second SCSI drive /dev/sdb, can be made using cp(1) or dd(1):
# cp /dev/sdb disk.img
# dd if=/dev/sdb of=disk.img
The disk image of the PC's master boot record (MBR), which resides in the first sector of the primary IDE disk, can be made by using dd(1):
# dd if=/dev/hda of=mbr.img bs=512 count=1
# dd if=/dev/hda of=mbr-nopart.img bs=446 count=1
# dd if=/dev/hda of=mbr-part.img skip=446 bs=1 count=66
mbr.img : the MBR with the partition table.
mbr-nopart.img : the MBR without the partition table.
mbr-part.img : the partition table of the MBR only.
If you have a SCSI device (including the new serial ATA drive) as the boot disk, substitute "/dev/hda" with "/dev/sda".
If you are making an image of a disk partition of the original disk, substitute "/dev/hda" with "/dev/hda1" etc.
The boot record structure of computers with a different architecture, such as Sun workstations, is different. New Intel-based Macs use the new EFI partition scheme and are different too.
Writing directly to the disk
The disk image file, disk.img can be written to an unmounted device, e.g., the second SCSI drive /dev/sdb with matching size, by dd(1):
# dd if=disk.img of=/dev/sdb
Similarly, the disk partition image file, disk.img, can be written to an unmounted partition, e.g., the first partition of the second SCSI drive /dev/sdb1 with matching size, by dd(1):
# dd if=disk.img of=/dev/sdb1
Mount the disk image file
If disk.img contains an image of the entire disk contents and the original disk had its first partition starting at byte offset xxxx = (bytes/sector) * (number of sectors preceding the first partition), then the following will mount it to /mnt:
# mount -o loop,offset=xxxx disk.img /mnt
Note that most hard disks have 512 bytes/sector. This offset is there to skip the MBR and the other sectors preceding the first partition. You can omit the offset option in the above example, if disk.img contains
- only an image of a disk partition of the original hard disk, or
- only an image of the original floppy disk.
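For example, on a typical disk with 512 bytes/sector and the first partition starting at sector 63 (an assumption; check the actual start sector, e.g., with "fdisk -lu disk.img"), the offset is 512 * 63 = 32256:
# mount -o loop,offset=32256 disk.img /mnt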
Make the ISO9660 image file
The ISO9660 image file, cd.img, from the source directory tree at source_directory can be made using the genisoimage(1) command:
# genisoimage -r -J -T -V volume_id -o cd.img source_directory
To make the disk image directly from the CD-ROM device using cp(1) or dd(1) has a few problems. The first run of the dd command may cause an error message and may yield a shorter disk image with a lost tail-end. The second run of dd command may yield a larger disk image with garbage data attached at the end on some systems if the data size is not specified. Only the second run of the dd command with the correct data size specified, and without ejecting the CD after an error message, seems to avoid these problems. If for example the image size displayed by df is 46301184 blocks, use the following command twice to get the right image (this is my empirical information):
# dd if=/dev/cdrom of=cd.img bs=2048 count=$((46301184/2))
Writing directly to the CD/DVD-R/RW
You can find a usable device by:
# wodim --devices
Then the blank CD-R is inserted into the device, and the ISO9660 image file, cd.img, is written to this device, e.g., /dev/hda, by wodim(1):
# wodim -v -eject dev=/dev/hda cd.img
If CD-RW is used instead of CD-R, do this instead:
# wodim -v -eject blank=fast dev=/dev/hda cd.img
DVD is only a large CD to wodim(1).
Mount the ISO9660 image file
If cd.img contains an ISO9660 image, then the following will mount it to /cdrom:
# mount -t iso9660 -o ro,loop cd.img /cdrom
Split a large file into small files
When data is too big to back up as a single file, you can split a large file into, e.g., 2000MiB chunks and later merge those chunks back into the original large file.
$ split -b 2000m large_file
$ cat x* >large_file
Please make sure you do not have any other file whose name starts with "x" to avoid file name collisions.
Clear file contents
In order to clear the contents of a file such as a log file, do not use rm to delete the file and then create a new empty file, because the file may still be accessed in the interval between commands. The following is the safe way to clear the contents of the file.
$ :>file_to_be_cleared
Dummy files
The following commands will create dummy or empty files:
$ dd if=/dev/zero of=5kb.file bs=1k count=5
$ dd if=/dev/urandom of=7mb.file bs=1M count=7
$ touch zero.file
$ : > alwayszero.file
5kb.file is 5KB of zeros.
7mb.file is 7MB of random data.
zero.file is 0 byte file (if file exists, the file contents are kept while updating mtime.)
alwayszero.file is always 0 byte file (if file exists, the file contents are not kept while updating mtime.)
For example, the following commands executed from the shell of the Debian boot floppy will erase all the content of the hard disk /dev/hda completely for most practical uses.
# dd if=/dev/urandom of=/dev/hda ; dd if=/dev/zero of=/dev/hda
Undelete deleted but still open file
Even if you have accidentally deleted a file, as long as that file is still being used by some application (read or write mode), it is possible to recover such a file.
- On one terminal:
$ echo foo > bar
$ less bar
- Then on another terminal:
$ ps aux | grep ' less[ ]'
osamu     4775  0.0  0.0  92200   884 pts/8    S+   00:18   0:00 less bar
$ rm bar
$ ls -l /proc/4775/fd | grep bar
lr-x------ 1 osamu osamu 64 2008-05-09 00:19 4 -> /home/osamu/bar (deleted)
$ cat /proc/4775/fd/4 >bar
$ ls -l
-rw-r--r-- 1 osamu osamu 4 2008-05-09 00:25 bar
$ cat bar
foo
Alternatively, when you have the lsof command installed, on another terminal:
$ ls -li bar
2228329 -rw-r--r-- 1 osamu osamu 4 2008-05-11 11:02 bar
$ lsof |grep bar|grep less
less 4775 osamu 4r REG 8,3 4 2228329 /home/osamu/bar
$ rm bar
$ lsof |grep bar|grep less
less 4775 osamu 4r REG 8,3 4 2228329 /home/osamu/bar (deleted)
$ cat /proc/4775/fd/4 >bar
$ ls -li bar
2228302 -rw-r--r-- 1 osamu osamu 4 2008-05-11 11:05 bar
$ cat bar
foo
Data security infrastructure
The data security infrastructure is provided by a combination of data encryption tools, message digest tools, and signature tools.
List of data security infrastructure tools:
|| package || popcon || size || function ||
|| gnupg || - || - || GNU privacy guard - OpenPGP encryption and signing tool. gpg(1) ||
|| gnupg-doc || - || - || GNU Privacy Guard documentation ||
|| gpgv || - || - || GNU privacy guard - signature verification tool ||
|| coreutils || - || - || The md5sum command computes and checks MD5 message digest ||
|| coreutils || - || - || The sha1sum command computes and checks SHA1 message digest ||
|| openssl || - || - || The "openssl dgst" command computes message digest (OpenSSL). dgst(1ssl) ||
Key management for Gnupg
Here are the basic key management commands:
List of GNU privacy guard commands for key management:
|| command || effects ||
|| gpg --gen-key || generate a new key ||
|| gpg --gen-revoke my_user_ID || generate revoke key for my_user_ID ||
|| gpg --edit-key user_ID || "help" for help, interactive ||
|| gpg -o file --export || export all keys to file ||
|| gpg --import file || import all keys from file ||
|| gpg --send-keys user_ID || send key of user_ID to keyserver ||
|| gpg --recv-keys user_ID || recv. key of user_ID from keyserver ||
|| gpg --list-keys user_ID || list keys of user_ID ||
|| gpg --list-sigs user_ID || list sig. of user_ID ||
|| gpg --check-sigs user_ID || check sig. of user_ID ||
|| gpg --fingerprint user_ID || check fingerprint of user_ID ||
|| gpg --refresh-keys || update local keyring ||
Here are the meanings of the trust codes:
List of the meaning of trust codes:
|| code || trust ||
|| - || No owner trust assigned / not yet calculated. ||
|| e || Trust calculation has failed. ||
|| q || Not enough information for calculation. ||
|| n || Never trust this key. ||
|| m || Marginally trusted. ||
|| f || Fully trusted. ||
|| u || Ultimately trusted. ||
The following will upload my key "A8061F32" to the popular keyserver hkp://subkeys.pgp.net:
$ gpg --keyserver hkp://subkeys.pgp.net --send-keys A8061F32
A good default keyserver set up in $HOME/.gnupg/gpg.conf (or old location $HOME/.gnupg/options) contains:
keyserver hkp://subkeys.pgp.net
The following will obtain unknown keys from the keyserver:
$ gpg --list-sigs | grep '^sig' | grep '[User id not found]' | \
    awk '{print $2}' | sort | uniq | xargs gpg --recv-keys
There was a bug in the [http://sourceforge.net/projects/pks/ OpenPGP Public Key Server] (pre version 0.9.6) which corrupted keys with more than 2 sub-keys. The newer gnupg (>1.2.1-2) can handle these corrupted subkeys. See the gpg(1) manpage under the --repair-pks-subkey-bug option.
Using GnuPG with files
File handling:
List of GNU privacy guard commands on files:
|| command || effects ||
|| gpg -a -s file || sign file into ascii armored file.asc ||
|| gpg --armor --sign file || , , ||
|| gpg --clearsign file || clear-sign message ||
|| gpg --clearsign --not-dash-escaped patchfile || clear-sign patchfile ||
|| gpg --verify file || verify clear-signed file ||
|| gpg -o file.sig -b file || create detached signature ||
|| gpg -o file.sig --detach-sig file || , , ||
|| gpg --verify file.sig file || verify file with file.sig ||
|| gpg -o crypt_file.gpg -r name -e file || public-key encryption intended for name from file to binary crypt_file.gpg ||
|| gpg -o crypt_file.gpg --recipient name --encrypt file || , , ||
|| gpg -o crypt_file.asc -a -r name -e file || public-key encryption intended for name from file to ASCII armored crypt_file.asc ||
|| gpg -o crypt_file.gpg -c file || symmetric encryption from file to crypt_file.gpg ||
|| gpg -o crypt_file.gpg --symmetric file || , , ||
|| gpg -o crypt_file.asc -a -c file || symmetric encryption from file to ASCII armored crypt_file.asc ||
|| gpg -o file -d crypt_file.gpg -r name || decryption ||
|| gpg -o file --decrypt crypt_file.gpg || , , ||
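For example, a short session encrypting a file for a recipient and decrypting it again (the key ID "foo@example.org" and the file names are hypothetical):
$ gpg -o memo.txt.asc -a -r foo@example.org -e memo.txt
$ gpg -o memo.txt.out -d memo.txt.asc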
Using GnuPG with Mutt
Add the following to ~/.muttrc to keep a slow GnuPG from automatically starting, while allowing it to be used by typing "S" at the index menu.
macro index S ":toggle pgp_verify_sig\n"
set pgp_verify_sig=no
Using GnuPG with Vim
The gnupg plugin lets you run GnuPG transparently for files with the extensions .gpg, .asc, and .pgp.
# aptitude install vim-scripts vim-addon-manager
$ vim-addons install gnupg
The MD5 sum
The md5sum program provides a utility to make a digest file using the method in [http://tools.ietf.org/html/rfc1321 rfc1321] and to verify each file against it.
$ md5sum foo bar >baz.md5
$ cat baz.md5
d3b07384d113edec49eaa6238ad5ff00  foo
c157a79031e1c40f85931829bc5fc552  bar
$ md5sum -c baz.md5
foo: OK
bar: OK
The computation of the MD5 sum is less CPU intensive than that of the cryptographic signature by GnuPG. Usually, only the top level digest file is cryptographically signed to ensure data integrity.
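For example, a sketch of signing the top level digest file and verifying it later:
$ md5sum foo bar >baz.md5
$ gpg --clearsign baz.md5
$ gpg --verify baz.md5.asc
$ md5sum -c baz.md5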
Source code merge tools
There are many merge tools for source code. The following commands caught my eye:
List of source code merge tools:
|| command || package || popcon || size || description ||
|| diff(1) || diff || 37745 || - || This compares files line by line. ||
|| diff3(1) || diff || 37745 || - || This compares and merges three files line by line. ||
|| vimdiff(1) || vim || 15655 || - || This compares 2 files side by side in vim. ||
|| patch(1) || patch || 8068 || - || This applies a diff file to an original. ||
|| dpatch(1) || dpatch || 1446 || - || This manages a series of patches for a Debian package. ||
|| diffstat(1) || diffstat || 1008 || - || This produces a histogram of changes by the diff. ||
|| combinediff(1) || patchutils || 759 || - || This creates a cumulative patch from two incremental patches. ||
|| dehtmldiff(1) || patchutils || x || - || This extracts a diff from an HTML page. ||
|| filterdiff(1) || patchutils || x || - || This extracts or excludes diffs from a diff file. ||
|| fixcvsdiff(1) || patchutils || x || - || This fixes diff files created by CVS that "patch" mis-interprets. ||
|| flipdiff(1) || patchutils || x || - || This exchanges the order of two patches. ||
|| grepdiff(1) || patchutils || x || - || This shows which files are modified by a patch matching a regex. ||
|| interdiff(1) || patchutils || x || - || This shows differences between two unified diff files. ||
|| lsdiff(1) || patchutils || x || - || This shows which files are modified by a patch. ||
|| recountdiff(1) || patchutils || x || - || This recomputes counts and offsets in unified context diffs. ||
|| rediff(1) || patchutils || x || - || This fixes offsets and counts of a hand-edited diff. ||
|| splitdiff(1) || patchutils || x || - || This separates out incremental patches. ||
|| unwrapdiff(1) || patchutils || x || - || This demangles patches that have been word-wrapped. ||
|| wiggle(1) || wiggle || 451 || - || This applies rejected patches. ||
|| quilt(1) || quilt || 430 || - || This manages a series of patches. ||
|| meld(1) || meld || 256 || - || This is a GTK graphical file comparator and merge tool. ||
|| xxdiff(1) || xxdiff || 182 || - || This is a plain X graphical file comparator and merge tool. ||
|| dirdiff(1) || dirdiff || 61 || - || This displays and merges changes between directory trees. ||
|| docdiff(1) || docdiff || 38 || - || This compares two files word by word / char by char. ||
|| imediff2(1) || imediff2 || 24 || - || This is an interactive full screen 2-way merge tool. ||
|| makepatch(1) || makepatch || 20 || - || This generates extended patch files. ||
|| applypatch(1) || makepatch || 20 || - || This applies extended patch files. ||
|| wdiff(1) || wdiff || 16 || - || This displays word differences between text files. ||
Extract differences for source files
Following one of these procedures will extract differences between two source files and create unified diff files file.patch0 or file.patch1 depending on the file location:
$ diff -u file.old file.new > file.patch0
$ diff -u old/file new/file > file.patch1
Merge updates for source files
The diff file (alternatively called patch file) is used to send a program update. The receiving party will apply this update to another file by:
$ patch -p0 file < file.patch0
$ patch -p1 file < file.patch1
3 way merge updates
If you have three versions of source code, you can merge them more effectively using diff3:
$ diff3 -m file.mine file.old file.yours > file
The version control system
Here is a summary of the version control systems (VCS) available on the Debian system:
List of version control system tools:
|| package || popcon || size || tool || VCS type || comment ||
|| cssc || 7 || - || [http://cssc.sourceforge.net/ CSSC] || local || Clone of the Unix SCCS (deprecated) ||
|| rcs || 1658 || - || RCS || local || "Unix SCCS done right" ||
|| cvs || 4265 || - || [http://cvs.nongnu.org/ CVS] || remote || The previous standard remote VCS ||
|| subversion || 5276 || - || [http://subversion.tigris.org/ Subversion] || remote || "CVS done right", the new de facto standard remote VCS ||
|| git-core || 512 || - || [http://git.or.cz/ Git] || distributed || fast DVCS in C (used by the Linux kernel and others) ||
|| mercurial || 256 || - || [http://www.selenic.com/mercurial/wiki/ Mercurial] || distributed || DVCS in Python and some C ||
|| darcs || - || - || [http://darcs.net/ Darcs] || distributed || DVCS with smart algebra of patches (slow) ||
|| bzr || 158 || - || [http://bazaar-vcs.org/ Bazaar] || distributed || DVCS in Python (used by Ubuntu) ||
A VCS is sometimes called a revision control system (RCS) or a software configuration management (SCM) tool.
A distributed VCS such as Git is the tool of choice these days. CVS and Subversion may still be useful for joining some existing open source program activities.
The git package is "GNU Interactive Tools", which is not the DVCS; the Git DVCS is in the git-core package.
Native VCS commands
Here is an oversimplified comparison of native VCS commands to provide the big picture. The typical command sequence may require options and arguments.
Comparison of native VCS commands:
|| CVS || Subversion || Git || function ||
|| cvs init || svnadmin create || git-init || create the (local) repository ||
|| cvs login || - || - || login to the remote repository ||
|| cvs co || svn co || git-clone || check out the remote repository as the working tree ||
|| cvs up || svn up || git-pull || update the working tree by merging the remote repository ||
|| cvs add || svn add || git-add . || add file(s) in the working tree to the VCS ||
|| cvs rm || svn rm || git-rm || remove file(s) in the working tree from the VCS ||
|| cvs ci || svn ci || - || commit changes to the remote repository ||
|| - || - || git-commit -a || commit changes to the local repository ||
|| - || - || git-push || update the remote repository from the local repository ||
|| cvs status || svn status || git-status || display the working tree status from the VCS ||
|| cvs diff || svn diff || git-diff || diff <reference_repository> <working_tree> ||
|| - || - || git-repack -a -d; git-prune || repack the local repository into a single pack ||
The "git-command" may be typed as "git command".
Git can work directly with repositories of different VCSs, such as those provided by CVS and Subversion, and provides the local repository for local changes with the git-cvs and git-svn packages. See [http://www.kernel.org/pub/software/scm/git/docs/cvs-migration.html git for CVS users], [http://live.gnome.org/GitForGnomeDevelopers Git for GNOME developers] and @{@git@}@.
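For example, a minimal git-svn(1) sketch for tracking a Subversion repository locally (the URL is hypothetical, and "-s" assumes the standard trunk/branches/tags layout): clone the repository, commit locally, replay new Subversion revisions with rebase, and push local commits back with dcommit.
$ git svn clone -s http://svn.example.org/repos/project-z project-z
$ cd project-z
$ git commit -a -m "local change"
$ git svn rebase
$ git svn dcommit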
CVS
Check /usr/share/doc/cvs/html-cvsclient, /usr/share/doc/cvs/html-info, /usr/share/doc/cvsbook with lynx or run info cvs and man cvs for detailed information.
Installing a CVS server
The following setup will allow commits to the CVS repository only by a member of the "src" group, and administration of CVS only by a member of the "staff" group, thus reducing the chance of shooting oneself in the foot.
# cd /var/lib; umask 002; mkdir cvs
# export CVSROOT=/var/lib/cvs
# cd $CVSROOT
# chown root:src .
# chmod 2775 .
# cvs -d $CVSROOT init
# cd CVSROOT
# chown -R root:staff .
# chmod 2775 .
# touch val-tags
# chmod 664 history val-tags
# chown root:src history val-tags
You may restrict creation of new projects by changing the owner of the $CVSROOT directory to "root:staff" and its permission to "3775".
Use local CVS server
The following will set up shell environments for the local access to the CVS repository:
$ export CVSROOT=/var/lib/cvs
Use remote CVS pserver
The following will set up shell environments for remote access to the CVS repository without SSH (using the pserver protocol capability in cvs):
$ export CVSROOT=:pserver:account@cvs.foobar.com:/var/lib/cvs
$ cvs login
This is prone to eavesdropping attack.
Anonymous CVS (download only)
The following will set up shell environments for the read-only remote access to the CVS repository:
$ export CVSROOT=:pserver:anonymous@cvs.sf.net:/cvsroot/qref
$ cvs login
$ cvs -z3 co qref
Use remote CVS through ssh
The following will set up shell environments for remote access to the CVS repository with SSH:
$ export CVSROOT=:ext:account@cvs.foobar.com:/var/lib/cvs
or for SourceForge:
$ export CVSROOT=:ext:account@cvs.sf.net:/cvsroot/qref
You can also use public key authentication for SSH which eliminates the password prompt.
Create a new CVS archive
For,
Assumptions for the CVS archive:
|| ITEM || VALUE || MEANING ||
|| source tree || ~/project-x || All source codes ||
|| Project name || project-x || Name for this project ||
|| Vendor Tag || Main-branch || Tag for the entire branch ||
|| Release Tag || Release-initial || Tag for a specific release ||
Then,
$ cd ~/project-x
- create a source tree ...
$ cvs import -m "Start project-x" project-x Main-branch Release-initial
$ cd ..; rm -R ~/project-x
Work with CVS
To work with project-x using the local CVS repository:
$ mkdir -p /path/to; cd /path/to
$ cvs co project-x
- get sources from CVS to local
$ cd project-x
- make changes to the content ...
$ cvs diff -u
similar to "diff -u repository/ local/"
$ cvs up -C modified_file
- undo changes to a file
$ cvs ci -m "Describe change"
- save local sources to CVS
$ vi newfile_added
$ cvs add newfile_added
$ cvs ci -m "Added newfile_added"
$ cvs up
- merge latest version from CVS.
To create all newly created subdirectories from CVS, use "cvs up -d -P" instead.
Watch out for lines starting with "C filename" which indicates conflicting changes.
- unmodified code is moved to .#filename.version .
search for "<<<<<<<" and ">>>>>>>" in the files for conflicting changes.
- edit file to fix conflicts.
$ cvs tag Release-1
- add release tag
- edit further ...
$ cvs tag -d Release-1
- remove release tag
$ cvs ci -m "more comments" $ cvs tag Release-1
- re-add release tag
$ cd /path/to
$ cvs co -r Release-initial -d old project-x
get original version to "/path/to/old" directory
$ cd old
$ cvs tag -b Release-initial-bugfixes
create branch (-b) tag "Release-initial-bugfixes"
- now you can work on the old version (Tag is sticky)
$ cvs update -d -P
- don't create empty directories
- source tree now has sticky tag "Release-initial-bugfixes"
- work on this branch ... while someone else making changes too
$ cvs up -d -P
- sync with files modified by others on this branch
$ cvs ci -m "check into this branch" $ cvs update -kk -A -d -P
- remove sticky tag and forget contents
- update from main trunk without keyword expansion
$ cvs update -kk -d -P -j Release-initial-bugfixes
- merge from Release-initial-bugfixes branch into the main
- trunk without keyword expansion. Fix conflicts with editor.
$ cvs ci -m "merge Release-initial-bugfixes" $ cd $ tar -cvzf old-project-x.tar.gz old
make archive. use "-j" if you want .tar.bz2 .
$ cvs release -d old
- remove local source (optional)
Notable options for CVS commands (use as first argument(s) to cvs):
|| option || meaning ||
|| -n || dry run, no effect ||
|| -t || display messages showing steps of cvs activity ||
Export files from CVS
To get the latest version from CVS, use "tomorrow":
$ cvs ex -D tomorrow module_name
Administer CVS
Add alias to a project (local server):
$ export CVSROOT=/var/lib/cvs
$ cvs co CVSROOT/modules
$ cd CVSROOT
$ echo "px -a project-x" >>modules
$ cvs ci -m "Now px is an alias for project-x"
$ cvs release -d .
$ cvs co -d project px
- check out project-x (alias:px) from CVS to directory project
$ cd project
- make changes to the content ...
In order to perform the above procedure, you should have the appropriate file permissions.
File permissions in repository
CVS will not overwrite the current repository file but replaces it with another one. Thus, write permission to the repository directory is critical. For every new repository creation, run the following to ensure this condition if needed.
# cd /var/lib/cvs
# chown -R root:src repository
# chmod -R ug+rwX repository
# chmod 2775 repository
Execution bit
A file's execution bit is retained when checked out. Whenever you see execution permission problems in checked-out files, change permissions of the file in the CVS repository with the following command.
# chmod ugo-x filename
Subversion
Subversion is a next-generation version control system that is intended to replace CVS, so it has most of CVS's features. Generally, Subversion's interface to a particular feature is similar to CVS's, except where there's a compelling reason to do otherwise.
Installing a Subversion server
You need to install the subversion, libapache2-svn and subversion-tools packages to set up a server.
Setting up a repository
Currently, the subversion package does not set up a repository, so one must be set up manually. One possible location for a repository is in /var/local/repos.
Create the directory:
# mkdir -p /var/local/repos
Create the repository database:
# svnadmin create /var/local/repos
Make the repository writable by the WWW server:
# chown -R www-data:www-data /var/local/repos
Configuring Apache2
To allow access to the repository via user authentication, add (or uncomment) the following in /etc/apache2/mods-available/dav_svn.conf:
<Location /repos>
DAV svn
SVNPath /var/local/repos
AuthType Basic
AuthName "Subversion repository"
AuthUserFile /etc/subversion/passwd
<LimitExcept GET PROPFIND OPTIONS REPORT>
Require valid-user
</LimitExcept>
</Location>
Then, create a user authentication file with the command:
htpasswd2 -c /etc/subversion/passwd some-username
Restart Apache2, and your new Subversion repository will be accessible with the URL http://hostname/repos.
Subversion usage examples
The following sections teach you how to use different commands in Subversion.
Create a new Subversion archive
To create a new Subversion archive, type the following:
$ cd ~/your-project                 # go to your source directory
$ svn import http://localhost/repos your-project project-name -m "initial project import"
This creates a directory named project-name in your Subversion repository which contains your project files. Look at http://localhost/repos/ to see if it's there.
Working with Subversion
Working with project-y using Subversion:
$ mkdir -p /path/to; cd /path/to
$ svn co http://localhost/repos/project-y
- Check out sources
$ cd project-y
- do some work ...
$ svn diff
similar to "diff -u repository/ local/"
$ svn revert modified_file
- undo changes to a file
$ svn ci -m "Describe changes"
- check in your changes to the repository
$ vi newfile_added
$ svn add newfile_added
$ svn add new_dir
- recursively add all files in new_dir
$ svn add -N new_dir2
- non recursively add the directory
$ svn ci -m "Added newfile_added, new_dir, new_dir2" $ svn up
- merge in latest version from repository
$ svn log
- shows all changes committed
$ svn copy http://localhost/repos/project-y \
      http://localhost/repos/project-y-branch \
      -m "creating my branch of project-y"
- branching project-y
$ svn copy http://localhost/repos/project-y \
      http://localhost/repos/project-y-release1.0 \
      -m "project-y 1.0 release"
- added release tag.
- note that branching and tagging are the same. The only difference is that branches get committed whereas tags do not.
- make changes to branch ...
$ svn merge http://localhost/repos/project-y \
      http://localhost/repos/project-y-branch
- merge branched copy back to main copy
$ svn co -r 4 http://localhost/repos/project-y
- get revision 4
Git
The Git can do everything for both local and remote source code management. This means that you can record the source code changes without having the network connectivity to the remote repository.
Git references
There are good references for the Git.
[http://www.kernel.org/pub/software/scm/git/docs/v1.3.3/git.html manpage: git(7)]
[http://www.kernel.org/pub/software/scm/git/docs/user-manual.html Git User's Manual]
[http://www.kernel.org/pub/software/scm/git/docs/tutorial.html A tutorial introduction to git]
[http://www.kernel.org/pub/software/scm/git/docs/tutorial-2.html A tutorial introduction to git: part two]
[http://www.kernel.org/pub/software/scm/git/docs/v1.3.3/everyday.html Everyday GIT With 20 Commands Or So]
[http://www.kernel.org/pub/software/scm/git/docs/cvs-migration.html git for CVS users] : This also describes how to set up server like CVS and extract old data from CVS into there.
[http://git.or.cz/course/svn.html Git - SVN Crash Course]
[http://git.or.cz/course/stgit.html StGit Crash Course]
The git-gui(1) and gitk(1) commands make using Git really easy.
Do not use a tag string with spaces in it, even if some tools such as gitk(1) allow you to use it. It will choke some other git commands.
Git commands
Even if your upstream uses a different VCS, it is a good idea to use git for local activity, since you can manage your local copy of the source tree without a network connection to the upstream. Here are the packages and commands used with Git.
List of git packages and commands:
|| command || package || popcon || size || description ||
|| N/A || git-doc || *862 || - || This provides the documentation for Git. ||
|| git(7) || git-core || 512 || - || The main command for Git. ||
|| gitk(1) || gitk || 94 || - || The GUI Git repository browser with history. ||
|| git-gui(1) || git-gui || 28 || - || The GUI for Git. (No history) ||
|| git-svnimport(1) || git-svn || 68 || - || This imports the data out of Subversion into Git. ||
|| git-svn(1) || git-svn || 68 || - || This provides bidirectional operation between Subversion and Git. ||
|| git-cvsimport(1) || git-cvs || 49 || - || This imports the data out of CVS into Git. ||
|| git-cvsexportcommit(1) || git-cvs || 49 || - || This exports a commit to a CVS checkout from Git. ||
|| git-cvsserver(1) || git-cvs || 49 || - || A CVS server emulator for Git. ||
|| git-send-email(1) || git-email || 37 || - || This sends a collection of patches as email from Git. ||
|| stg(1) || stgit || 31 || - || This is quilt on top of git. (Python) ||
|| git-buildpackage(1) || git-buildpackage || 17 || - || This automates the Debian packaging with Git. ||
|| guilt(7) || guilt || 9 || - || This is quilt on top of git. (SH/AWK/SED/...) ||
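Here is a minimal sketch of local Git usage on an existing source tree, following the command comparison above (the path is hypothetical):
$ cd ~/project-x
$ git init
$ git add .
$ git commit -m "initial import"
- make changes to the content ...
$ git status
$ git commit -a -m "describe changes"
$ git tag Release-1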
