

The following is a list of non-interactive command-line utilities that are often forgotten. There are many situations where you can use them to simplify tedious tasks.

List of commands

The list in alphabetical order follows:

aumix

for a in `seq 1 100`; do sleep 1; aumix -v-1; done; poweroff

Slowly lowers your sound card's output volume over 100 seconds, then powers off the system.

awk

Awk is an entire scripting language intended to help process text, typically at the end of a UNIX shell pipeline (some command generates data, you perhaps filter it using some utilities, and then pipe the resulting text to an awk command or script to generate reports, summaries, and so on). As an example of the intended usage, note that by default awk splits each line of text into columns, using whitespace to separate them, and places each column's field into a variable called $1 (for the first field), $2, and so on; the tenth column is $10, the eleventh $11, and so forth. This alone makes creating a simple filter quicker in awk than in, say, Perl, as it eliminates the need to split on whitespace yourself and place the data into an array or a list of variables. Here's an example showing how to take a list of comma-separated values, 1) removing the commas with sed, 2) using awk to select lines and columns of data on which to perform an arbitrary calculation, and 3) using sed again to remove the first line, which may be a don't-care value (just to show some sed usage):

... | sed 's/,//g' | awk 'BEGIN { oldval = 0 } ($1 ~ /^data:/ && $2 < 1024) { val = $4 - $3; sum += $6; print (val - oldval) / 2, val, $4, $5; oldval = val } END { print sum }' | sed '1d' | more

basename

A very simple utility (not a shell built-in, just a utility bundled in the "coreutils" package) which, when given a string (a filesystem path to a file) as an argument, returns just the file name alone. In other words, this utility strips off all the directory levels, leaving only the filename. Where would you use this? I've found it helpful, for example, when creating a for-loop in a shell script, or when renaming files using some complex scheme while operating on a list of paths: it lets me pull out just the actual filename, modify it somehow, and then recombine the new filename with the original path using the "dirname" utility (mentioned below, and also provided by the "coreutils" package).

Here's an example (converting all FLAC files in a specified directory to MP3 files in /tmp) that assumes you're using Bash (or similar):

for file in /path/to/flac/music/files/*.flac; do filename=$(basename "$file"); newname=${filename%.flac}.mp3; flac -dc "$file" | lame - - > "/tmp/$newname"; done

comm

A simple utility which provides quite a bit of power: given two text files whose contents are sorted (for example, by using sort and uniq), it lists the contents of both files sorted into three columns: the first column (zero indentation) shows the lines found only in the first file given on the command line, the second column shows those unique to the second file, and the third column shows the lines common to both. You can suppress any of the columns you'd like, so for example:
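To print only the lines common to both files, suppress columns 1 and 2 (the file names here are illustrative):

comm -12 sorted-list-a.txt sorted-list-b.txt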

This utility makes it easy to manage lists of items.

dirname

Another simple utility bundled in the "coreutils" package. Similar in nature to "basename" above, this utility simply extracts the directory hierarchy from a path. Useful in conjunction with "basename" within shell scripts, in simple one-line loops at the prompt, and so on. I use this sometimes to help wrangle filenames and directories, for example when converting FLAC files to MP3.
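For example:

dirname /path/to/flac/music/files/song.flac # prints /path/to/flac/music/files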

dd

dd if=/dev/fd0 of=floppy-backup.img

Backs up a floppy disk to a file called floppy-backup.img.
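To write the image back to a (fresh) floppy, just swap the arguments:

dd if=floppy-backup.img of=/dev/fd0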

fdupes

Nice for finding (and getting rid of) duplicate files.
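For example, to list duplicates recursively under a directory (the path is illustrative):

fdupes -r ~/Music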

feh

A lightweight image viewer. With the -l option, feh lists information about the given images instead of displaying them:

feh -l *.jpg
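It can also display the images as a fullscreen slideshow; the following (flags per feh(1)) changes image every 5 seconds:

feh -F -D 5 *.jpg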

file

File identifies the type of each file listed on the command line.

file -s /dev/hd* # read block/character special files and identify their contents
file -i * # print MIME types rather than human-readable descriptions

filters

Hehe, what's really funny: here I mean the package "filters", which provides a collection of filters that do funny things to text ;-)

A very powerful feature supported by almost every UNIX shell out there is the pipe, which allows you to filter the output of commands with a great deal of flexibility. Here are some example ways in which you can filter the STDOUT of a command through a pipe:
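For instance, two ordinary filters chained together; any of the joke filters from the package can be dropped into such a pipeline in the same way:

ls /usr/bin | grep zip | tr 'a-z' 'A-Z'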

find

The find utility (provided by the findutils package) is a veritable swiss-army knife for searching for (and performing tasks upon) files and directories. Among its more powerful features is the ability to define the search criteria, including such things as the size of a file, the group-ownership of a file or directory, and so on. You can then perform some action on each file or directory found. As usual, consult the man page for details (man find). Some examples:

This example will search /usr/src for all files (that the user running the command has permission to see) greater than 1 MB in size, sending the output to the sort utility, which sorts the list numerically on the 7th field (the size column of -ls output, delimited by whitespace):

find /usr/src -type f -size +1024k -ls | sort -r -n -b -k 7

This example will remove all core files found in /tmp (printing out each core file as it's found). Note the use of a pair of curly braces ({}) to denote where to substitute the found file name (and path), and how you need to "escape" the semicolon that ends the -exec command:

find /tmp -type f -name core -exec rm -f {} \; -print

grep

Grep is a very useful utility too: it shows which files contain a specified string or regular expression. You may find it useful to add the -E command-line argument, which allows you to use extended regular-expression grammar like [[:space:]] to match a whitespace character (tab or space), [[:digit:]] to match a digit, and [[:alnum:]] to match a letter or digit (think "alphanumeric"). This way, you can search for stuff like this:

grep -iE '^[[:space:]]*some text[[:space:]]+[[:digit:]]+' some_file

It's also helpful to search recursively, like so (note that I'm instructing grep to show only the lines that don't match, and then counting the number of lines it finds):

grep -iErv '^[[:space:]]*#' /usr/src/linux | wc -l

Sometimes you may find it useful to cat a bunch of simple text files to see what's inside, but you want each line prefixed with the name of the file it came from (which you can't do with cat). You can use grep to do that by providing an empty search string, like this:

grep '' some/path/to/a/bunch/of/simple/files/*

imagemagick

Imagemagick is a collection of command-line tools capable of many kinds of image manipulation.

To convert .jpg files to .png, use:

for i in *.jpg; do convert "$i" "${i%jpg}png"; done
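Another common task is resizing (the file names here are illustrative):

convert photo.jpg -resize 50% photo-small.jpg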

lame

Lame is an MP3 encoder. Here it encodes the raw PCM output of the xmp module player straight to MP3:

xmp --stereo -f 44100 -b 16 -d file -o - the.xm | lame -x -r -s 44.1 --bitwidth 16 -m s - the.mp3
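A more typical invocation (file names are illustrative) simply encodes a WAV file:

lame input.wav output.mp3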

links2

Links2 is a text-mode (and optionally graphical) web browser; with -dump it prints the rendered page to standard output:

links2 -dump www.debian.org

lshw

Shows detailed information about your hardware configuration.
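For a compact one-line-per-device overview:

lshw -short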

perl

The life-altering perl interpreter (and its language), provided in the perl-base package, can be harnessed to perform wondrous tasks at the command line. For example, you can use a so-called Perl "one-liner" to apply Perl's regular-expression power to filter the output of something:

perl -ne 's/blah/foo/g; print' < some_file > modified_file

This command will use the shell to feed the contents of some_file to Perl's STDIN; Perl executes the provided code on each line, performing a search-and-replace that substitutes "foo" for every instance of "blah", and places the modified lines into modified_file. (Note how you could have used cat to pipe the file into the Perl invocation, but you can avoid that needless use of cat by asking the shell to perform the same task.)
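If you'd rather modify the file in place, Perl's -i switch handles the redirection for you (here keeping a .bak backup of the original):

perl -pi.bak -e 's/blah/foo/g' some_file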

Here's a command that let me change the names of some MP3 files (my car stereo doesn't support OGG, unfortunately) after I'd already ripped them:

perl -e 'opendir(DIR, ".") || die "error: could not open current directory: $!\n"; @files = grep(!/^\.+/, readdir(DIR)); foreach $file (@files) { @elems = split(/\s+-\s+/, $file); $new = $elems[2] . " - " . $elems[0] . " - " . $elems[3]; `mv "$file" "$new"`; }'

This command may look complex, and you could certainly put those commands into a text file and make it a Perl script, but it demonstrates how rapidly you can develop Perl code to solve a problem. Here, I'm reading in all the files in a directory (excluding "." and ".."), splitting the file names up based on the hyphens embedded in the names, and then rearranging the fields between the hyphens. Finally, it runs a mv command to rename each file once the new name has been built.

nc

Netcat (or nc) is a nice piece of code that can connect stdin and stdout to a TCP or UDP connection. It can open both server and client connections. For example:

nc -l -p 10000 # Open a listening (server) connection
nc 127.0.0.1 10000 # Connect to 127.0.0.1:10000

Home made http client:

echo -ne "GET / HTTP/1.1\r\nHost: www.debian.org\r\n\r\n" | nc www.debian.org 80 | sed -e "1,/^\r$/ d" > the_page.html

sed is used to strip out the HTTP response headers. Note that nc will never close the connection unless you specify a timeout with the -w option.
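Netcat can also move files between machines; start a listener that writes to disk on one end, then feed it the file from the other (the IP address is just an example):

nc -l -p 10000 > received_file # on the receiving machine
nc 192.168.0.2 10000 < some_file # on the sending machine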

randomize-lines

The randomize-lines package provides rl, which outputs the lines of its input in random order. For example, to play all your MP3s shuffled:

madplay `find music -type f -name "*.mp3" | rl`

recode

Recode converts text files between character sets and surface encodings, editing the files in place.
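For example, to convert a Latin-1 text file to UTF-8 (the file name is illustrative):

recode latin1..utf8 some_file.txt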

rename

Rename is a Perl script that can change filenames using regular expressions. It is part of the perl package. For example, to rename all files ending in .c so they end in .cpp, you can type:

rename 's/\.c$/.cpp/' *.c

rsync

Rsync copies files to and from remote machines. It can greatly speed up transfers by sending only the differences between the source and the destination files.
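A minimal sketch (the host and paths are illustrative; -a preserves permissions and times, -v is verbose, -z compresses in transit):

rsync -avz /local/dir/ user@remote.example.com:/backup/dir/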

screen

Screen is a terminal multiplexer that can run programs detached from any terminal. For example, to start irssi in a detached session named "irc" at every boot, edit your crontab and add the @reboot line:

EDITOR="emacs -nw" crontab -e
@reboot /usr/bin/screen -dmS irc irssi -c irc.gnu.org
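You can later attach to that session with:

screen -r irc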

setmixer

The setmixer tool is a non-interactive tool for reading or setting mixer volume levels.

symlinks

The symlinks utility is nice for checking for (and cleaning up) dangling symlinks.
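For example (flags per symlinks(8); -r recurses, -d deletes the dangling links it finds):

symlinks -r /usr # report symlinks under /usr, flagging dangling ones
symlinks -d -r /tmp # delete dangling symlinks under /tmp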

telnet

Telnet is a client for text-based Internet protocols (such as HTTP, POP3, and many more). See also ssh.
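For example, you can speak HTTP by hand:

telnet www.debian.org 80 # then type: GET / HTTP/1.0 followed by an empty line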

unison

Like rsync, but allows bidirectional updates. Useful for keeping copies of the same files synchronized across several systems.
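A minimal sketch, assuming ssh access to the remote machine (the host and paths are illustrative):

unison /home/user/docs ssh://remote.example.com//home/user/docs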


CategorySoftware CategorySystemAdministration CategoryCommandLineInterface