All 2.6 Linux kernels contain a ["gzip"]ped ["cpio"] format archive, which is extracted into rootfs when the kernel boots up. The kernel then checks whether ["rootfs"] now contains a file "init", and if so executes it as PID 1.
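
For a taste of what that init can be, here is a minimal sketch of an init script; it assumes a statically linked ["busybox"] provides /bin/sh and the mount applet (the script itself is illustrative, not part of any kernel):

  #!/bin/sh
  # Hypothetical minimal /init: mount /proc so basic tools work,
  # say hello, then hand the console to an interactive shell,
  # still running as PID 1.
  mount -t proc proc /proc
  echo "Hello from rootfs, running as PID $$"
  exec /bin/sh
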
At this point, this init process is responsible for bringing the system the rest of the way up, including locating and mounting the real root device (if any). If rootfs does not contain an init program after the embedded cpio archive is extracted into it, the kernel falls through to the older code to locate and mount a root partition, then execs some variant of /sbin/init out of it.

All this differs from the old initrd in several ways:
 * The old initrd was a separate file, while the initramfs archive is linked into the Linux kernel image. (This archive is always linked into 2.6 kernels, but by default it's an empty archive.)
 * The old initrd file was a gzipped filesystem image (in some file format, such as ext2, that had to be built into the kernel), while the new ["initramfs"] archive is a gzipped ["cpio"] archive (like tar, only simpler; see cpio(1) and Documentation/early-userspace/buffer-format.txt).
 * The program run by the old initrd (which was called initrd, not init) did some setup and then returned to the kernel, while the init program from initramfs does not return to the kernel. (If it needs to hand off control it can overmount / with a new root device and exec another init program; see ["switch root"]).
 * When switching to another root device, initrd would pivot_root and then umount the ramdisk. But initramfs is rootfs: you shouldn't pivot_root rootfs and can't unmount it. Just delete everything out of it (except the new block device node, if any), overmount / with the new root, and exec the new init. (The ["klibc"] package contains a helper program in utils/run_init.c to do this for you, and other packages have adopted this as "switch_root"; a usage sketch follows this list.)
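
As a sketch of that handoff (assuming a ["busybox"] environment, and assuming /dev/sda1 holds the real root filesystem; both are assumptions for the example):

  #!/bin/sh
  # Hypothetical tail end of an initramfs /init: mount the real
  # root, then let switch_root empty rootfs, overmount / and exec
  # the real init, never returning to the kernel.
  mkdir /newroot
  mount /dev/sda1 /newroot
  exec switch_root /newroot /sbin/init
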
The 2.6 kernel build process always creates a gzipped cpio format initramfs archive and links it into the resulting kernel binary. By default, this archive is blank. The config option CONFIG_INITRAMFS_SOURCE (for some reason buried under devices->block devices in ["menuconfig"]) can be used to specify a source for the initramfs archive, which will automatically be incorporated into the resulting binary. This option can point to an existing gzipped cpio archive, a directory containing files to be archived, or a text file specification such as the following example:

  dir /dev 755 0 0
  nod /dev/console 644 0 0 c 5 1
  nod /dev/loop0 644 0 0 b 7 0
  dir /bin 755 1000 1000
  slink /bin/sh busybox 777 0 0
  dir /proc 755 0 0
  dir /sub 755 0 0
  file /init initramfs/init.sh 755 0 0
  file /bin/busybox initramfs/busybox 755 0 0
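
The kernel build turns such a specification into an archive with the gen_init_cpio helper compiled in the usr/ directory of the source tree. Run by hand it looks something like this (a sketch; the file name initramfs_list is an assumption):

  # Generate the cpio archive described by the specification above
  # and compress it, all without needing root access.
  usr/gen_init_cpio initramfs_list | gzip > initramfs.cpio.gz
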
One advantage of the text file is that root access is not required to set permissions or create device nodes in a directory. (Note that those two example "file" entries expect to find files named "init.sh" and "busybox" in a directory called "initramfs", under the linux-2.6.* directory. See Documentation/early-userspace/README for more details.)
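
If you point CONFIG_INITRAMFS_SOURCE at a directory instead, or want to hand it a ready-made archive, a standard ["cpio"] invocation produces the format the kernel expects (a sketch; the directory name initramfs is an assumption):

  # Pack a directory into a gzipped newc-format cpio archive; note
  # that device nodes in the directory require root to create.
  cd initramfs
  find . | cpio -o -H newc | gzip > ../initramfs.cpio.gz
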
If you don't already understand what shared libraries, devices, and paths you need to get a minimal root filesystem up and running, here are some references:
 * http://www.tldp.org/HOWTO/Bootdisk-HOWTO/
 * http://www.tldp.org/HOWTO/From-PowerUp-To-Bash-Prompt-HOW...
 * http://www.linuxfromscratch.org/lfs/view/stable/
 
The ["klibc"] package (http://www.kernel.org/pub/linux/libs/klibc) is designed to be a tiny C library to statically link early userspace code against, along with some related utilities. Alternatively, one can use ["uClibc"] and ["busybox"]. (In theory you could use ["glibc"], but that's not well suited for small embedded use. Also note that glibc dlopens libnss to do name lookups, even when otherwise statically linked.)
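
Whichever library you pick, the binaries that land in the archive should be statically linked, since rootfs starts out with no shared libraries to load. A quick sanity check (init.c is a placeholder for your own source):

  # Build a static binary and confirm it has no dynamic dependencies;
  # ldd should report "not a dynamic executable".
  gcc -static -Os -o init init.c
  ldd init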

Finally, the following patch adds a pointer from the old initrd documentation to the newer mechanism:

  diff -ru old/Documentation/initrd.txt new/Documentation/initrd.txt
  --- old/Documentation/initrd.txt 2005-09-09 21:42:58.000000000 -0500
  +++ new/Documentation/initrd.txt 2005-10-17 22:38:41.447859392 -0500
  @@ -1,3 +1,6 @@
  +
  +NOTE: New systems should probably be using initramfs instead of initrd. See Documentation/filesystems/ramfs-rootfs-initramfs.txt for details.
  +

Ramfs is a very simple ["filesystem"] that exports Linux's disk caching mechanisms (the page cache and dentry cache) as a dynamically resizable ram-based filesystem.
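
Trying it out takes one command as root (the mount point is arbitrary, and the first "ramfs" argument is a dummy device name, since there is no backing device):

  # Mount a fresh ramfs instance; it grows and shrinks with its contents.
  mkdir -p /mnt/test
  mount -t ramfs ramfs /mnt/test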

Normally all files are cached in memory by Linux. Pages of data read from backing store (usually the block device the filesystem is mounted on) are kept around in case they're needed again, but marked as clean (freeable) in case the Virtual Memory system needs the memory for something else. Similarly, data written to files is marked clean as soon as it has been written to backing store, but kept around for caching purposes until the VM reallocates the memory. A similar mechanism (the dentry cache) greatly speeds up access to directories.
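
You can watch this cache at work from the command line (a sketch; /some/big/file stands in for any large file on disk):

  # The first read populates the page cache from the block device;
  # the second is served straight from RAM and finishes much faster.
  dd if=/some/big/file of=/dev/null bs=1M
  dd if=/some/big/file of=/dev/null bs=1M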

With ramfs, there is no backing store. Files written into ramfs allocate dentries and page cache as usual, but there's nowhere to write them to. This means the pages are never marked clean, so they can't be freed by the VM when it's looking to recycle memory.

The amount of code required to implement ramfs is tiny, because all the work is done by the existing Linux caching infrastructure. Basically, you're mounting the disk cache as a filesystem. Because of this, ramfs is not an optional component removable via menuconfig, since there would be negligible space savings.

The older "ram disk" mechanism created a synthetic block device out of an area of ram and used it as backing store for a filesystem. This block device was of a fixed size, so the filesystem mounted on it was a fixed size. Using a ram disk also required unnecessarily copying memory from the fake block device into the page cache (and copying changes back out), as well as creating and destroying dentries. Plus it needed a filesystem driver (such as ext2) to format and interpret this data. This wastes memory and memory bus bandwidth, creates unnecessary work for the CPU, and pollutes the CPU caches. (There are tricks to avoid this copying by playing with the page tables, but they're unpleasantly complicated and turn out to be about as expensive as the copying anyway.)
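
For contrast, using a ram disk meant formatting and mounting a fixed-size block device, something like this (a sketch; /dev/ram0's size is set at boot, e.g. via the ramdisk_size= kernel parameter):

  # Old-style ram disk: put a real filesystem on a ram-backed block
  # device, paying for the extra copies between it and the page cache.
  mke2fs /dev/ram0
  mkdir -p /mnt/rd
  mount /dev/ram0 /mnt/rd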

More to the point, all the work ramfs is doing has to happen _anyway_, since all file access goes through the page and dentry caches. The ram disk is simply unnecessary; ramfs is internally much simpler.

One downside of ramfs is that you can keep writing data into it until you fill up all memory, and the VM can't free it, because the VM thinks that files should get written to backing store (rather than swap space), but ramfs hasn't got any backing store. Because of this, only root (or a trusted user) should be allowed write access to a ramfs mount.
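
The effect is easy (and dangerous) to demonstrate; don't try this on a machine you care about (reusing the /mnt/test mount from the example above):

  # Writing into ramfs consumes memory the VM cannot reclaim, until
  # the machine runs out; ramfs ignores any size= mount option.
  dd if=/dev/zero of=/mnt/test/fill bs=1M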

A ramfs derivative called tmpfs was created to add size limits, and the ability to write the data to swap space. Normal users can be allowed write access to tmpfs mounts. See Documentation/filesystems/tmpfs.txt for more information.
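
For example, a size-capped scratch area that ordinary users can write to might be mounted like this (the mount point and limits are illustrative):

  # tmpfs adds what ramfs lacks: a size cap, plus pages that can be
  # evicted to swap under memory pressure.
  mkdir -p /mnt/scratch
  mount -t tmpfs -o size=64m,mode=1777 tmpfs /mnt/scratch

CategoryKernel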