

This page describes a general, optimized SSD setup with an encrypted rootfs and swap on three RAID disks. Please improve this guide as you work through it. When implementing it, you may leave out any part you do not require, to simplify your setup.

/!\ An important aspect of optimizing SSD performance is file system and partition alignment (as with "advanced format" hard disks: 1 MiB boundaries aligned to the hardware's 4096-byte blocks). This wiki page does not cover these issues.

Prerequisites

Partitioning Scheme

A commonly recommended setup for serious work on a desktop/laptop includes at least three disks: an internal SSD, an internal HDD (iHDD) and an external HDD (eHDD); further external HDDs (oHDDs) can optionally be added as additional mirrors.

/!\ Note that if you use USB devices instead of (e)SATA, you may still have to work around Debian bug #624343.

If you want the removable disks to resync quickly (only the required changes) and write performance is not too crucial for you, you can daisy-chain several RAID devices with individual write-intent bitmaps.

The chain on md1 may be visualized like this:

  md1 --- md10 --- md1(...) --- md1n --- iHDD
   |       |        |            |
   SSD     eHDD     oHDD(...)    oHDDn

Data is written to md1. Looking at the physical devices: on one side, the SSD is a member of md1, which has no bitmap in order to avoid excessive SSD wear. On the other side, the iHDD holds the bitmaps of all md devices in between. If the oHDDs are disconnected most of the time (only connected temporarily to get synced) and placed at the right end of the chain, md10 bitmap updates are mostly kept off the other external disks.

The filesystem descriptions below do not include oHDDs. If you have one, set up an additional mdX1 RAID 1 array (containing a partition on the iHDD and one on the oHDD, with a write-intent bitmap). These mdX1 arrays then replace the iHDD partition in the details presented below.
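As a minimal sketch of the chained arrays above without oHDDs (device names like /dev/sda2, /dev/sdb2 and /dev/sdc2 for the SSD, iHDD and eHDD partitions are examples only; adjust them to your layout):

# HDD-side mirror (iHDD + eHDD) with a write-intent bitmap, kept off the SSD
mdadm --create /dev/md10 --level=1 --raid-devices=2 --bitmap=internal /dev/sdb2 /dev/sdc2
# SSD-side mirror using md10 as its second member, without a bitmap to avoid extra SSD wear
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/md10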

The partitions on the HDDs should be ordered as in the filesystem list below, to keep frequently accessed areas closer together and reduce seeks.

If /var is synced to a persistent ramdisk to avoid excessive wear on the SSD (or remains only on the HDDs), a failed SSD can still be fully reconstructed from the HDDs. However, a failed HDD can only be fully reconstructed from an external HDD mirror or from a backup. (Your latest work data can nevertheless be recovered from the SSD, even if the iHDD, and with it the operating system, suddenly stops working while the eHDD is not attached.)

The filesystems in more detail:

350 MB boot-fs (md0(SSD, iHDD, eHDD), mounted at /boot): If you want to encrypt the rootfs, you need to encrypt the swap partition and create this separate 350 MB boot-fs (md0).

20 GB root-fs (md1_crypt, opened from md1(SSD, md10(iHDD, eHDD)), mounted at /); a command sketch for creating it follows this list.

15 GB var-fs (md2_crypt, opened from md2(iHDD, eHDD), mounted at /var): Keeping /var off the SSD lets you see how variable /var actually is, because you experience HDD spin-ups in addition to those caused by saving to work-data (even if the root-fs/home-fs HDD RAID members are write-buffered or disabled).

1.x times the amount of installed RAM as swap (md3(iHDD, eHDD)).

Optionally, 5 GB home-fs (md4_crypt, opened from md4(SSD, md40(iHDD, eHDD)), mounted at /home): Even if you do not require a RAID mirror for the boot- and root-fs, you may still want at least a small separate home-fs RAID (distinct from the work-data-fs), because it allows you to reduce HDD spin-ups without the general risks of write buffering: the HDD can be removed from the home-fs RAID (or write-buffered) while on battery, while updates are still written to the SSD immediately. (And updates to the work-data-fs continue to be written to the HDD.) Create the home-fs RAID with a few GBs mounted as /home, to contain (mostly only) the configuration files, lists of most recently used files, desktop (environment) databases, etc. that don't warrant spinning up the HDD on every change. You can then remove the HDD from that RAID while on battery (see the sketch after this list).

Still, even if you can prevent HDD spin-ups this way, to reduce the wear on the SSD caused by programs that constantly update logs, desktop databases, state files, etc. in /home, you will have to use a persistent ramdisk (see profile-sync-daemon and goanysync below) for those files (or for the complete /home).

100 GB work-data-fs (md5_crypt, opened from md5(SSD, md50(iHDD, eHDD)), mounted at /mnt/work-data): Using this only for /home/*/work-data allows you to keep this RAID mirror fully active while the HDD in the root-fs or home-fs RAID is write-buffered or disabled. Thus writes to most-recently-used lists, browser caches, etc. do not wake up the HDD, but saving your work does.

Remaining GBs for a large bulk-data-fs (md6_crypt, opened from md6(iHDD, eHDD), mounted at /mnt/bulk-data).
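A minimal command sketch for the encrypted filesystems and the battery-time handling described above, using the root-fs and home-fs as examples (array and mapper names follow the scheme above; adapt devices and filesystem types to your setup):

# encrypted root-fs: LUKS container on md1, opened as md1_crypt, ext4 inside
cryptsetup luksFormat /dev/md1
cryptsetup luksOpen /dev/md1 md1_crypt
mkfs.ext4 /dev/mapper/md1_crypt

# while on battery: drop the HDD side (md40) from the home-fs mirror ...
mdadm /dev/md4 --fail /dev/md40 --remove /dev/md40
# ... and later re-add it; with a write-intent bitmap only changed blocks
# are resynced, without one use --add instead (full resync)
mdadm /dev/md4 --re-add /dev/md40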

Reducing writes to solid state disks "SSDs" or (laptop) hard disk drives "HDDs"

To stop constantly changing files from hitting the SSD directly:

Use a throwaway /tmp ramdisk (tmpfs) to completely avoid unnecessary writes:

Debian: set RAMTMP, RAMRUN and RAMLOCK to "yes" (in /etc/default/rcS, or in /etc/default/tmpfs since wheezy).

Ubuntu: add to /etc/fstab:

tmpfs /tmp tmpfs noatime,nosuid 0 0

Use persistent ramdisks (a dedicated read/write RAM buffer that gets synced periodically and on startup/shutdown) to accumulate SSD writes and HDD spin-ups.

With anything-sync-daemon or goanysync set up:
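For example, anything-sync-daemon keeps the directories listed in its configuration in a tmpfs and syncs them back periodically and on shutdown. A minimal sketch (the path /etc/asd.conf and the WHATTOSYNC syntax are assumptions here, and the directories are only examples; check the documentation of the version you install):

# /etc/asd.conf
WHATTOSYNC=('/var/log' '/home/user/.config')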

Options for having logs copied into RAM: http://www.debian-administration.org/articles/661, http://www.tremende.com/ramlog, https://github.com/graysky2/anything-sync-daemon (if it supports this), or https://github.com/wor/goanysync

If /home is not on a persistent ramdisk, use profile-sync-daemon to have the browser database and cache copied into RAM during uptime (http://ubuntuforums.org/showthread.php?t=1921800 https://github.com/graysky2/profile-sync-daemon)

Further improvement: Patch anything-sync-daemon or goanysync to use a (copy-on-write) union filesystem mount (e.g. http://aufs.sourceforge.net) to keep changes in RAM and only save to SSD on unmount/shutdown (aubrsync), instead of copying all data to RAM and having to sync it all back.
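A rough sketch of that union-mount idea with aufs: a tmpfs branch catches all writes on top of a read-only on-disk branch (paths are illustrative, and merging the RAM branch back to disk, e.g. with aubrsync, is not shown):

# RAM branch that receives all writes
mount -t tmpfs tmpfs /mnt/home-rw
# union of the RAM branch (rw) over the on-disk /home (ro)
mount -t aufs -o br=/mnt/home-rw=rw:/home=ro none /mnt/home-union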

Alternative to persistent ramdisk:

Low-Latency IO-Scheduler

The default scheduler queues data to minimize seeks on HDDs, which is not necessary for SSDs. Thus, use the deadline scheduler that just ensures bulk transactions won't slow down small transactions: Install sysfsutils and
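add a line like the following to /etc/sysfs.conf, the file that sysfsutils applies at boot (a sketch; on older, non-multiqueue kernels the scheduler is named deadline):

block/sdX/queue/scheduler = deadline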

(adjust sdX to match your SSD) reboot or
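set it at runtime, for example:

echo deadline > /sys/block/sdX/queue/scheduler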

Other Options for SSDs

The performance of SSDs can also be influenced by mount and device options such as discard (TRIM) and noatime; see the configuration examples below.

Note that using discard with on-disk cryptography (like dm-crypt) also has drawbacks with respect to security/cryptography! See crypttab(5).

dm-crypt's /etc/crypttab:

#<target name>    <source device>            <key file>  <options>
var  UUID=01234567-89ab-cdef-0123-456789abcdef  none  luks,discard

More: http://siduction.org/index.php?module=news&func=display&sid=78 http://forums.debian.net/viewtopic.php?f=16&t=76921 https://wiki.archlinux.org/index.php/SSD http://wiki.ubuntuusers.de/SSD

/etc/fstab

# /etc/fstab: static file system information.
#
# Use 'vol_id --uuid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
### SSD: discard,noatime
### match the battery operation default for commit (JOURNAL_COMMIT_TIME_AC, set by files in /etc/pm/config.d/*)
/dev/mapper/goofy-root /               ext4    discard,noatime,commit=600,errors=remount-ro 0       1
# /boot was on /dev/sda1 during installation
UUID=709cbe4a-80c1-46cb-8bb1-dbce3059d1f7 /boot           ext4    discard,noatime,commit=600,defaults        0       2
### SSD: discard
/dev/mapper/goofy-swap none            swap    sw,discard              0       0
/dev/mapper/goofy-chroot /srv/chroot         btrfs    ssd,discard,noatime 0       2
/dev/scd0       /media/cdrom0   udf,iso9660 user,noauto     0       0

/etc/lvm/lvm.conf

...
# This section allows you to configure which block devices should
# be used by the LVM system.
devices {
...
    # Issue discards to a logical volumes's underlying physical volume(s) when
    # the logical volume is no longer using the physical volumes' space (e.g.
    # lvremove, lvreduce, etc).  Discards inform the storage that a region is
    # no longer in use.  Storage that supports discards advertise the protocol
    # specific way discards should be issued by the kernel (TRIM, UNMAP, or
    # WRITE SAME with UNMAP bit set).  Not all storage will support or benefit
    # from discards but SSDs and thinly provisioned LUNs generally do.  If set
    # to 1, discards will only be issued if both the storage and kernel provide
    # support.
    # 1 enables; 0 disables.
    #issue_discards = 0
    issue_discards = 1
}
...

Smaller system with SSD

See