Differences between revisions 1 and 34 (spanning 33 versions)
Revision 1 as of 2011-11-23 15:03:49
Size: 3322
Editor: OsamuAoki
Comment:
Revision 34 as of 2012-12-11 20:56:43
Size: 14038
Comment:

Translation(s): none


This describes an SSD optimized setup that tries to be as universal as possible and provides an encrypted rootfs and swap on three disks. Please improve this guide as you go. You may leave out the parts you don't need to simplify it to your requirements.

/!\ An important aspect in optimizing SSD performance is the file system and partition alignment (1 MiB borders aligned to the 4096 byte blocks of the hardware). This wiki page does not cover these issues.

== Prerequisites ==

  • Use a recent Linux kernel (3.2 or newer).
  • Have enough RAM to not need any swap space under normal workloads, while maintaining most of the variable data in a persistent ramdisk that gets synced to disk periodically.
  • Do still set up a swap partition on an HDD, just in case, and to be able to suspend to disk (HDD).
  • Use the "noatime" (or "relatime") mount option in /etc/fstab, to disable (or reduce) disk writes during each disk read access.
  • Use filesystems in the ext4 format.
  • Optionally, use the btrfs format (not yet stable). It supports additional mount options in /etc/fstab, like "ssd", which enables SSD-optimized disk space allocation.

== Partitioning Scheme ==

A commonly recommended setup for doing serious work on a desktop/laptop includes at least three disks:

  • internal SSD: mirrors and speeds up the static part of the system and the user's important work-data. We assume a 128GB SSD and use only about 120GB of it, so that enough free blocks are always available (improved "overprovisioning") and slow write performance is avoided.
  • internal HDD: contains the whole system.
  • external (removable) HDD (eHDD for short): mirrors the whole system.
  • optionally, additional external (removable) HDDs (aHDD): mirror the whole system, rolling-backup-style.
    • With an additional aHDD, set up an additional mdX1 raid 1 array (containing the partition on the internal HDD and one on the eHDD, with a write-intent bitmap). The mdX1 arrays then replace the internal HDD partitions in the mdX0 raids presented below. (These stacked raids with bitmaps quickly mirror the whole system to the additional external HDDs, which may be attached one at a time, "rolling-backup-style".)
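The stacked layout described above could be created roughly like this (a sketch only; the device names sda/sdb/sdc and partition numbers are placeholders for your SSD, internal HDD and eHDD):

```shell
# Inner mirror (the mdX1 level) of internal HDD + eHDD,
# with a write-intent bitmap for fast resyncs:
mdadm --create /dev/md11 --level=1 --raid-devices=2 --bitmap=internal \
      /dev/sdb2 /dev/sdc2
# Outer mirror (the mdX0 level) of SSD + inner mirror, without a bitmap
# (to avoid SSD wear from frequent bitmap updates):
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/md11
```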

The partitions on the HDDs should be ordered like this to keep frequently accessed areas close together (reduced seeks):

  • boot-fs (md0)
  • root-fs (md1)
  • var-fs (md2)
  • swap (md3)
  • home-fs (md4)
  • work-data-fs (md5)
  • bulk-data-fs (md6)

If /var is kept on a persistent ramdisk (or only directly on the HDD), to avoid excessive wear on the SSD, a failed SSD can be fully reconstructed from the HDD. However, a failed HDD can only be fully reconstructed from an external eHDD mirror, or a backup. (Nevertheless, the presented scheme will still allow you to reconstruct your latest work-data from the SSD if the internal HDD fails.)

The filesystems in more detail:

350MB boot-fs ( md0(SSD, HDD, eHDD) mounted at /boot): If you want to encrypt the rootfs, you need to encrypt the swap partition and create this separate 350MB boot-fs (md0).

  • raid 1 (SSD + HDD) with hdd for failure tolerance
  • no write intent bitmap to avoid SSD wear from frequent bitmap updates
  • Setting "echo writemostly > /sys/block/md0/md/dev-<HDD-PARTITION>/state" seemed to add the "W" flag, but did not avoid slow/noisy/power-consuming reads from the HDD.

Workaround: mdadm {--fail, --remove, --add} /dev/mdX --write-mostly /dev/sdXY

  • Use mdadm directly if disk-utility (palimpsest) gives errors.
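Spelled out, the workaround above amounts to re-adding the HDD member with the write-mostly flag (a sketch; md0 and sdb1 are placeholders for your array and HDD partition):

```shell
# Fail and remove the HDD member, then re-add it marked write-mostly,
# so reads are served from the SSD whenever possible:
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1
mdadm /dev/md0 --add --write-mostly /dev/sdb1
```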

20GB root-fs (md1_crypt opened from md1(SSD, md10(HDD, eHDD) ) mounted at /):

  • Keeps system separated from user data
  • Allows system mirroring to be temporarily write-buffered or disabled (on laptops if on the road) independently from user data mirroring (HDD idle/undocked)
  • syncing user data does not involve syncing system data (is faster)
  • md1 (SSD + md10) without a bitmap to avoid SSD wear from frequent bitmap updates
  • md10 (HDD, eHDD) with a bitmap to speed up syncs: mdadm --grow /dev/md10 -b internal
  • Make the HDD partitions a little (1MiB?) larger than on the SSD, so that the resulting md10 can hold the full content of the main raid md1.

15 GB var-fs (md2_crypt opened from md2(HDD, eHDD) mounted at /var) It allows you to see how variable /var actually is, by experiencing HDD spin-ups in addition to those when saving to work-data (even if the root-fs/home-fs HDD raid members are write-buffered/disabled).

  • raid 1 (internal HDD + external HDD)
  • with write intent bitmap for faster resyncs

1.x times the amount of installed RAM as swap ( md3(HDD, eHDD) )

  • ensures redundancy for swap space
  • without a write intent bitmap to avoid the write speed penalty
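As a rough sizing aid, a small helper can read MemTotal and print a suggested swap size (a sketch; the factor 1.5 is just one example of "1.x"):

```shell
#!/bin/sh
# Sketch: suggest a swap size of 1.5x installed RAM, in MiB.
# Reads a meminfo-style file (normally /proc/meminfo).
swap_mib() {
    awk '/^MemTotal:/ { printf "%d\n", ($2 * 3 / 2) / 1024 }' "$1"
}
```

Call it as `swap_mib /proc/meminfo` to get the suggested partition size in MiB.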

Optionally, 5GB home-fs (md4_crypt opened from md4(SSD, md40(HDD, eHDD)) mounted at /home): Even if you do not require a raid mirror for the boot- and root-fs, you may still want at least a small separate home-fs raid (different from the work-data-fs), because it allows you to reduce HDD spin-ups without the general write-buffering risks: the HDD can be removed from the home-fs raid (or write-buffered) while on battery, while updates are still written to the SSD immediately. (And updates to the work-data-fs continue to be written to the HDD.) Create the home-fs raid with a few GBs mounted as /home, to contain (mostly only) configuration files, lists of most recently used files, desktop (environment) databases, etc. that don't warrant spinning up the HDD on every change. Then you may remove the HDD from that raid while on battery.

  • raid 1 (SSD + HDD) with hdd for failure tolerance
  • same setup as root-fs

Still, even if you can prevent HDD spin-ups this way, to reduce the wear on the SSD caused by programs that constantly update logs, desktop databases, state files, etc. in /home, you will have to use a persistent ramdisk (see profile-sync-daemon and goanysync below) for those files (or the complete /home).

100GB work-data-fs (md5_crypt opened from md5(SSD, md50(HDD, eHDD)) mounted at /mnt/work-data) Using this only for /home/*/work-data allows you to keep this raid mirror fully active while the HDD in the root-fs or home-fs raid is write-buffered or disabled. Thus writes to most-recently-used lists, browser caches, etc. do not wake up the HDD, but saving your work does.

  • raid 1 (SSD + HDD) with hdd for failure tolerance
  • same setup as above
  • Optionally, SSD + md-hdd (raid 1 with bitmap of an internal + external HDD)
    • ~/work-data (symlink into work-data-fs, or mountpoint on single-user systems)
    • ~/bulk-data (symlink into bulk-data-fs)
    • ~/volatile (transient RAM buffer synced to home-fs)

bulk-data-fs on the remaining GBs (md6_crypt opened from md6(HDD, eHDD) mounted at /mnt/bulk-data):

  • raid 1 (internal HDD + external HDD)
  • with write intent bitmap to speed up syncs

== Reducing writes to solid state disks (SSDs) or (laptop) hard disk drives (HDDs) ==

To stop constantly changing files from hitting the SSD directly:

Use a throwaway /tmp ramdisk (tmpfs) to completely avoid unnecessary writes:

  • Debian: set RAMTMP, RAMRUN and RAMLOCK to "yes" (in /etc/default/rcS, or in /etc/default/tmpfs since wheezy).
  • Ubuntu: add to /etc/fstab: tmpfs /tmp tmpfs noatime,nosuid 0 0

  • /!\ RAMTMP will keep /tmp in RAM only, causing its content to be discarded on every shutdown! Using a persistent ramdisk (see below) or an increased commit interval can reduce disk writes significantly without discarding data on a regular basis.

Use persistent ramdisks (a dedicated read/write RAM buffer that gets synced periodically and on startup/shutdown) to accumulate SSD writes and avoid HDD spin-ups.

With anything-sync-daemon or goanysync set up:

  • /home (synced to the work-data-fs raid only once a day?): you only risk settings, since the true work in /home/*/work-data is on a dedicated raid
  • /home/*/work-data/volatile (synced more frequently, once per hour?)
  • /home/*/Downloads (synced to bulk-data-fs once a day?)
  • /var completely, if supported (syncing once a day? avoids spin-ups and allows saving /var also to the SSD); at least set this up for
    • /var/log, if supported
    • /var/cache/apt/archives
      • Configure apt to delete package files after installing, to minimize the data to sync.
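One common way to make apt clean up after itself is a hook that empties the package cache after every run (an illustrative snippet; the file name 90clean is arbitrary):

```
// /etc/apt/apt.conf.d/90clean (illustrative): run "apt-get clean" after
// every dpkg invocation, so downloaded .debs never accumulate in
// /var/cache/apt/archives.
DPkg::Post-Invoke { "/usr/bin/apt-get clean"; };
```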

Options for having logs copied into RAM: http://www.debian-administration.org/articles/661, http://www.tremende.com/ramlog, https://github.com/graysky2/anything-sync-daemon (if it supports this), or https://github.com/wor/goanysync

If /home is not on a persistent ramdisk, use profile-sync-daemon to have browser databases and caches copied into RAM during uptime (http://ubuntuforums.org/showthread.php?t=1921800 https://github.com/graysky2/profile-sync-daemon)

  • /home/*/<browser-cache-and-profiles> (synced to root-fs or home-fs)

Further improvement: Patch anything-sync-daemon or goanysync to use a (copy-on-write) union filesystem mount (e.g. http://aufs.sourceforge.net) to keep changes in RAM and only save to SSD on unmount/shutdown (aubrsync), instead of copying all data to RAM and having to sync it all back.

Alternatives to persistent ramdisk:

  • Make the system flush data to disk only every 10 minutes or more:

    /!\ Attention: Increasing the flushing interval from the default 5 seconds (maybe even until proper shutdown) leaves your data much more vulnerable in case of lock-ups or power failures, and seems to be a global setting.

    • Manually set the "commit=600" mount option in /etc/fstab. See mount(8).
    • Or better, set up pm-utils (Debian BTS #659260) or laptop-mode-tools (also optimizes read buffers) to enable laptop-mode even under AC operation.
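For pm-utils, the journal commit interval under AC power can be set with a config fragment like this:

```
# /etc/pm/config.d/SET_JOURNAL_COMMIT_TIME_AC
JOURNAL_COMMIT_TIME_AC=600
```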

== Optimized IO-Scheduler ==

The default scheduler queues data to minimize seeks on HDDs, which is not necessary for SSDs. Thus, use the deadline scheduler, which just ensures that bulk transactions won't slow down small transactions. Install sysfsutils and append the setting (sysfs.conf entries are relative to /sys):

  • echo "block/sdX/queue/scheduler = deadline" >> /etc/sysfs.conf

(adjust sdX to match your SSD), then reboot, or apply it immediately:

  • echo deadline > /sys/block/sdX/queue/scheduler
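To verify which scheduler is active, the bracketed entry in the sysfs file can be parsed (a small helper sketch):

```shell
#!/bin/sh
# Sketch: print the active IO scheduler, i.e. the bracketed entry in
# /sys/block/<dev>/queue/scheduler (e.g. "noop [deadline] cfq" -> deadline).
active_scheduler() {
    sed -n 's/.*\[\(.*\)\].*/\1/p' "$1"
}
```

Call it as `active_scheduler /sys/block/sdX/queue/scheduler` after adjusting sdX.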

== Other Options for SSDs ==

The performance of SSDs can also be influenced by these:

  • Maybe enable the "discard" filesystem option for automatic/online TRIM. However, this is not strictly necessary if your SSD has enough overprovisioning (spare space) or you leave (unpartitioned) free space on the SSD (http://www.spinics.net/lists/raid/msg40866.html). Enabling online TRIM in fstab may just slow down some SSDs significantly (https://patrick-nagel.net/blog/archives/337).

    • Set "discard" mount option in /etc/fstab for the ext4 filesystem, swap partition, Btrfs, etc. See mount(8).
    • Set "issue_discard" option in /etc/lvm/lvm.conf for LVM. See lvm.conf(5).
    • Set "discard" option in /etc/crypttab for dm-crypt.

Note that using discard with on-disk cryptography (like dm-crypt) also has drawbacks with respect to security/cryptography! See crypttab(5).

dm-crypt's /etc/crypttab:

#<target name>    <source device>            <key file>  <options>
var  UUID=01234567-89ab-cdef-0123-456789abcdef  none  luks,discard
  • You'll also need to update your initramfs: update-initramfs -u -k all

  • Optionally, set up an offline-trim cronjob that runs time fstrim -v (or mdtrim) on the SSD mountpoints periodically. Until software raid (the md device layer) has trim support, you could use something like mdtrim (https://github.com/Cyberax/mdtrim/).
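Such a cronjob could look like this sketch (the mountpoint list is a placeholder for your SSD-backed filesystems; fstrim requires root):

```shell
#!/bin/sh
# Hypothetical /etc/cron.weekly/fstrim: discard unused blocks on each
# SSD-backed filesystem in one batch, instead of online "discard".
for mnt in / /boot /home; do
    fstrim -v "$mnt"
done
```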

More: http://siduction.org/index.php?module=news&func=display&sid=78 http://forums.debian.net/viewtopic.php?f=16&t=76921 https://wiki.archlinux.org/index.php/SSD http://wiki.ubuntuusers.de/SSD

== /etc/fstab ==

# /etc/fstab: static file system information.
#
# Use 'vol_id --uuid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
### SSD: discard,noatime
### match the battery-operation default for commit (JOURNAL_COMMIT_TIME_AC): add files in /etc/pm/config.d/*
/dev/mapper/goofy-root /               ext4    discard,noatime,commit=600,errors=remount-ro 0       1
# /boot was on /dev/sda1 during installation
UUID=709cbe4a-80c1-46cb-8bb1-dbce3059d1f7 /boot           ext4    discard,noatime,commit=600,defaults        0       2
### SSD: discard
/dev/mapper/goofy-swap none            swap    sw,discard              0       0
/dev/mapper/goofy-chroot /srv/chroot         btrfs    ssd,discard,noatime 0       2
/dev/scd0       /media/cdrom0   udf,iso9660 user,noauto     0       0

== /etc/lvm/lvm.conf ==

...
# This section allows you to configure which block devices should
# be used by the LVM system.
devices {
...
    # Issue discards to a logical volumes's underlying physical volume(s) when
    # the logical volume is no longer using the physical volumes' space (e.g.
    # lvremove, lvreduce, etc).  Discards inform the storage that a region is
    # no longer in use.  Storage that supports discards advertise the protocol
    # specific way discards should be issued by the kernel (TRIM, UNMAP, or
    # WRITE SAME with UNMAP bit set).  Not all storage will support or benefit
    # from discards but SSDs and thinly provisioned LUNs generally do.  If set
    # to 1, discards will only be issued if both the storage and kernel provide
    # support.
    # 1 enables; 0 disables.
    #issue_discards = 0
    issue_discards = 1
}
...

== Smaller system with SSD ==

See