Differences between revisions 1 and 62 (spanning 61 versions)
Revision 1 as of 2010-08-12 22:10:49
Size: 2777
Editor: FranklinPiat
Comment: Initial page
Revision 62 as of 2017-11-04 19:46:47
Size: 29791
Comment: Complete "mlocate issues" TODO item
Deletions are marked like this. Additions are marked like this.
Line 2: Line 2:
##||<tablestyle="width: 100%;" style="border: 0px hidden">~-[[DebianWiki/EditorGuide#translation|Translation(s)]]: none-~||<style="text-align: right;border: 0px hidden"> (!) [[/Discussion|Discussion]]||
~-[[DebianWiki/EditorGuide#translation|Translation(s)]]: English - [[ru/Btrfs|Русский]]-~
----
Line 5: Line 5:
----
~+Btrfs+~ is intended to address the lack of pooling, snapshots, checksums and integral multi-device spanning in Linux file systems, these features being crucial as the use of Linux scales upward into larger storage configurations common in the enterprise. Btrfs was designed to be a multipurpose filesystem, scaling well on very large underlaying raw block devices.

Even though Btrfs have been in the kernel since 2.6.29, the developers states that ''as of 2.6.31, we only plan to make forward compatible disk format changes''. The developer still want to improve the user/management tools to make them easier to use. For more information about Btrfs, follow the links in [[#see-also|See Also]] section.

Compatibility:
 * Debian stable releases don't support Btrfs (i.e [[DebianLenny|Lenny]] earlier).
 * Ext2/3 filesystems should be upgradable to Btrfs (but not the other way around).


## If your page gets really long, uncomment this Table of Contents
## <<TableOfContents(2)>>

'''Btrfs''' is intended to address the lack of pooling, snapshots, checksums, and integral multi-device spanning in Linux file systems, these features being crucial as the use of Linux scales upward into larger storage configurations. Btrfs is designed to be a multipurpose filesystem, scaling well on very large block devices all the way down to cellular phones (Sailfish OS and Android).

<<TableOfContents(2)>>


== History ==
Btrfs has been part of the mainline Linux kernel since 2.6.29, and Debian's Btrfs support was introduced in DebianSqueeze.

In the future, Ext2/3/4 filesystems will be upgradable to Btrfs in place. While a btrfs-convert utility has existed for some time, its use is presently not recommended. For the time being, please back up, run wipefs -a and mkfs.btrfs, and restore from the backup.
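
A minimal sketch of that backup-and-recreate cycle, assuming the old filesystem is on /dev/sdX1 and a separate backup location is mounted at /mnt/backup (device names and paths are illustrative; verify the backup before wiping anything): {{{
# back up the contents of the existing filesystem
mount /dev/sdX1 /mnt/olddata
rsync -aHAX /mnt/olddata/ /mnt/backup/

# wipe the old filesystem signatures and create a fresh btrfs volume
umount /mnt/olddata
wipefs -a /dev/sdX1
mkfs.btrfs -L mydata /dev/sdX1

# restore from the backup
mount /dev/sdX1 /mnt/olddata
rsync -aHAX /mnt/backup/ /mnt/olddata/
}}}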

"Google is evaluating btrfs for its potential use in android, but
currently the lack of native file-based encryption unfortunately makes
it a nonstarter" (Filip Bystricky, [[https://www.spinics.net/lists/linux-btrfs/msg66345.html|linux-btrfs]], 2017-06-09).
Line 19: Line 21:
=== Ext4 in Lenny ===
 * Debian [[DebianSqueeze|Squeeze]] supports Btrfs (kernel 2.6.32). Read upstream documentation for detail accurate statement from the developers.

=== Btrfs in Lenny/Etch ===
 * Debian [[DebianLenny|Lenny]], [[DebianEtch|Etch]] doesn't support Ext4. (neither kernel 2.6.18, EtchAndAHalf's 2.6.24 nor Lenny's 2.6.26)

=== Btrfs in Unstable ===
 ''See upstream wiki for more information about the latest kernel.''
Official upstream status is available on [[https://btrfs.wiki.kernel.org/index.php/Status|The Btrfs Wiki]].

The DebianInstaller can format and install to single-disk Btrfs volumes. The way that Btrfs combines multiple disks to create a single volume is not compatible with the data model of the current installer [[DebianBug:686097| (Bug #686097)]]. Daniel Pocock has a good article on how to [[http://danielpocock.com/install-debian-directly-with-btrfs-raid1|Install Debian wheezy and jessie directly with btrfs RAID1]]; however, strictly speaking it showcases Btrfs' integrated multi-device flexibility, eg: install to a single disk, add a second disk to the volume, then rebalance while converting all data and metadata to the raid1 profile.
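
A sketch of that single-disk-then-convert procedure, assuming the installed system's btrfs root is mounted at / and the second disk's partition is /dev/sdb1 (both are placeholders): {{{
# add the second device to the existing volume
btrfs device add /dev/sdb1 /
# rebalance, converting existing data and metadata to the raid1 profile
btrfs balance start -dconvert=raid1 -mconvert=raid1 /
}}}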

A Btrfs volume created on a raw partition is bootable using grub-pc. If booting with EFI firmware then consult Debian:UEFI for ESP partitioning requirements. Please note that if you boot using EFI and you would like your rootfs to be on btrfs, you '''must''' partition your drive[s]! It is highly recommended to use a swap partition rather than a manually configured swap file through a loop-device; classic swap files are not supported (''Btrfs Wiki'' [[https://btrfs.wiki.kernel.org/index.php/FAQ#Does_btrfs_support_swap_files.3F|Btrfs FAQ]]).

 In my opinion, Btrfs before linux-4.4 and btrfs-progs-v4.4 is too risky to use, and 4.4 was the point where one could stop worrying "is my btrfs volume going to mysteriously blow up tomorrow, even with a simple use-case". When using DebianJessie, please use a backported kernel and btrfs-tools from Debian:Backports. DebianStretch has good btrfs support out-of-the-box.

 If at some point in the Stretch life-cycle you need btrfs features enabled by a newer kernel, I recommend exclusively using LTS kernels rather than tracking the latest version in backports, because tracking the linux-image backport can result in bugs such as [[http://www.spinics.net/lists/linux-btrfs/msg56951.html|this one which forces a reboot]], or [[https://www.spinics.net/lists/linux-btrfs/msg67784.html|"Re: 4.11.6 / more corruption / root 15455 has a root item with a more recent gen (33682) compared to the found root node (0)"]].
 DebianTesting and DebianUnstable are also affected. --NicholasDSteeves

Since linux-4.6, the official upstream documentation no longer carries the warning that "Btrfs is under heavy development, and is not suitable for any uses other than benchmarking and review" ([[https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/tree/Documentation/filesystems/btrfs.txt?id=refs/tags/v4.4.13|git.kernel.org]]), and the consensus on the linux-btrfs mailing list seems to be that raid1 and raid10 profiles are now mature. Please refer to Debian:BackupAndRecovery if you do not yet have a backup strategy in place, and take care to regularly verify that your backups are restorable.

Because all reads are checksum-verified, backups made from files on a Btrfs volume are guaranteed to be free of silent corruption, unlike backups made from files stored on most other file systems (ZFS provides a similar guarantee). When you verify these backups and store two copies at two different offsite locations, your data is in the safest possible state. Here are some of Btrfs' shortcomings:


== Warnings ==
 * Do not use raid5 or raid6 profiles.

 * Do not use linux-4.11.x; a number of users have reported corruption after upgrading to it (Ivan Sizov, [[https://www.spinics.net/lists/linux-btrfs/msg67784.html|"Re: 4.11.6 / more corruption / root 15455 has a root item with a more recent gen (33682) compared to the found root node (0)"]], 2017-08-01).

 * Quotas and qgroups are still broken and are implicated in filesystem corruption (Justin Maggard, ''[[https://www.spinics.net/lists/linux-btrfs/msg67788.html|linux-btrfs]]'', 2017-09-31).

 * Subvolumes cannot currently be mounted with different btrfs-specific options; the first btrfs line for a given volume in {{{/etc/fstab}}} takes effect. eg: you cannot mount / with noatime and /var with nodatacow,compress=lzo (''Btrfs Wiki,'' [[https://btrfs.wiki.kernel.org/index.php/Mount_options|Mount options]]); see the fstab example after this list.

 * At present, nodatacow implies nodatasum; this means that anything with the nodatacow attribute does not receive the benefits of btrfs' checksum protection and self-healing (for raid levels >= 1). Additionally, disabling CoW (Copy on Write) means that a VM disk image will not be consistent if the host crashes or loses power. Consequently, it is almost always preferable to disable COW in the application.

 * Compress=lzo might be dangerous. In the 'linux-btrfs' thread [[https://www.spinics.net/lists/linux-btrfs/msg56563.html|"Trying to rescue my data"]] (2016-06-26) it came to light that mounting with compress=lzo may cause btrfs volumes to break, because "if it gets too many [csum errors] at once, it *does* unfortunately crash, despite the second copy being available and being just fine as later demonstrated by the scrub fixing the bad copy from the good one" (Duncan). Later in the thread Steven Haigh confirms the behaviour and asks whether "maybe here lays a common problem".

 * Mounting with -o compress will amplify fragmentation. All COW filesystems necessarily fragment. There is also a relation between the number of snapshots and the degree of fragmentation. Fragmentation manifests as higher than expected CPU usage on SSDs and increased read latency on rotational disks, because each of the references present in a frequently updated file will tend to necessitate a mechanical seek. Because the focus of btrfs development is currently on stabilisation, bug fixes, and core features, seek optimisation has not yet become a priority.

 * Mounting with -o autodefrag will duplicate reflinked or snapshotted files when you run a balance. Also, whenever a portion of the fs is defragmented with "btrfs filesystem defragment" those files will lose their reflinks and the data will be "duplicated" with n-copies. The effect of this is that volumes that make heavy use of reflinks or snapshots will run out of space. At this point in time, to avoid such unexpected surprises and for peace of mind, please minimize the use of snapshots, and use deduplicating backup software to store backups efficiently. And remember snapshots ≠ backups!

 * Any btrfs defrag operation could potentially duplicate reflinked or snapshotted blocks. Work around this by minimizing reflink and snapshot use.

 * And others listed on ''The Btrfs Wiki'' [[https://btrfs.wiki.kernel.org/index.php/Gotchas|Gotchas]] page.
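
To illustrate the subvolume mount-option limitation above, consider the following fstab sketch (the UUID and subvolume names are placeholders). Both lines mount subvolumes of the same volume, so only the btrfs-specific options on the first line take effect: {{{
UUID=<volume_uuid>  /     btrfs  defaults,subvol=@,noatime                     0  0
UUID=<volume_uuid>  /var  btrfs  defaults,subvol=@var,nodatacow,compress=lzo   0  0
}}}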


=== Raid5 and Raid6 Profiles ===
 * "Do not use BTRFS raid6 mode in production, it has at least 2 known serious bugs that may cause complete loss of the array due to a disk failure. Both of these issues have as of yet unknown trigger conditions, although they do seem to occur more frequently with larger arrays" (Austin S. Hemmelgarn, 2016-06-03, [[http://permalink.gmane.org/gmane.comp.file-systems.btrfs/56944|linux-btrfs]]).

 * Do not use raid5 mode in production because, "RAID5 with one degraded disk won't be able to reconstruct data on this degraded disk because reconstructed extent content won't match checksum. Which kinda makes RAID5 pointless" (Andrei Borzenkov, 2016-06-24, [[https://www.spinics.net/lists/linux-btrfs/msg56479.html|linux-btrfs]]).

==== 2016-06-26 Update ====
Once again, please do not use btrfs' raid5 or raid6 profiles at this point in time! In the thread [[https://www.spinics.net/lists/linux-btrfs/msg56571.html|[BUG] Btrfs scrub sometime recalculate wrong parity in raid5]] Chris Murphy found the following while testing btrfs raid5's ability to recover from csum errors:

 I just did it a 2nd time and both file's parity are wrong now. So I
 did it several more times. Sometimes both files' parity is bad.
 Sometimes just one file's parity is bad. Sometimes neither file's
 parity is bad.

 It's a very bad bug, because it is a form of silent data corruption
 and it's induced by Btrfs. And it's apparently non-deterministically
 hit (2016-06-26).

In another email in this thread, Duncan suggested "And what's even clearer is that people /really/ shouldn't be using raid56 mode for anything but testing with throw-away data, at this point. Anything else is simply irresponsible" ([[https://www.spinics.net/lists/linux-btrfs/msg56564.html|linux-btrfs]], 2016-06-26).


== Recommendations ==
Many people have reported years of btrfs usage without issue, and this wiki page will continue to be updated with configuration recommendations known to be good and cautions against those known to cause issues.

 1. Use two equally sized disks, partition them identically, and add each partition to a btrfs raid1 profile volume (see the sketch after this list). Alternatively, use one disk for holding backups, because as of 2017-10-12 btrfs raid1 does not provide a significant benefit in throughput or iops.
 1. Do not use compression. That said, if one wants to test this functionality, zlib seems to have fewer issues than lzo.
 1. Do not use quotas/qgroups.
 1. Keep regular backups and use a backup program that supports deduplication (eg: borgbackup).
 1. Do not enable mount -o discard or autodefrag.
 1. Overprovision your disks so periodic trim won't be needed.
 1. Periodically run btrfs defrag against source subvolumes.
 1. Never run btrfs defrag against a child subvolume (eg: snapshots).
 1. Ensure that the number of snapshots per volume/filesystem never exceeds 12.
 1. Take care to not fill the volume beyond 90%. If this occurs it may become necessary to run periodic balances to consolidate free space into contiguous chunks. Also, performance will become less predictable.
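
A minimal sketch of creating the two-disk raid1 profile volume from recommendation 1 (partition names and the mount point are placeholders): {{{
mkfs.btrfs -L mydata -m raid1 -d raid1 /dev/sdX1 /dev/sdY1
mkdir -p /mnt/mydata
mount /dev/sdX1 /mnt/mydata
}}}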


== Maintenance ==
As a btrfs volume ages, you might notice performance degrade. This is because btrfs is a Copy On Write file system, and all COW filesystems eventually reach a heavily fragmented state; this includes ZFS. Over time, logs in /var/log/journal will become split across tens of thousands of extents. This is also the case for sqlite databases such as those that are used for Firefox and a variety of common desktop software. Fragmentation is a major contributing factor to why COW volumes become slower over time.

ZFS addresses the performance problems of fragmentation using an intelligent Adaptive Replacement Cache (ARC); the ARC requires massive amounts of RAM. Btrfs took a different approach, and benefits from—some would say requires—periodic defragmentation. In the future, maintenance of btrfs volumes on Debian systems will be automated using [[https://github.com/kdave/btrfsmaintenance|btrfsmaintenance]]. For now use: {{{

 sudo ionice -c idle btrfs filesystem defragment -t 32M -r /path/to/subvolume

}}}

This command must be run as root, and it is recommended to ionice it to reduce the load on the system. To further reduce the IO load, flush data
after defragmenting each file using: {{{

 sudo ionice -c idle btrfs filesystem defragment -f -t 32M -r /path/to/subvolume

}}}

Target extent size is a little known—but for practical purposes—absolutely essential argument. While the argument "-t 1G" would seem to be better than the "-t 32M" default, in practice this is not the case, because most volumes will have a 1GiB chunk size. Additionally, if you have a lot of snapshots or reflinked files, please use "-f" to flush data for each file before going to the next file. Please consult the following thread for more information: [[https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg56147.html|Re: btrfs fi defrag does not defrag files >256kB?]]. Since btrfs-progs-4.9.1, "-t 32M" is the default and no longer needs to be specified; specifying it explicitly is still necessary on Stretch, unless a backported btrfs-progs is used.
Line 30: Line 116:
 Which package contains the tools? :: DebianPkg:btrfs-tools (in Squeeze and above)

See also upstream's [[https://btrfs.wiki.kernel.org/index.php/FAQ]]

== Documentations ==
  * {{{/usr/share/doc/linux-doc-2.6.XX/Documentation/filesystems/btrfs.txt.gz}}}, also available [[http://git.kernel.org/?p=linux/kernel/git/stable/linux-2.6.32.y.git;a=blob;f=Documentation/filesystems/btrfs.txt;|online for 2.6.32]]
  * Manpages: [[DebianMan:8/mkfs.btrfs|mkfs.btrfs(8)]], [[DebianMan:8/btrfsctl|btrfsctl(8)]], [[DebianMan:8/btrfs-show|btrfs-show(8)]], [[DebianMan:8/mkfs.btrfs|mkfs.btrfs(8)]], and other binaries from DebianPkg:btrfs-tools .
 * Linux's Btrfs wiki ~-<<BR>> [[https://btrfs.wiki.kernel.org/index.php/Main_Page]] -~
 * Introductions:
## *!KernelNewbies' Introduction to Ext4 ~-<<BR>> [[http://kernelnewbies.org/Ext4]]-~
  * Wikipedia's article about Btrfs ~-<<BR>> !WikiPedia: [[WikiPedia:Btrfs]]-~
 Which package contains the tools? :: DebianPkg:btrfs-tools in Debian 6 (squeeze) to Debian 8 (jessie), and btrfs-progs thereafter. Most interaction with Btrfs' advanced features requires these tools.

 Does btrfs really protect my data from hard drive corruption? :: Yes, but this requires at least two disks in raid1 profile (eg: -m raid1 -d raid1). Without at least two copies of data, corruption can be detected but not corrected. Btrfs raid5 or raid6 profiles will '''not''' protect your data. Additionally, as with "mdadm or lvm raid, you need to make sure that the SCSI command timer (a kernel setting per block device) is longer than the drive's SCT ERC setting...If the command timer is shorter, bad sectors will not get reported as read errors for proper fixup, instead there will be a link reset and it's just inevitable there will be worse problems" (Chris Murphy, 2016-04-27, [[https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg53249.html|linux-btrfs]]). The Debian bug for this issue can be found [[https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=780162|here]]. For now do the following for all drives in the array, and then configure your system to change the SCSI command timer automatically on boot: {{{

cat /sys/block/<dev>/device/timeout
smartctl -l scterc /dev/<dev>

# smartctl reports SCT ERC in tenths of a second; set the SCSI command timer
# (in seconds) above it, eg: (scterc/10)+10, so 17 for an SCT ERC value of 70
echo 17 > /sys/block/<dev>/device/timeout
}}}
 The default value is 30 seconds, which should be fine for disks that support SCT and likely have low timeout values like 7 sec. For disks that fail smartctl -l scterc, and thus do not support SCT, set the timeout value to 120. Consider a timeout of 180 to be extra safe with large consumer-grade disks.
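 One simple way to apply such a timeout on every boot is a small loop suitable for /etc/rc.local (a sketch; the device list and the value are placeholders to adjust for your drives): {{{
for dev in sdb sdc; do
    echo 120 > /sys/block/$dev/device/timeout
done
}}}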
 Does it support SSD optimizations? :: Yes, Debian Jessie and later automatically detect non-rotational hard disks and {{{ssd}}} is added to the btrfs mount options. For more details on using SSDs with Debian, refer to Debian:SSDOptimization.

 What are the recommended options for installing on a pendrive, a SD card or a slow SSD drive? :: When installing, use ''manual partitioning'' and select btrfs as the file system. On first boot, edit {{{/etc/fstab}}} with these options, which should noticeably improve speed and responsiveness (note that compression might cause issues -- NicholasDSteeves): {{{

/dev/sdaX / btrfs x-systemd.device-timeout=0,noatime,compress=lzo,commit=0,ssd_spread,autodefrag 0 0
}}}

 But I have a super-small pendrive and keep running out of space! Now what? :: Using another system, you can try something like this [[https://btrfs.wiki.kernel.org/index.php/FAQ#if_your_device_is_small|If Your Device is Small]] (note that compression might cause issues -- NicholasDSteeves): {{{

mkdir /tmp/pendrive
mount /dev/sdX -o noatime,ssd_spread,compress /tmp/pendrive
btrfs sub snap -r /tmp/pendrive /tmp/pendrive/tmp_snapshot
btrfs send /tmp/pendrive/tmp_snapshot > /tmp/pendrive_snapshot.btrfs
umount /tmp/pendrive

wipefs -a /dev/sdX
mkfs.btrfs --mixed /dev/sdX
mount /dev/sdX -o noatime,ssd_spread,compress /tmp/pendrive
btrfs receive -f /tmp/pendrive_snapshot.btrfs /tmp/pendrive
sync
btrfs fi sync /tmp/pendrive/

}}}

Now follow the procedure for converting a read-only snapshot to a live system and/or enabling / on a subvolume. Also, the bootloader needs to be reinstalled if your pendrive is a bootable OS drive and not just a data drive (''Needs to be written'' --NicholasDSteeves).

 Does it support compression? :: Yes, but consider this functionality experimental. Add {{{compress=lzo}}} or {{{compress=zlib}}} (depending on the desired level of compression vs speed, lzo being faster and zlib having more compression): {{{
/dev/sdaX / btrfs defaults,compress=lzo 0 1
}}}

 Change {{{/dev/sdaX}}} to your actual root device (UUID support in btrfs is a work-in-progress, but it works for mounting volumes; use the command blkid to get the UUID of all filesystems). In fact, there are many other options you can add; just look at ''The Btrfs Wiki'' [[https://btrfs.wiki.kernel.org/index.php/Mount_options|Mount Options]]. (Remember: all fstab mount options must be comma-separated, NOT space-separated, so do not insert a space after the comma or the equals sign).

 In order to check if you have written the options correctly before rebooting and therefore before being in trouble, run this command as root: {{{
mount -o remount /
}}}
 If no error is reported, everything is OK. Never try to boot with a broken fstab, or you will have to recover it manually, which is considerably more complicated.

 What if you just want to compress the files in a directory? :: You can do this by applying the following two commands (for example for {{{/var}}}): {{{
btrfs filesystem defragment -r -v -clzo /var
chattr +c /var
}}} By adding the {{{+c}}} attribute you ensure that any new file created inside the folder is compressed.

 What are the recommended options for a rotational hard disk? (note that compression might cause issues -- NicholasDSteeves) ::

 In /etc/fstab {{{
UUID=<the_device_uuid> /mount/point/ btrfs noauto,compress=lzo,noatime,autodefrag 0 0
}}}
 The noauto option prevents the system from hanging at boot when a non-system (and likely unplugged) device or partition is missing. Alternatively, if you are using systemd, want to limit the boot delay to 10 seconds in case of a missing device, and that device is necessary for normal functioning of the system, you can try the following; boot will halt with an error if the device is not found: {{{

UUID=<the_device_uuid> /mount/point btrfs x-systemd.device-timeout=10,noatime,compress=lzo,autodefrag 0 0
}}}
 (Consider revoking this recommendation, because autodefrag, like -o discard, can trigger buggy behaviour. Also consider revoking the compress=lzo recommendation for rotational disks, because while it increases throughput for sequentially written compressible data, it also magnifies fragmentation...which means lots more seeks and increased latency -- NicholasDSteeves)

 Can I encrypt a btrfs installation? :: Yes: select ''manual partitioning'', create an encrypted volume, and then create a btrfs file system on top of it. For the moment btrfs does not support native encryption, so the installer uses ''cryptsetup''; native encryption is a planned feature, and experimental patches have been submitted to enable it (Anand Jain, ''linux-btrfs'', [[https://www.spinics.net/lists/linux-btrfs/msg52565.html|Add btrfs encryption support]]).
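 Outside the installer, the same layering can be set up manually, roughly like this (a sketch; the partition, label, and mapping name are placeholders): {{{
cryptsetup luksFormat /dev/sdX2
cryptsetup luksOpen /dev/sdX2 cryptroot
mkfs.btrfs -L cryptdata /dev/mapper/cryptroot
}}}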

 Does it work on RaspberryPi? :: Yes, improving filesystem I/O responsiveness a lot. You may have to [[https://btrfs.wiki.kernel.org/index.php/Conversion_from_Ext3|convert the filesystem to btrfs first]] from a PC and change the {{{/etc/fstab}}} type of filesystem from {{{ext4}}} to {{{btrfs}}} (just by changing the name) before the first boot. Look above for recommended sdcard options in {{{/etc/fstab}}}.

 Fsck.btrfs doesn't do anything; how do I verify the integrity of my filesystem? :: Rather than a fsck, btrfs has two methods to detect and repair corruption. The first method executes as a background process for a mounted volume. It has a default IO priority of idle, and it strives to minimize the impact on other active processes; nevertheless, like any IO-intensive background job, it is best to run it at a time when the system is not busy. To run it: {{{
btrfs scrub start /btrfs_mountpoint
}}} To monitor its progress: {{{
btrfs scrub status /btrfs_mountpoint
}}} The second method checks an '''unmounted''' filesystem. It verifies that the metadata and filesystem structures of the volume are intact and uncorrupted. It should not usually be necessary to run this type of check. Please note that it runs read-only; this is by design, and there are usually better methods to recover a corrupted btrfs volume than to use the dangerous "--repair" option. Please do not use "--repair" unless someone has assured you that it is absolutely necessary. To run a standard read-only metadata and filesystem structures verification: {{{
btrfs check -p /dev/sdX }}} or {{{
btrfs check -p /dev/disk/by-partuuid/UUID
}}}

 Is there anything I can do to improve system responsiveness while running a scrub, balance, or defrag? :: Yes, but only if the CFQ scheduler is enabled for the affected btrfs drives, because the "idle" ionice class requires the CFQ scheduler. {{{
cat /sys/block/sdX/queue/scheduler
# Should return something like "noop deadline [cfq]" for rotational disks
# If it does not, then
echo -n cfq > /sys/block/sdX/queue/scheduler
}}}
 Use your preferred method to make this permanent (eg: /etc/rc.local, or a udev rule).

 Btrfs makes my desktop slow, is there anything I can do to restore a snappy feeling? :: Yes, but at the cost of greater slowdown during scrub, balance, and defrag. (I use this for both my SSD and two 4200RPM disk btrfs raid1 --NicholasDSteeves) {{{
cat /sys/block/sdX/queue/scheduler
# Should return something like "noop [deadline] cfq" if deadline is enabled
# If it does not, then
echo -n deadline > /sys/block/sdX/queue/scheduler
}}}
 Use your preferred method to make this permanent (eg: /etc/rc.local, or a udev rule).

 How can I quickly check to see if my btrfs volume has experienced errors, with per-device accounting of any possible errors? :: If you have a new enough copy of btrfs-progs, you can get an at-a-glance overview of all devices in your pool by running the following: {{{
btrfs dev stats /btrfs_mountpoint
}}} For a healthy two device raid1 volume this command will output something like: {{{
[/dev/sdb1].write_io_errs 0
[/dev/sdb1].read_io_errs 0
[/dev/sdb1].flush_io_errs 0
[/dev/sdb1].corruption_errs 0
[/dev/sdb1].generation_errs 0
[/dev/sdc1].write_io_errs 0
[/dev/sdc1].read_io_errs 0
[/dev/sdc1].flush_io_errs 0
[/dev/sdc1].corruption_errs 0
[/dev/sdc1].generation_errs 0
}}}

 COW on COW: Don't do it! :: This includes overlayfs, unionfs, databases that do their own COW, certain cowbuilder configurations, and virtual machine disk images. Please disable COW in the application if possible. For example, for QEMU, refer to [[DebianMan:1/qemu-img|qemu-img(1)]] and take care to use raw images. If this is not possible, you can disable COW on a single directory like this {{{
mkdir directory
chattr +C directory
}}}

 New files in this directory will inherit the nodatacow attribute. Alternatively, nodatacow can be applied to a single file, but only to empty files: {{{
touch file
chattr +C file
}}}
 Please read the earlier warning about using nodatacow. If your application supports integrity checks and/or self-healing, you will want to enable them if you use nodatacow for that application...but that might not be enough if you lose a whole disk!
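 For the QEMU case mentioned above, a raw disk image in a nodatacow directory can be created roughly like this (a sketch; the directory and image name are illustrative): {{{
mkdir -p /var/lib/libvirt/images/nocow
chattr +C /var/lib/libvirt/images/nocow
qemu-img create -f raw /var/lib/libvirt/images/nocow/vm1.img 20G
}}}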

 What happens if I mix differently sized disks in raid1 profile? :: "RAID1 (and transitively RAID10) guarantees two copies on different disks, always. Only dup allows the copies to reside on the same disk. This is guaranteed is preserved, even when n=2k+1 and mixed-capacity disks. If disks run out of available chunks to satisfy the redundancy profile, the result is ENOSPC and requires the administrator to balance the file system before new allocations can succeed. The question essentially is asking if Btrfs will spontaneously degrade into "dup" if chunks cannot be allocated on some devices. That will never happen." (Justin Brown, 2016-06-03, [[https://mail-archive.com/linux-btrfs@vger.kernel.org/msg54443.html|linux-btrfs]]).

 Why doesn't updatedb index /home when /home is on its own subvolume? :: Consult this thread on [[https://www.spinics.net/lists/linux-btrfs/msg70930.html|linux-btrfs]]. The workaround I use is to have each top-level subvolume (id=5 or subvol=/) mounted at /btrfs-admin/$LABEL, where /btrfs-admin is root:sudo 750, and this is what I use in /etc/updatedb.conf: {{{
PRUNE_BIND_MOUNTS="no"
PRUNENAMES=".git .bzr .hg .svn"
PRUNEPATHS="/tmp /var/spool /media /btrfs-admin /var/cache /var/lib/lxc"
PRUNEFS="NFS nfs nfs4 rpc_pipefs afs binfmt_misc proc smbfs autofs
iso9660 ncpfs coda devpts ftpfs devfs mfs shfs sysfs cifs lustre tmpfs
usbfs udf fuse.glusterfs fuse.sshfs curlftpfs"
}}}
 With the exception of LXC root filesystems, I have a flat subvolume structure under each subvol=/. These subvolumes are mounted at specific mountpoints using fstab. Given that updatedb and locate work flawlessly, and that I've only had two issues (with the free space cache) while using LTS kernels, I'm inclined to conclude that this is the least disruptive configuration. If I used snapper I'd add it to PRUNEPATHS and rely on its facilities to find files that had been deleted, because I don't want to see n-duplicates-for-file when I use locate. A user who wanted to see those duplicates could remove the path from PRUNEPATHS.

== Old but still relevant References ==
The number of snapshots per volume ''and per subvolume'' must be carefully monitored and/or automatically pruned, because too many snapshots can wedge the filesystem into an out of space condition or gravely degrade performance (Duncan, 2016-02-16, [[http://www.spinics.net/lists/linux-btrfs/msg52131.html|linux-btrfs]]). There are also reports that IO becomes sluggish and lags with far fewer snapshots, eg: only 86/subvolume on linux-4.0.4; this might be fixed in a newer kernel (Pete, 2016-03-11, [[http://www.spinics.net/lists/linux-btrfs/msg52881.html|linux-btrfs]]).
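
A quick way to check how many snapshots currently exist on a volume (a sketch; the mount point is a placeholder): {{{
btrfs subvolume list -s /btrfs_mountpoint | wc -l
}}}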

== TODO ==
 * Write section explaining what btrfs' raid1 and raid10 profiles actually are eg: 2 copies distributed on n devices, and that adding more devices does not make more copies; adding devices increases the size of the volume, but both raid1 and raid10 profiles always only make 2 copies. Adding more devices to increase redundancy is what upstream calls "raid1 profile n-copies" and no one is currently working on implementing this functionality.
 * Add warning for current remount behavior when raid1 or raid10 experiences a failed device. Does it still add chunks in profile=single, creating a volume that has both degraded raid1 chunks and single chunks? If this still happens, then the volume locks to read-only the next time it is mounted.
 * Write HOWTO for sbuild + schroot + btrfs, either here or somewhere else. (where should it go? --NicholasDSteeves)
 * More explicitly, address the dangers of going snapshot crazy, or using a loose and easy snapper config, because performance crashes somewhere between 250 and 300 snapshots per subvolume, and also sometimes wedges the volume into an unmountable state. (More recently I've read more conservative estimates of no more than a dozen snapshots per subvolume, with a limit of 250 subvolumes--including snapshots)
 * Warn about ways to innocently make a system unbootable, while experimenting
 * Write FAQ entry on "my array is so slow!" -- needs research into both bcache and upstream's recommendation of btrfs raid1 of raid0 (either mdraid or hardware raid) pairs.
 * Rewrite "Does it work on RaspberryPi?" to not use btrfs convert

== Documentation ==

 * Primary manpages: [[DebianMan:5/btrfs|btrfs(5)]]
 [[DebianMan:8/btrfs|btrfs(8)]]
 [[DebianMan:8/mkfs.btrfs|mkfs.btrfs(8)]]
 [[DebianMan:8/btrfs-balance|btrfs-balance(8)]]
 [[DebianMan:8/btrfs-device|btrfs-device(8)]]
 [[DebianMan:8/btrfs-filesystem|btrfs-filesystem(8)]]
 [[DebianMan:8/btrfs-property|btrfs-property(8)]]
 [[DebianMan:8/btrfs-scrub|btrfs-scrub(8)]]
 [[DebianMan:8/btrfs-show|btrfs-show(8)]]
 [[DebianMan:8/btrfs-subvolume|btrfs-subvolume(8)]]
 [[DebianMan:8/btrfstune|btrfstune(8)]],
 and others from DebianPkg:btrfs-progs.
 * [[https://btrfs.wiki.kernel.org/index.php/Main_Page#Documentation|Btrfs wiki: Documentation]]
 * [[WikiPedia:Btrfs]] on Wikipedia
Line 46: Line 280:
 * [[https://btrfs.wiki.kernel.org/index.php/FAQ|Btrfs wiki: FAQ]]
 * [[https://btrfs.wiki.kernel.org/index.php/Gotchas|Btrfs wiki: Gotchas]]


== Contact ==
 * Btrfs mailing list: linux-btrfs@vger.kernel.org
 * https://btrfs.wiki.kernel.org/index.php/Btrfs_mailing_list
 * IRC: [[irc://irc.freenode.net/btrfs|#btrfs]]
