Linux Containers (LXC) provide a Free Software virtualization system for computers running GNU/Linux. This is accomplished through kernel-level isolation using cgroups (control groups) and namespaces, and it allows one to run multiple isolated virtual units simultaneously. These units are similar to chroot environments, but are better isolated and use the available resources efficiently, since they all run on the host's kernel.
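For illustration, you can inspect the namespaces a running container uses from the host. This is a minimal sketch; it assumes a container named mycontainer is already running:

# Get the PID of the container's init process (-H: raw, machine-readable output)
lxc-info -n mycontainer -pH
# List the namespaces that process runs in
lsns -p $(lxc-info -n mycontainer -pH)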
Official upstream LXC documentation and help is available here; in particular, see the Getting Started page for an introduction to LXC containers.
Full support for LXC (including userspace tools) has been available since the Debian 6.0 "Squeeze" release; packaging of the newer LXD tooling is anticipated for the upcoming Debian 12.0 "Bookworm" release.
You can also read these subpages:
/CGroupV2, /CGroupV2/Discussion, /Discussion, /LibVirtDefaultNetwork, /SimpleBridge, /UpgradingHostFromScratchToBuster, /VlanNetworking
Contents
- Using this document
- Supported versions of LXC
- Installation
- Networking
- Creating containers
- External mounts inside the container
- Common hints
- Start and stop containers
- Command Line Access To Containers
- Preparing host system WITHOUT Systemd for running LXC
- Support
- References
- Debian-specific information
Using this document
When looking for documentation, howtos and tutorials, please check which LXC version they apply to, as things might have changed. The 2.1 release, for example, changes the configuration file structure in several ways.
The rest of this page describes LXC in the current Debian stable release. Other Debian/LXC releases are documented in subpages of this document (see the top of this page); moving information about non-stable releases into those subpages is a work in progress.
Supported versions of LXC
LXC (upstream) has the following releases:
Version | EOL | In Debian release
3.0 LTS |     | Buster
4.0 LTS |     | Bullseye
5.0 LTS |     | Bookworm (anticipated)
Installation
Typically required packages are lxc, debootstrap and bridge-utils (the latter two are recommended by the lxc package). libvirt-bin is optional.
apt install lxc
Optionally:
apt install libvirt-bin
If you want LXC to run unprivileged container(s), the package requirements are slightly different.
apt-get install lxc libvirt0 libpam-cgfs bridge-utils uidmap
(Note: libpam-cgfs is unnecessary if the host runs a pure CGroup V2 hierarchy.)
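For unprivileged containers run by a non-root user, that user also needs a range of subordinate UIDs/GIDs. Recent Debian systems usually allocate these at user creation; if yours are missing, the following sketch shows one way to add them (the user name myusername and the 100000-165535 range are assumptions; adjust to your system):

# Check the existing allocations
grep myusername /etc/subuid /etc/subgid
# Allocate a 65536-ID range if none exists
usermod --add-subuids 100000-165535 --add-subgids 100000-165535 myusername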
Networking
See the upstream documentation for information about the various types of networking available in LXC.
Debian's packages do not ship any default network setup for containers:
$ head -n 1 /etc/lxc/default.conf
lxc.network.type = empty
If you want networking in your containers (and you usually do), you will have to either change the global default or configure each individual container. You will probably also have to set up a bridge, a firewall and maybe DHCP (see below for details on how to do this).
Please note that most container templates configure the first interface to use DHCP by default.
Since "Debian stretch" there are helper scripts called lxc-net that allow you to set up a simple bridge for your containers, providing a DHCP-and-NATed IPv4 network. IPv6 support currently (in Stretch, TODO: what about buster?) requires manual configuration.
For a complete manual setup without the convenience of lxc-net, see the networking section below.
Caveat on internet documentation:
There is much conflicting documentation due to differing versions. As a quick overview check out the "Networking Essentials" section below. This wiki may also be outdated.
Networking Essentials
Typically, a container is either given its own hardware device from the host (phys) or a virtual device (veth). A veth device is attached to a bridge that either shares the host's network device (so the container uses the same DHCP server and addresses as the host) or is an independent bridge that is masqueraded to the outside world and given an internal subnet.
The first case (phys) requires a dedicated physical device and is therefore not often used. Like the second case, it gives the container an address on the host's subnet.
The second case (veth with host-shared bridge) turns the host's ethernet device into a bridge (makes it part of a bridge) and gives the container access to the external network, allowing it to acquire a DHCP address on the same network the host is on.
The third case (veth with independent bridge) is the use case of lxc-net (since LXC 2.0) and implies a masqueraded subnet (e.g. 10.0.3.0/24, the lxc-net default) on which the host takes address 10.0.3.1 and the containers take addresses between 10.0.3.2 and 10.0.3.254.
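Expressed as container configuration, the three cases look roughly as follows (a sketch; the device names eth1, br0 and lxcbr0 are assumptions that depend on your host):

# Case 1 (phys): hand a physical host NIC (here eth1) to the container
lxc.net.0.type = phys
lxc.net.0.link = eth1

# Case 2 (veth, host-shared bridge): attach to a bridge br0 that
# contains the host's ethernet device
lxc.net.0.type = veth
lxc.net.0.link = br0

# Case 3 (veth, independent bridge): attach to the masqueraded lxc-net bridge
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0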
Further networking documentation
SimpleBridge explains both the host-shared bridge and the independent bridge (natted/routed).
VlanNetworking describes a VLAN + bridge setup.
LibVirtDefaultNetwork describes an easy network setup using the libvirt package (dated).
Host-shared bridge setup
Edit /etc/lxc/default.conf and change the following lines to enable networking for all containers:
lxc.net.0.type = veth
lxc.net.0.link = virbr0
lxc.net.0.flags = up
# you can leave these lines as they were:
lxc.apparmor.profile = generated
lxc.apparmor.allow_nesting = 1
Create the network bridge:
$ sudo apt-get install -y libvirt-clients libvirt-daemon-system iptables ebtables dnsmasq-base libxml2-utils iproute2
$ sudo virsh net-start default
$ sudo virsh net-autostart default
libvirt-daemon-system contains a default bridge configuration.
If you do not want to accept all the given defaults, omit the -y flag above.
You may also want to consider the --no-install-recommends flag, since libvirt-daemon will pull in the qemu package, which in turn pulls in a lot of GUI components you don't need for LXC; see the example below.
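A sketch of the same installation using that flag (same package selection as above):

$ sudo apt-get install --no-install-recommends -y libvirt-clients libvirt-daemon-system iptables ebtables dnsmasq-base libxml2-utils iproute2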
Destroy any existing containers and create them again so that they pick up the new default configuration.
Independent bridge setup
These are system-wide changes executed prior to creating your container:
Create /etc/default/lxc-net with the following line:
USE_LXC_BRIDGE="true"
This will cause /usr/lib/x86_64-linux-gnu/lxc/lxc-net to be used; it contains a default networking configuration that assigns your bridge the subnet 10.0.3.0/24. You can override these values in the /etc/default/lxc-net file.
There is an Ubuntu patch to /usr/lib/<architecture>/lxc/lxc-net that automatically writes /etc/default/lxc-net with an available subnet between 10.0.x.0 and 10.0.3.0, by default 10.0.3.0. This is done on system boot if /etc/default/lxc-net is missing; to use the feature, you must delete /etc/default/lxc-net.
For other purposes, see /SimpleBridge#Using_lxc-net for values you can add yourself.
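As a sketch, a customized /etc/default/lxc-net might look like this (the addresses are assumptions; adjust them to your network):

USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
LXC_DHCP_MAX="253"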
Edit /etc/lxc/default.conf and change the default
lxc.network.type = empty
to this:

lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx
This will create a template for newly created containers.
Run sudo service lxc-net restart.
Newly created containers now have the above configuration. This means they will be using the lxcbr0 bridge created by the lxc-net service.
This is the bridge your containers attach to. The bridge is created automatically at boot-up, and newly created containers are configured to use it. Existing containers can be adapted by adding the above configuration to /var/lib/lxc/<container>/config.
This is the same setup as the one on LXC/SimpleBridge, which contains some good default values for this kind of setup. The "Host device as bridge" section there contains an alternate setup which turns the host system's main network device into a bridge, as detailed above. That setup cannot be created by lxc-net, but it is something you could use if you do not want masquerading to take place and you want your containers on the external network.
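To verify the result, a quick sketch (assuming the default bridge name lxcbr0):

# The bridge should exist and hold the address 10.0.3.1
ip addr show lxcbr0
# The lxc-net service should be active
sudo systemctl status lxc-net
# Running containers and the addresses they acquired
sudo lxc-ls -f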
Creating containers
Privileged Vs. Unprivileged Containers
LXC supports two types of containers: privileged and unprivileged. Upstream explains:
- LXC containers can be of two kinds:
- Privileged containers
- Unprivileged containers
Enabling the creation of the recommended unprivileged containers requires some preliminary manual configuration, as explained below. (The following is taken from various versions of README.Debian in the lxc package; see there for more information, and see bug 925899 for some of the background and technical details.) See also the section titled "Unprivileged Containers" below for additional important information.
Configuration Necessary For Unprivileged Containers
Enable Unprivileged User Namespaces
Default Debian kernels (5.10 and later) have unprivileged user namespaces enabled. To check, run this command:
# sysctl kernel.unprivileged_userns_clone
kernel.unprivileged_userns_clone = 1
If it reports 0 instead of 1, it is disabled. To enable it, append kernel.unprivileged_userns_clone=1 to /etc/sysctl.conf, or to a file such as /etc/sysctl.d/unpriv-usernd.conf, then run sysctl -p, as sketched below.
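Concretely (the file name unpriv-usernd.conf follows the example above):

# echo 'kernel.unprivileged_userns_clone=1' > /etc/sysctl.d/unpriv-usernd.conf
# sysctl -p /etc/sysctl.d/unpriv-usernd.conf
kernel.unprivileged_userns_clone = 1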
Configure AppArmor
In ~/.config/lxc/default.conf, set one of the following:
lxc.apparmor.profile = unconfined
lxc.apparmor.profile = lxc-container-default-cgns
This step can also be done in the newly created container's configuration (the setting in ~/.config/lxc/default.conf will only apply to subsequently created containers).
Networking
From README.Debian: The easiest way to setup networking is to use lxc-net, which is enabled by default for containers started by root. For non-root unprivileged containers, you need to allow your non-root user to create virtual network interfaces with:
# echo myusername veth lxcbr0 10 >> /etc/lxc/lxc-usernet
Container Creation
In this step your container is downloaded using debootstrap and a minimal Debian system is installed in the rootfs location (/var/lib/lxc/<container>/rootfs). After this, your container is complete and ready to run.
The rootfs location - along with many other settings - can be configured per container (after the container is created) if required.
lxc-create -n <name> -t debian -- -r stretch
The -r stands for "release"; you can also install other releases. It is a parameter passed to Debian's LXC template script and causes Stretch to be downloaded as the minimal "debootstrap" Debian system.
<name> is the name you give your container; it can be anything you like.
Alternatively, you can specify the language (locale) if required, and additionally the mirror to use for debootstrap, in this way:
LANG=C SUITE=stretch MIRROR=http://httpredir.debian.org/debian lxc-create -n debian9 -t debian
This also passes "stretch" as an environment variable instead of as a parameter to the script (template). Scripts and templates are found in /usr/share/lxc/templates/.
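If you prefer prebuilt images, the generic download template achieves a similar result. A sketch (the image list depends on what the image server currently offers):

# Fetch a prebuilt image: distribution, release and architecture
lxc-create -n debian11 -t download -- -d debian -r bullseye -a amd64
# Or run it without arguments to choose interactively
lxc-create -n mycontainer -t download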
External mounts inside the container
By default only the container's filesystem is mounted inside the container (even if on the host, /var/lib/lxc/mycontainer/rootfs has other mount points).
To mount another filesystem in the container, add to /var/lib/lxc/mycontainer/config:
lxc.mount.entry=/path/in/host/mount_point mount_point_in_container none bind 0 0
Another bind mount example:
# Exposes /dev/sde in the container
lxc.mount.entry = /dev/sde dev/sde none bind,optional,create=file
To mount another filesystem (for example, an LVM volume) onto a container mount point:
lxc.mount.entry = /dev/mapper/lvmfs-home-partition home ext4 defaults 0 2
NOTE that it is critical to have no leading "/" in the container mount point (making it a relative mount point).
Mounts in unprivileged containers
When a container is unprivileged, the UID or GID of a mounted device has to map to root in the container. To ensure this, run chgrp 100000 /dev/nvidiactl etc. on the host (assuming GID 100000 is the container's root group).
On the other hand, when the host's /home is mounted into an unprivileged container with

lxc.mount.entry = /home home none bind,rw 0 0

its UID/GID cannot be altered. To enable UID 1000 in an unprivileged container to access files of UID 1000 under /home on the host, the UID/GID mapping between the host and the container has to be adjusted as follows:
# Container's UID/GID 0-65535 are mapped to host's 100000-165535,
# but UID/GID 1000 on the container is mapped to host's UID/GID 1000.
lxc.idmap = u 0 100000 1000
lxc.idmap = g 0 100000 1000
lxc.idmap = u 1000 1000 1
lxc.idmap = g 1000 1000 1
lxc.idmap = u 1001 101001 64535
lxc.idmap = g 1001 101001 64535
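For such a custom mapping to work, the user starting the container must be allowed to delegate the mapped ranges. As a sketch, if root starts the container, /etc/subuid and /etc/subgid would need entries along these lines (an assumption; adjust to the user actually starting the container):

# in both /etc/subuid and /etc/subgid
root:100000:65536
root:1000:1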
Common hints
root passwords
In LXC releases from 2.0.8 onward no root passwords are set by default.
If you need to set the password of a container (because you forgot the random one, or want to adjust the default), you can do so with lxc-attach -n <container> passwd.
Caveats
Containers not running with full device permissions (which is the default, restricted, mode) spew out systemd errors inside the container, as systemd tries to set all devices as available (or even not-available). These messages can be turned off by setting MaxLevelStore=6 in the container's /etc/systemd/journald.conf; if that doesn't work, the cgroup may also need to be auto-mounted as "ro" (instead of "mixed") (needs confirmation).
If lxc-checkconfig reports "Multiple /dev/pts instances: missing", don't be alarmed: https://edmondscommerce.github.io/fedora-24-lxc-multiple-/dev/pts-instances-missing/
- default.conf has no mechanism to substitute the hostname into configuration files. That means that while networking can have automatically assigned values for hwaddr, it is impossible for default.conf to express "containers should be logged to /var/log/lxc-container-$HOSTNAME.log".
Use of lxc on Debian hosts in the unified CGroup hierarchy (pure CGroup V2 hierarchy) is explained in CGroupV2.
Start and stop containers
Notes/warnings on starting and stopping containers:
When you connect to a container console (via lxc-console), lxc will let you know how to quit it. The first time you log in however, getty may clear the screen, so you'll probably miss this bit of information:
Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself
If you're using screen and also use the Ctrl+a command prefix, type <Ctrl+a a q> to exit the console.
When you start the container in foreground mode (with -F), there's apparently no way to quit the terminal (<Ctrl+a q> doesn't work). Make sure you start the containers in background mode (the default), unless you need to debug why a container didn't start.
Actual commands:
To start a container in the background and attach to the console at any time later:
lxc-start -n myvm
lxc-console -n myvm
To start a container in foreground mode and stay attached to the console (see warning above):
lxc-start -F -n myvm
To stop a container immediately, without a proper halt inside the container:
lxc-stop -k -n myvm
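For a clean shutdown instead, omit -k; lxc-stop then asks the container's init system to halt properly:

lxc-stop -n myvm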
To have containers automatically started on booting the host, edit their config file and add:
lxc.start.auto = 1
If your container is defined in a non-default path (e.g. you used the -P option to lxc-create), you must symlink their config file to /etc/lxc/auto/:
ln -s /var/lib/lxc/mycontainer/config /etc/lxc/auto/mycontainer
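You can check which containers are marked for autostart in the AUTOSTART column of:

sudo lxc-ls -f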
Command Line Access To Containers
There are two main methods to get command line access to containers:
- lxc-attach
- ssh
lxc-attach
lxc-attach -n my-container is the simplest method to get command line access to a container. One complication is that getting the environment configured sanely can be tricky. lxc-attach has two mutually exclusive options: --keep-env and --clear-env. The former keeps the current environment for attached programs, while the latter clears the environment before attaching, so no undesired environment variables leak into the container (see `man lxc-attach` for more information). The former is the current default behavior, "but it is likely to change in the future, since this may leak undesirable information into the container." In addition to leaking undesirable information, keeping the current environment variables can also result in a broken environment. For example, if a non-root user starts an unprivileged container with --keep-env, $HOME inside the container will remain set to the user's home directory on the host - which will not even exist in the container.
Running scripts designed for normal environments in an lxc-attach session can thus be tricky. For example, the pi-hole basic installation script will fail in a session with --keep-env (the default), since it will try to access $HOME, which does not exist, as above. On the other hand, the installation script will also fail in a session with --clear-env, with the error "TERM environment variable needs set". A solution in this case is to run a session that clears the environment but keeps TERM, as shown below.
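A full invocation as a sketch (my-container is a placeholder name):

# Clear the host environment, but keep TERM so curses-based scripts work
lxc-attach -n my-container --clear-env --keep-var TERM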
ssh
A more standardized method to get command line access to containers, which may avoid the above complications with the environment, is via ssh. In at least some templates (including Debian ones), ssh access is not configured by default, but setting it up is relatively simple. Here are instructions for Debian templates:
- Attach to the container, and run:
apt install openssh-server
mkdir /root/.ssh
chmod 700 /root/.ssh
touch /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
On your regular system, open your public key file (e.g. $HOME/.ssh/id_rsa.pub) in a text editor and copy the key to the clipboard.
Inside the container, open /root/.ssh/authorized_keys in a text editor (vi is installed in Debian template installations), and paste in the key from the clipboard.
You should now be able to ssh in to root@ip_address_of_the_container.
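To find the container's IP address from the host, lxc-ls or lxc-info can help; a sketch assuming a container named my-container on the default lxc-net subnet:

# All containers with their IPv4/IPv6 addresses
sudo lxc-ls -f
# Just the address(es) of one container
sudo lxc-info -n my-container -iH
# Then connect
ssh root@10.0.3.2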
Preparing host system WITHOUT Systemd for running LXC
You can skip this section if you are running systemd (the default); your host system is already prepared.
Systems running sysvinit need to be prepared for running LXC, as LXC requires cgroups to be mounted (among other things, perhaps).
The best solution is to install package cgroupfs-mount.
Note
At least on Debian 10 (buster), lxc-checkconfig still complains that "Cgroup v1 systemd controller: missing", and containers with systemd refuse to start.
According to Gentoo wiki, if you want to run containers with systemd, the host also needs a name=systemd cgroup hierarchy mounted: mkdir -p /sys/fs/cgroup/systemd; mount -t cgroup -o none,name=systemd systemd /sys/fs/cgroup/systemd.
With that, lxc-checkconfig no longer complains and containers with systemd are able to start.
This was reported to package cgroupfs-mount in bug #939435 (patch included).
If the package cgroupfs-mount is not available, add this line to /etc/fstab (this is not necessary if libvirt-bin is installed, as init.d/libvirt-bin will mount /sys/fs/cgroup automatically):
cgroup /sys/fs/cgroup cgroup defaults 0 0
Try to mount it (a reboot will in any case resolve a possible "resource busy" problem):
mount /sys/fs/cgroup
Check the kernel configuration:
# lxc-checkconfig
Kernel config /proc/config.gz not found, looking in other places...
Found kernel config file /boot/config-2.6.32-5-amd64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled
--- Control groups ---
Cgroup: enabled
Cgroup namespace: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: missing
Cgroup cpuset: enabled
--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled
Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
Above, lxc-checkconfig reports "Cgroup memory controller: missing". If your kernel genuinely lacks the memory controller, you need to recompile it; on Debian kernels it is compiled in but may be disabled at boot. To use memory control via cgroups, and to avoid problems when a container with memory limit settings starts, add cgroup_enable=memory to the kernel command line (Jessie or later). This applies even if the Cgroup memory controller line reports "enabled". A sketch follows below.
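On a typical Debian host booting with GRUB, this is one way to add the parameter (an assumption; adapt to your bootloader):

# In /etc/default/grub, extend the kernel command line:
GRUB_CMDLINE_LINUX="cgroup_enable=memory"
# Then regenerate the GRUB configuration and reboot:
update-grub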
Be aware that mounting cgroup from /etc/fstab has side effects, such as being unable to edit NetworkManager connections.
Support
To discuss LXC and Debian LXC: the LXC mailing list (on Gmane)
References
Debian-specific information
See also:
https://blog.rot13.org/2010/03/lxc-watchdog_missing_bits_for_openvz_-_linux_containers_migration.html, which describes a tool for controlling a guest's startup/shutdown through power signals, along with some additional console setup.
CategorySoftware | CategoryVirtualization | CategorySystemAdministration