

Linux Containers (LXC) provide a Free Software virtualization system for computers running GNU/Linux, accomplished through kernel-level isolation using cgroups (control groups) and namespaces. It allows one to run multiple virtual units simultaneously. These units are similar to chroot environments, but offer stronger isolation and use the available resources efficiently, as they run on the same kernel as the host.

Official upstream LXC documentation and help are available here; in particular, see the Getting Started page for an introduction to LXC containers.

Full support for LXC (including the userspace tools) has been available since the Debian 6.0 "Squeeze" release, and packaging for the newer LXD tooling has been available since the Debian 12.0 "Bookworm" release.

You can also read the subpages of this document:

Using this document

When looking for documentation, howtos and tutorials, please check which LXC version they apply to, as things might have changed. The 2.1 release, for example, changes the configuration file structure in several ways.

The rest of this page describes LXC in the current stable Debian release. Other Debian/LXC releases are documented in subpages of this document (see the top of this page); the work of moving information about non-stable releases into those subpages is in progress.

Supported versions of LXC

LXC (upstream) has the following releases:

Version    EOL              In Debian release
3.0 LTS    June 1st 2023    Buster
4.0 LTS    June 1st 2025    Bullseye
5.0 LTS    June 1st 2027    Bookworm

Installation

The typically required packages are lxc, debootstrap and bridge-utils (the latter two are recommended by the lxc package); libvirt-bin is optional.
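For example, a minimal sketch of the installation step, using the package names listed above:

  # apt install lxc debootstrap bridge-utils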

Optionally, if you want LXC to run unprivileged containers, the package requirements are slightly different. (Note: libpam-cgfs is unnecessary if the host uses a pure cgroup v2 hierarchy.)
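A sketch of the additional host packages typically involved, assuming a cgroup v1 (hybrid) host; uidmap provides the newuidmap/newgidmap helpers used by unprivileged containers, and libpam-cgfs can be omitted on pure cgroup v2 hosts as noted above:

  # apt install lxc uidmap libpam-cgfs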

Known issues

Networking

See lxc.container.conf(5) § NETWORK for information about the various types of networking available in LXC.

Debian's packages do not ship any default network setup for containers:
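Out of the box, /etc/lxc/default.conf contains only an empty network stanza, i.e. (key name per LXC 2.1+; older releases use lxc.network.type, as mentioned below):

lxc.net.0.type = empty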

If you want your containers to have network access (and you usually do), you will have to either change the global default or configure each individual container. You will probably also have to set up a bridge, a firewall and maybe DHCP (see below for details on how to do this).

Please note that most container templates configure the first interface to use DHCP by default.

Since "Debian stretch" there are helper scripts called lxc-net that allow you to set up a simple bridge for your containers, providing a DHCP-and-NATed IPv4 network. IPv6 support currently (in Stretch, TODO: what about buster?) requires manual configuration.

For a complete manual setup without the convenience of lxc-net, see the networking section below.

Caveat on internet documentation: much of it conflicts because it was written for differing LXC versions. For a quick overview, see the "Networking Essentials" section below. This wiki page may itself be outdated in places.

Networking Essentials

Typically a container is either given its own hardware device from the host (phys), or given a virtual device (veth) attached to a bridge. That bridge can either include the host's network device (so the container uses the same DHCP server and addresses as the host), or be an independent bridge that is masqueraded to the outside world and serves an internal subnet.

The first case (phys) requires a dedicated physical device and is therefore not often used. Like the second case, it gives the container an address on the host's subnet.

The second case (veth with a host-shared bridge) makes the host's ethernet device part of a bridge and gives the container access to the external network, allowing it to acquire a DHCP address on the same network the host is on.

The third case (veth with an independent bridge) is the use case of lxc-net (since LXC 2.0). It implies a masqueraded subnet (e.g. 10.0.3.0/24, the default for lxc-net) on which the host takes address 10.0.3.1 and the containers take addresses between 10.0.3.2 and 10.0.3.254.

Further networking documentation

Host-shared bridge setup

Edit /etc/lxc/default.conf and change the following lines to enable networking for all containers:
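A minimal sketch, using the LXC 2.1+ key names that appear elsewhere on this page and assuming the host bridge is named br0 (created in the next step):

lxc.net.0.type = veth
lxc.net.0.link = br0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx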

Create the network bridge:
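For example, on a host configured with ifupdown, /etc/network/interfaces might contain something like the following sketch (the physical interface name eth0 is an assumption; bridge_ports and bridge_fd come with the bridge-utils package installed above, and the old eth0 stanza should be removed or set to manual):

auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_fd 0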

Destroy any existing containers and create them again.
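For example (a sketch; <container> is a placeholder for your container's name):

  lxc-destroy -n <container>
  lxc-create -n <container> -t debian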

Independent bridge setup

These are system-wide changes executed prior to creating your container:

  1. Create /etc/default/lxc-net with the following line:

    • USE_LXC_BRIDGE="true"

This will source /usr/lib/x86_64-linux-gnu/lxc/lxc-net, which contains a default networking configuration that will assign your bridge the subnet 10.0.3.0/24. You can change these values in the /etc/default/lxc-net file if you want (see the example after this list).

There is an Ubuntu patch to /usr/lib/<architecture>/lxc/lxc-net that will automatically configure /etc/default/lxc-net with an available 10.0.x.0/24 subnet on your system, 10.0.3.0/24 by default. This is done at system boot if /etc/default/lxc-net is missing, so to use the feature you must delete /etc/default/lxc-net.

      For other purposes, see /SimpleBridge#Using_lxc-net for values you can add yourself.

  2. Edit /etc/lxc/default.conf and change the default

    • lxc.network.type = empty
      to this:
      lxc.net.0.type = veth
      lxc.net.0.link = lxcbr0
      lxc.net.0.flags = up
      lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx
      This will create a template for newly created containers.
  3. Run sudo service lxc-net restart.

  4. Newly created containers now have the above configuration. This means they will be using the lxcbr0 bridge created by the lxc-net service.

This bridge is what your containers attach to. It is now created automatically at boot, and your newly created containers are configured to use it. Existing containers can be reconfigured by applying the above configuration in /var/lib/lxc/<container>/config.
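As referenced in step 1, here is a sketch of the values you can override in /etc/default/lxc-net (the variable names are those used by the lxc-net script; the addresses shown are its defaults and purely illustrative):

USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
LXC_DHCP_MAX="253"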

This is the same setup as the one on LXC/SimpleBridge, which contains some good default values for this kind of setup. Its "Host device as bridge" section describes an alternate setup that builds a bridge out of the host system's main network device, as detailed above. That setup cannot be created by lxc-net, but you can use it if you do not want masquerading to take place and you want your containers to be on the external network.

Creating containers

Privileged vs. Unprivileged Containers

LXC supports two types of containers: privileged and unprivileged. Upstream explains:

Enabling the creation of the recommended unprivileged containers requires some preliminary manual configuration, as explained below. (The following is taken from various versions of README.Debian in lxc; see there for more information, and see 925899 for some of the background and technical details.) See also the "Unprivileged Containers" section below for additional important information.

Configuration Necessary For Unprivileged Containers

Enable Unprivileged User Namespaces

Default Debian kernels have had unprivileged user namespaces enabled since version 5.10. To check, run this command:

  # sysctl kernel.unprivileged_userns_clone
  kernel.unprivileged_userns_clone = 1

If it reports 0 instead of 1, it is disabled. To enable it, append kernel.unprivileged_userns_clone=1 to /etc/sysctl.conf, or put it in a file such as /etc/sysctl.d/unpriv-usernd.conf, then run sysctl -p (or sysctl --system if you used a file under /etc/sysctl.d).
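For example, a sketch using the sysctl.d file name suggested above:

  # echo kernel.unprivileged_userns_clone=1 > /etc/sysctl.d/unpriv-usernd.conf
  # sysctl --system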

Configure AppArmor

In .config/lxc/default.conf, set one of the following:
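A sketch of the two common choices (the key name is the LXC 2.1+ spelling; the lxc-container-default-cgns profile is the one discussed below, and unconfined is the usual alternative if you run into AppArmor problems):

lxc.apparmor.profile = unconfined
# or:
lxc.apparmor.profile = lxc-container-default-cgns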

This step can also be done in the newly created container's configuration (the setting in .config/lxc/default.conf will only work for subsequently created containers).

Due to a bug in the AppArmor parser, systemd units or the whole container may fail when the lxc-container-default-cgns profile is used. See /SystemdMountsAndAppArmor for workarounds.

Networking

From README.Debian: the easiest way to set up networking is to use lxc-net, which is enabled by default for containers started by root. For non-root unprivileged containers, you need to allow your non-root user to create virtual network interfaces with:

  # echo myusername veth lxcbr0 10 >> /etc/lxc/lxc-usernet

(The fields are the user name, the interface type, the bridge to attach to, and the number of interfaces that user may create; see lxc-usernet(5).)

Container Creation

In this step your container is downloaded using debootstrap and a minimal Debian system is installed in the rootfs location (/var/lib/lxc/<container>/rootfs). After this step the container is complete and ready to run.

The rootfs location, along with many other settings, can be configured per container (after the container is created) if required.

lxc-create -n <name> -t debian -- -r stretch

The -r stands for "release"; you can also install other releases. It is a parameter that is passed to Debian's LXC template script, and here it causes Stretch to be downloaded as the minimal debootstrap Debian system.

<name> is the name you give your container; it can be anything you like.

Alternatively, you can specify the locale and the mirror to use for debootstrap as environment variables:

LANG=C SUITE=stretch MIRROR=http://httpredir.debian.org/debian lxc-create -n debian9 -t debian

This also passes "stretch" as an environment variable instead of as a parameter to the script (template). Scripts and templates are found in /usr/share/lxc/templates/.

External mounts inside the container

By default, only the container's filesystem is mounted inside the container (even if, on the host, /var/lib/lxc/mycontainer/rootfs has other mount points).

To mount another filesystem in the container, add to /var/lib/lxc/mycontainer/config:

lxc.mount.entry=/path/in/host/mount_point mount_point_in_container none bind 0 0

Another bind mount example:

# Exposes /dev/sde in the container
lxc.mount.entry = /dev/sde dev/sde none bind,optional,create=file

To mount another filesystem (for example an LVM volume) on a container mount point:

lxc.mount.entry = /dev/mapper/lvmfs-home-partition home ext4 defaults 0 2

NOTE that it is critical to have no leading "/" in the container mount point (making it a relative mount point).

Mounts in unprivileged containers

When a container is unprivileged, the UID or GID of a mounted device has to map to root inside the container. To ensure this, run e.g. chgrp 100000 /dev/nvidiactl on the host (assuming GID 100000 is the container's root group).

On the other hand, when the host's /home is mounted in an unprivileged container with

lxc.mount.entry = /home home none bind,rw 0 0

its host-side UID/GID cannot be altered. To enable UID 1000 in the unprivileged container to access files of UID 1000 in /home on the host, we have to adjust the UID/GID mapping between the host and the container as follows:

# Container's UID/GID 0-65535 are mapped to host's 100000-165535,
# but UID/GID 1000 on the container is mapped to host's UID/GID 1000.
lxc.idmap = u 0 100000 1000
lxc.idmap = g 0 100000 1000
lxc.idmap = u 1000 1000 1
lxc.idmap = g 1000 1000 1
lxc.idmap = u 1001 101001 64535
lxc.idmap = g 1001 101001 64535
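Note that whoever starts the container must be allowed to delegate these ID ranges. For containers managed by root, that means matching entries in /etc/subuid and /etc/subgid; a sketch for the mapping above (the ranges are illustrative, adjust to your own):

root:100000:65536
root:1000:1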

Common hints

root passwords

In LXC releases from 2.0.8 onward no root passwords are set by default.

If you need to set the password of a container (because you forgot the random one, or want to adjust the default), you can do so with lxc-attach -n <container> passwd.

Caveats

Start and stop containers

Notes/warnings on starting and stopping containers:

Actual commands:
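A minimal sketch of the usual commands (see the lxc-start(1), lxc-stop(1) and lxc-ls(1) man pages for details):

  # lxc-start -n <container> -d    # start the container in the background
  # lxc-stop -n <container>        # shut the container down cleanly
  # lxc-ls -f                      # list containers and their state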

Command Line Access To Containers

There are two main methods to get command line access to containers:

`lxc-attach`

lxc-attach -n my-container is the simplest method to get command line access to a container. One complication is that getting the environment configured sanely can be tricky. lxc-attach has two mutually exclusive options: --keep-env and --clear-env. The former keeps the current environment for attached programs, while the latter clears the environment before attaching, so no undesired environment variables leak into the container (see `man lxc-attach` for more information). The former is the current default behavior, "but it is likely to change in the future, since this may leak undesirable information into the container." In addition to leaking undesirable information, keeping the current environment variables can also result in a broken environment: for example, if a non-root user starts an unprivileged container with --keep-env, $HOME inside the container will remain set to the user's home directory on the host, which will not even exist in the container.

Running scripts designed for normal environments in an lxc-attach session can thus be tricky. For example, the pi-hole basic installation script will fail in a session with --keep-env (the default), since it will try to access $HOME, which does not exist, as above. On the other hand, the installation script will also fail in a session with --clear-env, with the error "TERM environment variable needs set". A solution in this case is to run a session with something like: lxc-attach --clear-env --keep-var TERM.

ssh

A more standardized method to get command line access to containers, which may avoid the above complications with the environment, is via ssh. In at least some templates (including Debian ones), ssh access is not configured by default, but setting it up is relatively simple. Here are instructions for Debian templates:
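A minimal sketch (the exact steps in the original instructions may differ; the container's IP address can be shown with lxc-ls -f). Inside the container, install an SSH server and set a root password:

  # lxc-attach -n <container> -- apt install openssh-server
  # lxc-attach -n <container> -- passwd root

Since Debian's sshd does not allow password logins as root by default, you may also need to set PermitRootLogin yes in the container's /etc/ssh/sshd_config (or, better, install an SSH key for root, or create a regular user instead).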

You should now be able to ssh in to root@ip_address_of_the_container.

Preparing host system WITHOUT Systemd for running LXC

You can skip this section if you are running systemd (the default); your host system is already prepared.

Systems running sysvinit need to be prepared for running LXC, as LXC requires cgroups to be mounted (among other things, perhaps).

The best solution is to install package cgroupfs-mount.

Note

At least on Debian 10 (buster), lxc-checkconfig still complains that "Cgroup v1 systemd controller: missing", and containers with systemd refuse to start.

According to the Gentoo wiki, if you want to run containers with systemd, the host also needs a name=systemd cgroup hierarchy mounted:

  # mkdir -p /sys/fs/cgroup/systemd
  # mount -t cgroup -o none,name=systemd systemd /sys/fs/cgroup/systemd

With that, lxc-checkconfig no longer complains and containers with systemd are able to start.

This was reported to package cgroupfs-mount in bug #939435 (patch included).

If the cgroupfs-mount package is not available, add this line to /etc/fstab (this is not necessary if libvirt-bin is installed, as init.d/libvirt-bin will mount /sys/fs/cgroup automatically):

cgroup  /sys/fs/cgroup  cgroup  defaults  0   0

Try to mount it (a reboot solves an eventual "resource busy" problem in any case):

mount /sys/fs/cgroup

Check the kernel configuration:

# lxc-checkconfig
Kernel config /proc/config.gz not found, looking in other places...
Found kernel config file /boot/config-2.6.32-5-amd64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled

--- Control groups ---
Cgroup: enabled
Cgroup namespace: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: missing
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig

Above, lxc-checkconfig reports "Cgroup memory controller: missing". If you want memory control via cgroups, you need to recompile the Linux kernel. To avoid problems when using memory limit settings during startup of a container, you must add cgroup_enable=memory to the kernel command line (Jessie or later). This applies even if the Cgroup memory controller reports "enabled".

/!\ Be aware that mounting cgroup from /etc/fstab has side effects, like being unable to edit NetworkManager connections.

Support

References

Debian-specific information

See also:


CategorySoftware | CategoryVirtualization | CategorySystemAdministration