Linux Containers (LXC) provide a Free Software virtualization system for computers running GNU/Linux. This is accomplished through kernel-level isolation using cgroups (control groups) and namespaces. It allows one to run multiple virtual units simultaneously. Those units are similar to chroots but isolated, and they utilize available resources efficiently, as they run on the same kernel.
Full support for LXC (including the userspace tools) has been available since the Debian 6.0 "Squeeze" release; packaging of the newer LXD tooling is still a work in progress.
You can also read some subpages:
- /CGroupV2
- /CGroupV2/Discussion
- /Discussion
- /JessieSpecific
- /LibVirtDefaultNetwork
- /SimpleBridge
- /Squeeze-Backport
- /UpgradingHostFromScratchToBuster
- /VlanNetworking
- Using this document
- Supported versions of LXC
- Creating containers
- External mounts inside the container
- Common hints
- Start and stop containers
- Command Line Access To Containers
- Content accuracy warning
- Unprivileged containers
- Preparing host system WITHOUT Systemd for running LXC
- Incompatibility with systemd
- Debian-specific information
Using this document
When looking for documentation, howtos and tutorials, please check which LXC version they apply to, as things might have changed. The 2.1 release, for example, changes the configuration file structure in several ways.
The rest of this page describes LXC in the currently stable Debian release. Other Debian/LXC releases are documented in subpages of this document (see the top of this page). Work to move information about non-stable releases into its own subpages is in progress.
Supported versions of LXC
LXC (upstream) has the following releases:
- LXC 1.0 (LTS), supported upstream until June 1st 2019
- LXC 2.0 (LTS), supported upstream until June 1st 2021; in Debian: Stretch and Jessie Backports
To install LXC and the optional libvirt tooling:

apt install lxc
apt install libvirt-bin
If you want LXC to run unprivileged container(s), the package requirements are slightly different.
apt-get install lxc libvirt0 libpam-cgfs bridge-utils uidmap
See the upstream documentation for information about the various types of networking available in LXC.
Debian's packages do not ship any default network setup for containers:
$ head -n 1 /etc/lxc/default.conf
lxc.network.type = empty
If you want to have networking in your containers (and you usually do), you will have to either change the global default or configure each individual container. You will probably also have to set up a bridge, a firewall and maybe DHCP (see below for details on how to do this).
Please note that most container templates configure the first interface to use DHCP by default.
Since Debian "stretch" there is a helper script called lxc-net that allows you to set up a simple bridge for your containers, providing a DHCP-and-NATed IPv4 network. IPv6 support currently requires manual configuration (verified for Stretch; TODO: what about Buster?).
For a complete manual setup without the convenience of lxc-net, see the networking section below.
Caveat on internet documentation:
There is much conflicting documentation due to differing versions. As a quick overview check out the "Networking Essentials" section below. This wiki may also be outdated.
Typically, containers can be given their own hardware device from the host (phys), or they can have a virtual device (veth) that is either attached to a bridged network device shared with the host (using the same DHCP server and addresses as the host) or attached to a bridged network device that is masqueraded to the outside world and given an internal subnet.
The first case (phys) requires a physical device and is therefore not often used. Like the second case, it gives the container an address on the host's subnet.
The second case (veth with host-shared bridge) turns the host's ethernet device into a bridge (makes it part of a bridge) and allows the container access to the external network allowing it to acquire a DHCP address on the same network that the host is on.
The third case (veth with independent bridge) is the use case of lxc-net (since LXC 2.0) and implies the use of a masqueraded subnet (e.g. 10.0.3.0/24, the default for lxc-net) on which the host takes address 10.0.3.1 and containers take IPs between 10.0.3.2 and 10.0.3.254.
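To make the third case more concrete, here is a dry-run sketch of the kind of setup lxc-net performs. The bridge name lxcbr0 and subnet 10.0.3.0/24 are lxc-net's defaults, but the exact commands below are an illustrative assumption, not lxc-net's actual code; run() prints instead of executing, so this is safe to try.

```shell
# Dry-run helper: print the command instead of executing it
run() { echo "+ $*"; }

run ip link add name lxcbr0 type bridge
run ip addr add 10.0.3.1/24 dev lxcbr0
run ip link set lxcbr0 up
# Masquerade traffic leaving the container subnet for any external destination
run iptables -t nat -A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -j MASQUERADE
# lxc-net additionally starts a dnsmasq instance bound to lxcbr0 that hands
# out DHCP leases in the 10.0.3.x range
```

Removing the run() wrapper (and running as root) would apply the equivalent setup by hand, which is what the manual setup section below walks through.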
Further networking documentation
SimpleBridge explains both the host-shared bridge and the independent bridge (natted/routed).
VlanNetworking describes a VLAN + bridge setup.
LibVirtDefaultNetwork describes an easy network setup using the libvirt package (old).
Host-shared bridge setup
Edit /etc/lxc/default.conf and change the following lines to enable networking for all containers:
lxc.net.0.type = veth
lxc.net.0.link = virbr0
lxc.net.0.flags = up
# you can leave these lines as they were:
lxc.apparmor.profile = generated
lxc.apparmor.allow_nesting = 1
Create the network bridge:
$ sudo apt-get install -y libvirt-clients libvirt-daemon-system iptables ebtables dnsmasq-base libxml2-utils iproute2
$ sudo virsh net-start default
$ sudo virsh net-autostart default
libvirt-daemon-system contains a default bridge configuration.
If you do not want to accept all of the defaults, omit the -y flag above. You may also want to use the --no-install-recommends flag, since libvirt-daemon pulls in the qemu package, which in turn pulls in a lot of GUI packages you do not need for LXC.
Destroy any existing containers and create them again.
Independent bridge setup
These are system-wide changes executed prior to creating your container:
Create /etc/default/lxc-net with the following line:

USE_LXC_BRIDGE="true"
This will source /usr/lib/x86_64-linux-gnu/lxc/lxc-net, which contains a default networking configuration that will assign your bridge the subnet 10.0.3.0/24. You can change these values if you want in the /etc/default/lxc-net file.
There is an Ubuntu patch to /usr/lib/<architecture>/lxc/lxc-net that will automatically configure /etc/default/lxc-net with a subnet between 10.0.x.0 and 10.0.3.0 that is available on your system, by default 10.0.3.0. This is done on system boot if /etc/default/lxc-net is missing. To use the feature, you must delete /etc/default/lxc-net.
For other purposes, see /SimpleBridge#Using_lxc-net for values you can add yourself.
Edit /etc/lxc/default.conf and change the default

lxc.network.type = empty

to this:
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx

This will create a template for newly created containers.
Run sudo service lxc-net restart.
Newly created containers now have the above configuration. This means they will be using the lxcbr0 bridge created by the lxc-net service.
This bridge is where your containers attach themselves to. Your bridge is now automatically created (at boot-up) and your newly created containers are configured to use this bridge. Existing containers can be configured by using the above configuration or by editing /var/lib/lxc/<container>/config.
This is the same setup as the one on LXC/SimpleBridge that contains some good default values for this kind of setup. The Host device as bridge section contains an alternate setup which employs a bridge created out of the main network device of the host system, as detailed above. This is not something that can be created by lxc-net but it is something you could use if you do not want Masquerading to take place and you want your containers to be on the external network.
Privileged Vs. Unprivileged Containers
LXC supports two types of containers: privileged and unprivileged. Upstream explains:
- LXC containers can be of two kinds:
- Privileged containers
- Unprivileged containers
Enabling the creation of the recommended unprivileged containers requires some preliminary manual configuration, as explained below. (The following is taken from various versions of README.Debian in lxc; see there for more information, and see 925899 for some of the background and technical details.) See also the section below titled "Unprivileged Containers" for additional important information.
Configuration Necessary For Unprivileged Containers
Enable Unprivileged User Namespaces
Default Debian kernels since 5.10+ have unprivileged user namespaces enabled. To check, run this command:
# sysctl kernel.unprivileged_userns_clone kernel.unprivileged_userns_clone = 1
If it reports 0 instead of 1, it's disabled. To enable it, append kernel.unprivileged_userns_clone=1 to /etc/sysctl.conf, or to a file such as /etc/sysctl.d/unpriv-usernd.conf, then run sysctl -p.
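The check and the fix can be combined in a small sketch (the sysctl.d file name matches the example above; on kernels without Debian's patch the key may be absent entirely, which this also reports):

```shell
# Read the current value; fall back to "missing" if the key does not exist
v=$(sysctl -n kernel.unprivileged_userns_clone 2>/dev/null || echo missing)
if [ "$v" = 1 ]; then
    echo "unprivileged user namespaces: enabled"
else
    # As root, persist the setting and apply it immediately:
    echo "run: echo kernel.unprivileged_userns_clone=1 >> /etc/sysctl.d/unpriv-usernd.conf && sysctl -p /etc/sysctl.d/unpriv-usernd.conf"
fi
```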
In .config/lxc/default.conf, set one of the following:
lxc.apparmor.profile = unconfined
lxc.apparmor.profile = lxc-container-default-cgns
This step can also be done in the newly created container's configuration (the setting in .config/lxc/default.conf will only work for subsequently created containers).
From README.Debian: The easiest way to setup networking is to use lxc-net, which is enabled by default for containers started by root. For non-root unprivileged containers, you need to allow your non-root user to create virtual network interfaces with:
# echo myusername veth lxcbr0 10 >> /etc/lxc/lxc-usernet
Creating containers

In this step your container is downloaded using debootstrap and a minimal Debian system is installed into the rootfs location (/var/lib/lxc/<container>/rootfs). After this, your container is complete and ready to run.
Rootfs location - along with many other settings - can be configured per container (after container is created) if required.
lxc-create -n <name> -t debian -- -r stretch
The -r stands for "release". You can also install other releases. The -r is a parameter that is passed to Debian's LXC template script. It causes Stretch to be downloaded as the minimal debootstrap Debian system.
<name> is the name you give your container. It can be anything you like.
Alternatively you can specify the language (locale) if required, and additionally the mirror to use for debootstrap, in this way:
LANG=C SUITE=stretch MIRROR=http://httpredir.debian.org/debian lxc-create -n debian9 -t debian
This also passes "stretch" as an environment variable instead of as a parameter to the script (template). Scripts and templates are found in /usr/share/lxc/templates/.
External mounts inside the container
By default only the container's filesystem is mounted inside the container (even if on the host, /var/lib/lxc/mycontainer/rootfs has other mount points).
To mount another filesystem in the container, add to /var/lib/lxc/mycontainer/config:
lxc.mount.entry=/path/in/host/mount_point mount_point_in_container none bind 0 0
Another bind mount example:
# Exposes /dev/sde in the container lxc.mount.entry = /dev/sde dev/sde none bind,optional,create=file
To mount another filesystem (for example an LVM volume) at a container mount point:
lxc.mount.entry = /dev/mapper/lvmfs-home-partition home ext4 defaults 0 2
NOTE that it is critical to have no leading "/" in the container mount point (making it a relative mount point).
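As a quick sanity check for this rule, here is a hypothetical lint (the check_mount_entries helper and the sample entries are made up for illustration) that flags lxc.mount.entry lines whose container-side mount point is absolute:

```shell
# The container-side mount point is the 4th whitespace-separated field
# (after "lxc.mount.entry", "=", and the host-side source)
check_mount_entries() {
  awk '/^lxc\.mount\.entry/ { if ($4 ~ /^\//) print "WARNING: absolute container mount point: " $4 }'
}

warnings=$(printf '%s\n' \
  'lxc.mount.entry = /dev/sde dev/sde none bind,optional,create=file' \
  'lxc.mount.entry = /srv/data /srv/data none bind 0 0' | check_mount_entries)
echo "$warnings"
```

Only the second sample line is flagged, since its container mount point (/srv/data) has a leading "/".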
Mounts in unprivileged containers
When a container is unprivileged, the UID or GID of a mounted device has to map to root in the container. To ensure this, run chgrp 100000 /dev/nvidiactl etc. on the host (assuming GID 100000 is the container's root group).
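The host-side ID can be computed from the container ID and the map base; a tiny sketch (the base 100000 matches the common default mapping, but check your own lxc.idmap lines):

```shell
# Under a mapping like "lxc.idmap = g 0 100000 65536", container GID g maps
# to host GID 100000 + g (for g in 0..65535)
base=100000
container_gid=0                      # the container's root group
host_gid=$((base + container_gid))
echo "chgrp $host_gid /dev/nvidiactl   # run this on the host"
```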
On the other hand, when host's /home is mounted in an unprivileged container by
lxc.mount.entry = /home home none bind,rw 0 0
its UID/GID cannot be altered. To enable UID 1000 in an unprivileged container to access files of UID 1000 in /home on the host, we have to adjust UID/GID mapping between the host and the container as follows:
# Container's UID/GID 0-65535 are mapped to host's 100000-165535,
# but UID/GID 1000 in the container is mapped to host's UID/GID 1000.
lxc.idmap = u 0 100000 1000
lxc.idmap = g 0 100000 1000
lxc.idmap = u 1000 1000 1
lxc.idmap = g 1000 1000 1
lxc.idmap = u 1001 101001 64535
lxc.idmap = g 1001 101001 64535
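A quick way to sanity-check such a mapping is to verify that the per-type counts add up to exactly 65536, i.e. that container IDs 0-65535 are each mapped once. This snippet sums the counts from the excerpt above:

```shell
conf='
lxc.idmap = u 0 100000 1000
lxc.idmap = g 0 100000 1000
lxc.idmap = u 1000 1000 1
lxc.idmap = g 1000 1000 1
lxc.idmap = u 1001 101001 64535
lxc.idmap = g 1001 101001 64535
'
# Field layout: $1=lxc.idmap $2== $3=type $4=container-start $5=host-start $6=count
sums=$(echo "$conf" | awk '/^lxc\.idmap/ { total[$3] += $6 }
                           END { print "u " total["u"]; print "g " total["g"] }')
echo "$sums"    # both counts should be 65536
```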
In LXC releases prior to 2.0.8 containers might be created with a random root password, a static password or without a password at all.
From 2.0.8 onward no root passwords are set by default.
If you need to set the password of a container (because you forgot the random one, or want to adjust the default), you can do so with lxc-attach -n <container> passwd.
Containers not running with full device permissions (the default, restricted) spew systemd errors inside the container, as systemd tries to mark all devices as available (or even not available). These messages can be silenced by setting MaxLevelStore=6 in /etc/systemd/journald.conf; if that doesn't work, the cgroup may also need to be auto-mounted "ro" (instead of "mixed") (needs confirmation).
If lxc-checkconfig reports "Multiple /dev/pts instances: missing", don't be alarmed: https://edmondscommerce.github.io/fedora-24-lxc-multiple-/dev/pts-instances-missing/
- default.conf has no mechanism to substitute the hostname into configuration files. That means that while networking can have automatically assigned values for hwaddr, it is impossible for default.conf to express "containers should be logged to /var/log/lxc-container-$HOSTNAME.log".
Use of lxc on Debian hosts in the unified CGroup hierarchy (pure CGroup V2 hierarchy) is explained in CGroupV2.
Start and stop containers
Notes/warnings on starting and stopping containers:
When you connect to a container console (via lxc-console), lxc will let you know how to quit it. The first time you log in however, getty may clear the screen, so you'll probably miss this bit of information:
Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself
If you're using screen and also use the Ctrl+a command prefix, type <Ctrl+a a q> to exit the console.
When you start the container in foreground mode (with -F), there's apparently no way to quit the terminal (<Ctrl+a q> doesn't work). Make sure you start the containers in background mode (the default), unless you need to debug why a container didn't start.
To start a container in the background and attach to the console at any time later:
lxc-start -n myvm
lxc-console -n myvm
To start a container in foreground mode and stay attached to the console (see warning above):
lxc-start -F -n myvm
To stop a container without proper halt inside the container:
lxc-stop -k -n myvm
To have containers automatically started on booting the host, edit their config file and add:
lxc.start.auto = 1
If your container is defined in a non-default path (e.g. you used the -P option to lxc-create), you must symlink their config file to /etc/lxc/auto/:
ln -s /var/lib/lxc/mycontainer/config /etc/lxc/auto/mycontainer
Command Line Access To Containers
There are two main methods to get command line access to containers:
lxc-attach -n my-container is the simplest method to get command line access to a container. One complication is that getting the environment configured sanely can be tricky. lxc-attach has two mutually exclusive options: --keep-env and --clear-env. The former keeps the current environment for attached programs, while the latter clears the environment before attaching, so no undesired environment variables leak into the container (see `man lxc-attach` for more information). The former is the current default behavior, "but it is likely to change in the future, since this may leak undesirable information into the container." In addition to leaking undesirable information, keeping the current environment variables can also result in a broken environment. For example, if a non-root user starts an unprivileged container with --keep-env, $HOME inside the container will remain set to the user's home directory on the host - which will not even exist in the container.
Running scripts designed for normal environments in an lxc-attach session can thus be tricky. For example, the pi-hole basic installation script will fail in a session with --keep-env (the default), since it will try to access $HOME and fail, since this will not exist, as above. On the other hand, the installation script will also fail in a session with --clear-env, with the error TERM environment variable needs set. A solution in this case is to run a session with something like the following: lxc-attach --clear-env --keep-var TERM.
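The effect can be illustrated with plain env(1), without lxc at all; the /bin/sh invocations below simply print whether TERM survives the environment handling:

```shell
# With a fully cleared environment, TERM is gone (this is what breaks
# curses-based installers under --clear-env)
cleared=$(env -i /bin/sh -c 'echo "${TERM:-unset}"')

# Explicitly passing TERM through mirrors: lxc-attach --clear-env --keep-var TERM
kept=$(env -i TERM=xterm /bin/sh -c 'echo "${TERM:-unset}"')

echo "cleared: $cleared"   # cleared: unset
echo "kept:    $kept"      # kept:    xterm
```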
A more standardized method to get command line access to containers, which may avoid the above complications with the environment, is via ssh. In at least some templates (including Debian ones), ssh access is not configured by default, but setting it up is relatively simple. Here are instructions for Debian templates:
- Attach to the container, and run:
apt install openssh-server
mkdir -p /root/.ssh
touch /root/.ssh/authorized_keys
chmod 700 /root/.ssh
chmod 600 /root/.ssh/authorized_keys
On your regular system, open your public key file (e.g. $HOME/.ssh/id_rsa.pub) in a text editor and copy the key to the clipboard.
Inside the container, open /root/.ssh/authorized_keys in a text editor (vi is installed in Debian template installations), and paste in the key from the clipboard.
You should now be able to ssh in to root@ip_address_of_the_container.
Content accuracy warning
Caveat Emptor: The rest of this page is useful but definitely outdated if you are trying to get LXC to work on Debian “stretch” or “sid” (unstable). Proceed with caution.
1. As upstream explains, "most distribution templates simply won't work" with the limitations imposed by unprivileged containers. The download template should be used instead.
2. Unprivileged containers cannot use mknod to create a block or character device, nor perform loop mounts or mount partitions (again, see upstream's explanation). (This is a bit inaccurate: a device special file on the host can be accessed by unprivileged containers if its UID or GID is the same as the container's root and the device file is bind-mounted. One can use an NVIDIA GPU from unprivileged containers.)
The containers keep the same "default" rights as the normal user (because of the normal UID shared with the user). In fact, LXC unprivileged containers fake some parts with subuids and subgids; other operations, like creating devices, are "bypassed" during the installation process of these "tweaked" templates.
3. Keep in mind that you will have to write down your own subuids and subgids
Jessie / Debian 8
See LXC/JessieSpecific for information on setting up Unprivileged containers on Jessie.
How to set up an LXC v2 unprivileged container on Debian 9
Check the system
Run lxc-checkconfig. Everything should be reported as "enabled" (in green). If not, try rebooting the system.
Configuration of the host system
Enter this in a terminal:
sudo sh -c 'echo "kernel.unprivileged_userns_clone=1" > /etc/sysctl.d/80-lxc-userns.conf'
Then reload sysctl without rebooting the system:
sudo sysctl --system
Now, get the subuids and subgids of the current user:
cat /etc/s*id|grep $USER
You should get something like this, where 1258512 is the start of your subuids and subgids:

debian:1258512:65536
debian:1258512:65536

Here debian is the user name.
If the previous command does not return anything, it means that you do not have any subuids and subgids attributed. You can assign them with the usermod command: pick a starting number, add 65536 to it to get the end of the range, and enter these commands in the terminal:

sudo usermod --add-subuids 1258512-1324047 $USER
sudo usermod --add-subgids 1258512-1324047 $USER
A range of 65536 uids and gids is fine in most cases, and enough to share this same range with "all" your containers.
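The range arithmetic (start + 65536 - 1) can be derived directly from an /etc/subuid line; a sketch, using the example values from this page:

```shell
# /etc/subuid lines have the form user:start:count
line='debian:1258512:65536'      # example value; in practice: grep "^$USER:" /etc/subuid
start=${line#*:};  start=${start%%:*}
count=${line##*:}
end=$((start + count - 1))
echo "sudo usermod --add-subuids ${start}-${end} \$USER"
echo "sudo usermod --add-subgids ${start}-${end} \$USER"
```

With the example values, this prints the 1258512-1324047 range used in the commands above.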
Last thing before the network configuration.
In some configurations (not needed on a bare-metal Sid install, nor in KVM and VirtualBox VMs under Jessie, but needed on a bare-metal Jessie install), the lxc unprivileged container will complain that it cannot run, and will ask you to add the +x right to the user's home folder, .local and .local/share.
Permission denied - Could not access /home/$USER/.local. Please grant it x access, or add an ACL for the container root.
In that case, you will have to enter this in a terminal:
sudo setfacl -m u:1258512:x . .local .local/share
After these preparations, lxc-usernsexec should give # prompt with no error.
Configuration of the network
1. Configure a bridge in the host.
2. Configure the virtual network interfaces:
echo "$USER veth lxcbr0 10"| sudo tee -i /etc/lxc/lxc-usernet
Use the correct bridge interface name here. If you set up a bridge named br0, replace lxcbr0 with br0.
The configuration of LXC
1. First, you have to create the lxc folder:
mkdir -p .config/lxc
2. Then, configure the default template for all future lxc unprivileged containers.
echo \
'lxc.include = /etc/lxc/default.conf
# Subuids and subgids mapping
lxc.idmap = u 0 1258512 65536
lxc.idmap = g 0 1258512 65536
# "Secure" mounting
lxc.mount.auto = proc:mixed sys:ro cgroup:mixed
# Network configuration
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:FF:xx:xx:xx:xx
# Disable AppArmor confinement for containers started by non-root
# See https://discuss.linuxcontainers.org/t/unprivileged-container-wont-start-cgroups-sysvinit/6766 and
# https://discuss.linuxcontainers.org/t/cannot-use-generated-profile-apparmor-parser-not-available/4449
lxc.apparmor.profile = unconfined
# Unprivileged containers started by ROOT can use lxc.apparmor.profile = generated
' > .config/lxc/default.conf
Bad settings of the lxc.mount.auto option can lead to security risks and data loss!
Create lxc unprivileged containers
Launch lxc-create as a normal user:
lxc-create --name ubuntu -t download
I recommend choosing these options (the Debian versions tested were too slow):

1. Distribution: ubuntu
2. Release: xenial
3. Architecture: amd64

Here is an example:
Setting up the GPG keyring
Downloading the image index
---
DIST     RELEASE  ARCH   VARIANT  BUILD
---
alpine   3.1      amd64  default  20161124_17:50
[... long list of available alpine, archlinux, centos, debian, fedora,
 gentoo, opensuse, oracle, plamo and ubuntu images trimmed ...]
debian   jessie   amd64  default  20161123_22:42
debian   stretch  amd64  default  20161123_22:42
ubuntu   xenial   amd64  default  20161124_03:49
ubuntu   zesty    s390x  default  20161124_03:49
---
Distribution: ubuntu
Release: xenial
Architecture: amd64
Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs
---
You just created an Ubuntu container (release=xenial, arch=amd64, variant=default)
To enable sshd, run: apt-get install openssh-server
For security reason, container images ship without user accounts and without a root password.
Use lxc-attach or chroot directly into the rootfs to set a root password or create user accounts.
You have now created an Ubuntu unprivileged container.
Configuration of the unprivileged container
Now every change to the configuration has to be made where the unprivileged container resides. In this example, the container is located in $HOME/.local/share/lxc/ubuntu/, and its configuration is $HOME/.local/share/lxc/ubuntu/config.
Check man lxc.container.conf to get more options.
Start of the unprivileged container
Enter the following command:
lxc-start --name ubuntu --logfile $HOME/lxc_ubuntu.log --logpriority DEBUG
This way, if things go wrong, you will be able to track them in the log file.
Now you can connect to the container:
lxc-attach --name ubuntu
Unprivileged Debian container by mmdebstrap --mode=unshare
Without using lxc-create -t download, we can also create an unprivileged Debian container (named mmdebstrap-unpriv) with the following steps, as a non-root user:
$ cd .local/share/lxc
$ mkdir mmdebstrap-unpriv
$ btrfs subvolume create mmdebstrap-unpriv/rootfs  # If you use a btrfs home directory
$ mmdebstrap --mode=unshare --components="main contrib non-free" --variant=important buster mmdebstrap-unpriv/rootfs
# --variant=important can be replaced by --variant=minbase --include=systemd-sysv
# In the container you should change /etc/hostname
lxc-start -F -n mmdebstrap-unpriv with the following container config ($HOME/.local/share/lxc/mmdebstrap-unpriv/config) seems to work fine.
# Uncomment the following line to support nesting containers:
#lxc.include = /usr/share/lxc/config/nesting.conf
# (Be aware this has security implications)

# Distribution configuration
lxc.include = /usr/share/lxc/config/common.conf
lxc.include = /usr/share/lxc/config/userns.conf
lxc.arch = linux64

# The following lines are copied from the "download" template of unprivileged Ubuntu.
lxc.mount.entry = /sys/kernel/debug sys/kernel/debug none bind,optional 0 0
lxc.mount.entry = /sys/kernel/security sys/kernel/security none bind,optional 0 0
lxc.mount.entry = /sys/fs/pstore sys/fs/pstore none bind,optional 0 0
lxc.mount.entry = mqueue dev/mqueue mqueue rw,relatime,create=dir,optional 0 0
lxc.mount.entry = /sys/firmware/efi/efivars sys/firmware/efi/efivars none bind,optional 0 0
lxc.mount.entry = /proc/sys/fs/binfmt_misc proc/sys/fs/binfmt_misc none bind,optional 0 0

# Unprivileged container cannot use the "generated" AppArmor profile.
lxc.apparmor.profile = unconfined
lxc.apparmor.allow_nesting = 1

# The following two lines have to be compatible with /etc/subuid and /etc/subgid
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536

lxc.rootfs.path = btrfs:/home/your-user-name/.local/share/lxc/mmdebstrap-unpriv/rootfs
lxc.uts.name = mmdebstrap-unpriv

# Network configuration
lxc.net.0.type = empty
We see the following error:
systemd-journald-audit.socket: Failed to create listening socket (audit 1): Operation not permitted
systemd-journald-audit.socket: Failed to listen on sockets: Operation not permitted
systemd-journald-audit.socket: Failed with result 'resources'.
[FAILED] Failed to listen on Journal Audit Socket.
See 'systemctl status systemd-journald-audit.socket' for details.
The above error is reported as bug 959921.
Preparing host system WITHOUT Systemd for running LXC
You can skip this section if you are running systemd (the default); your host system is already prepared.
Systems running sysvinit need to be primed for running LXC, as it requires cgroups to be mounted (among other things, perhaps).
The best solution is to install package cgroupfs-mount.
At least on Debian 10 (buster), lxc-checkconfig still complains that "Cgroup v1 systemd controller: missing", and containers with systemd refuse to start.
According to the Gentoo wiki, if you want to run containers with systemd, the host also needs a name=systemd cgroup hierarchy mounted:

mkdir -p /sys/fs/cgroup/systemd
mount -t cgroup -o none,name=systemd systemd /sys/fs/cgroup/systemd
With that, lxc-checkconfig no longer complains and containers with systemd are able to start.
This was reported to package cgroupfs-mount in bug #939435 (patch included).
If package cgroupfs-mount is not available, add this line to /etc/fstab. (This is not necessary if libvirt-bin is installed as init.d/libvirt-bin will mount /sys/fs/cgroup automatically).
cgroup /sys/fs/cgroup cgroup defaults 0 0
Try to mount it (a reboot solves an eventual "resource busy" problem in any case).
Check the kernel configuration:

# lxc-checkconfig
Kernel config /proc/config.gz not found, looking in other places...
Found kernel config file /boot/config-2.6.32-5-amd64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled
--- Control groups ---
Cgroup: enabled
Cgroup namespace: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: missing
Cgroup cpuset: enabled
--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled

Note: Before booting a new kernel, you can check its configuration
usage: CONFIG=/path/to/config /usr/bin/lxc-checkconfig
Above, the lxc-checkconfig program reports "Cgroup memory controller: missing". If you want memory control via cgroups, you need to recompile the Linux kernel. To avoid problems when using memory limit settings during startup of a container, you must add cgroup_enable=memory to the kernel command line (Jessie or later). This applies even if the Cgroup memory controller reports "enabled".
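To check whether the running kernel was booted with that option (a read-only check; actually adding the option is done via your bootloader configuration, e.g. GRUB_CMDLINE_LINUX in /etc/default/grub followed by update-grub):

```shell
# /proc/cmdline holds the parameters the running kernel was booted with
if grep -qw 'cgroup_enable=memory' /proc/cmdline 2>/dev/null; then
    msg="cgroup_enable=memory is on the kernel command line"
else
    msg="cgroup_enable=memory is NOT on the kernel command line"
fi
echo "$msg"
```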
Be aware mounting cgroup from /etc/fstab has side-effects, like being unable to edit network manager connections.
Incompatibility with systemd
The version in Wheezy (0.8.0~rc1-8+deb7u2) is not compatible with running systemd inside the container. See 766216.
- The versions in both jessie and stretch support systemd in the container just fine for Debian guests.
- YMMV for other types of guests
If you encounter "Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted" in a Jessie container, you might consider this advice. The author of these lines has not verified the accuracy of the proposed configuration settings, be it with respect to security or to other side effects, so please use your own judgement.
Upgrading container from "Wheezy" to "Jessie"
When upgrading an lxc guest running "Wheezy" to "Jessie", the lxc VM will stop working, because at the time of writing (23.11.2014) systems will automatically be migrated to systemd. See 766233. This behaviour is being reviewed in 762194.
Switch back to sysv
If the VM was migrated to systemd automatically via an upgrade then you can switch back to sysvinit:
lxc-stop -n myvm              # stop the vm; if that doesn't work, use lxc-kill
# the next step requires the VM to be mounted at /var/lib/lxc/myvm/root
chroot /var/lib/lxc/myvm/root # chroot into the vm
apt-get install sysvinit-core # reinstall old sysvinit
Alternatively you can try to start the container in the foreground and do the same via the container's console.
Not letting your system be updated to systemd during the upgrade
Before upgrade, run:
apt-get install sysvinit-core
or run the following command in place of a usual dist-upgrade:
apt-get dist-upgrade sysvinit-core
See also:
https://blog.rot13.org/2010/03/lxc-watchdog_missing_bits_for_openvz_-_linux_containers_migration.html which describes a tool that allows controlling the guest's startup/shutdown through power signals, and also some more setup for consoles.