Differences between revisions 132 and 133
Revision 132 as of 2016-11-05 13:32:21
Size: 24016
Editor: ?HaydarRabiee
Comment:
Revision 133 as of 2016-12-01 08:54:25
Size: 34480
Editor: ?Hanafi
Comment:
{{{#!wiki note
'''Note'''

1. You cannot use a normal installation ISO. The simplest thing to do is to use the LXC templates of "usual" distributions provided by the ''LXC'' maintainers.

2. Know that, by design, unprivileged containers cannot use mknod to create a block or character device, nor perform loop mounts or mount partitions.

The containers keep the same "default" rights as a normal user (because they share the user's normal uid). In fact, LXC unprivileged containers fake some parts with subuids and subgids, and others, like device creation, are "bypassed" during the installation process of these "tweaked" templates.

3. Keep in mind that you will have to write down your own subuids and subgids.
}}}


==== How to setup LXC v1 unprivileged container on Debian 8 ====
Unwritten.

==== How to setup LXC v2 unprivileged container on Debian 8 ====

If you want to avoid some "manual tweaks", you can use backports (bpo) packages. [[Backports|So you will have to set up backports]]

==== Prepare the HOST ====

Install the latest version of the needed packages:
    {{{
apt-get install -t jessie-backports lxc libvirt0 linux-image-amd64
apt-get install libpam-cgroup libpam-cgfs bridge-utils
}}}


Now follow the procedure for Debian 9, starting at "Check the system".

==== How to setup LXC v2 unprivileged container on Debian 9 ====


==== Prepare the HOST ====

Install the latest version of the needed packages:

    {{{

apt-get install lxc libvirt0 libpam-cgroup libpam-cgfs bridge-utils
}}}


==== Check the system ====

Now, try:

    {{{

lxc-checkconfig
}}}

Everything should be reported as "enabled" in green. If not, try to reboot the system.

==== Configuration of the host system ====

Enter this in a terminal:
    {{{

echo "kernel.unprivileged_userns_clone=1" | sudo tee /etc/sysctl.d/80-lxc-userns.conf
}}}

Then reload sysctl without rebooting the system:

    {{{

sudo sysctl --system
}}}

Now, get the subuids and subgids of the current user:
    {{{
cat /etc/subuid /etc/subgid | grep "^$USER:"
}}}

You should get something like this, where 1258512 is the first id of your subuid and subgid ranges:

    {{{
debian:1258512:65536
debian:1258512:65536
}}}

{{{#!wiki note
'''Note'''

Here '''debian''' is the user name.
}}}

'''If the previous command does not return anything''', it means that you do not have any subuids and subgids attributed.

You will have to assign subuids and subgids with the usermod command. Pick a starting number (here 1258512), add 65536 to it to get the end of the range, and enter these commands in the terminal.

    {{{
sudo usermod --add-subuids 1258512-1324048 $USER
sudo usermod --add-subgids 1258512-1324048 $USER
}}}

{{{#!wiki note
'''Note'''

A range of 65536 uids and gids is fine in most cases, and is enough to share this same range with "all" your containers.
}}}
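
The range arithmetic above can be checked in a shell. This is only a sketch using the example start id 1258512 from this page; substitute the start id you actually chose:

```shell
# Example start id from this page; substitute your own.
START=1258512
SIZE=65536                    # ids per range, as recommended above
END=$((START + SIZE))
echo "${START}-${END}"        # prints 1258512-1324048, the range passed to usermod
```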

One last thing before the network configuration.

In some configurations (not needed on a bare-metal Sid install, nor in KVM and VirtualBox VMs under Jessie, but needed on a bare-metal Jessie install), the lxc unprivileged container will complain that it cannot run, and will ask you to add the +x right to the user's home folder, .local and .local/share:

    {{{

Permission denied - Could not access /home/$USER/.local. Please grant it x access, or add an ACL for the container root.

}}}

In that case, you will have to enter this in a terminal (from your home directory, since the paths below are relative):

    {{{
sudo setfacl -m u:1258512:x . .local .local/share
}}}



==== Configuration of the network ====

1. Configure a bridge in the host.


2. Configure the virtual network interfaces:

    {{{
echo "$USER veth lxcbr0 10"| sudo tee -i /etc/lxc/lxc-usernet
}}}

{{{#!wiki tip
'''Tip'''

Use the right bridge interface name here. If you set up a bridge named br0, replace lxcbr0 with br0.
}}}
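
For step 1, a minimal bridge can be declared in /etc/network/interfaces on the host. This is a sketch of a NAT-style bridge with no physical ports; the lxcbr0 name and 10.0.3.1 address are assumptions chosen to match the other examples on this page:

```
# /etc/network/interfaces fragment (requires the bridge-utils package)
auto lxcbr0
iface lxcbr0 inet static
    bridge_ports none      # standalone bridge, no physical interface enslaved
    bridge_fd 0
    address 10.0.3.1
    netmask 255.255.255.0
```

Bring it up with ifup lxcbr0. For the containers to reach the outside world through such a bridge you would additionally need masquerading rules, as discussed later on this page.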

==== The configuration of LXC ====
1. First, you have to create the lxc configuration folder (relative to your home directory):
    {{{
mkdir -p .config/lxc
}}}

2. Then, configure the default template for all future lxc unprivileged containers.

    {{{
echo \
'lxc.include = /etc/lxc/default.conf
# Subuids and subgids mapping
lxc.id_map = u 0 1258512 65536
lxc.id_map = g 0 1258512 65536
# "Secure" mounting
lxc.mount.auto = proc:mixed sys:ro cgroup:mixed


# Network configuration
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.hwaddr = 00:FF:xx:xx:xx:xx'>.config/lxc/default.conf
}}}



{{{#!wiki warning
'''Warning'''

Bad settings of the lxc.mount.auto option can lead to security risks and data loss!
}}}
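
The hwaddr line above leaves the last three octets as xx placeholders for LXC to randomize. If you prefer a fixed address, one can be generated by hand; a sketch (bash-specific, using the 00:16:3e prefix conventionally used for containers):

```shell
# Generate a random but fixed MAC address with the 00:16:3e prefix
# (the prefix used elsewhere on this page); $RANDOM is a bash feature.
MAC=$(printf '00:16:3e:%02x:%02x:%02x' $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)))
echo "$MAC"
```

Paste the result into the lxc.network.hwaddr line of your default.conf.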


==== Create lxc unprivileged containers ====

Launch lxc-create as a normal user:
    {{{
lxc-create --name ubuntu -t download

}}}

{{{#!wiki note
'''Note'''

I recommend choosing these options (the Debian versions tested were too slow):
}}}
    {{{
1. Distribution: ubuntu
2. Release: xenial
3. Architecture: amd64
}}}


Here is an example:
    {{{
Setting up the GPG keyring
Downloading the image index

---
DIST RELEASE ARCH VARIANT BUILD
---
alpine 3.1 amd64 default 20161124_17:50
alpine 3.1 armhf default 20161124_17:50
alpine 3.1 i386 default 20161124_17:50
alpine 3.2 amd64 default 20161124_17:50
alpine 3.2 armhf default 20161124_17:50
alpine 3.2 i386 default 20161124_19:28
alpine 3.3 amd64 default 20161124_17:50
alpine 3.3 armhf default 20161124_17:50
alpine 3.3 i386 default 20161124_17:50
alpine 3.4 amd64 default 20161124_17:50
alpine 3.4 armhf default 20161124_17:50
alpine 3.4 i386 default 20161124_19:28
alpine edge amd64 default 20161124_19:28
alpine edge armhf default 20161124_17:50
alpine edge i386 default 20161124_17:50
archlinux current amd64 default 20161124_01:27
archlinux current i386 default 20161124_01:27
centos 6 amd64 default 20161124_02:16
centos 6 i386 default 20161124_02:16
centos 7 amd64 default 20161124_02:16
debian jessie amd64 default 20161123_22:42
debian jessie arm64 default 20161123_22:42
debian jessie armel default 20161123_22:42
debian jessie armhf default 20161123_22:42
debian jessie i386 default 20161123_22:42
debian jessie powerpc default 20161123_22:42
debian jessie ppc64el default 20161123_22:42
debian jessie s390x default 20161123_22:42
debian sid amd64 default 20161123_22:42
debian sid arm64 default 20161123_22:42
debian sid armel default 20161123_22:42
debian sid armhf default 20161123_22:42
debian sid i386 default 20161123_22:42
debian sid powerpc default 20161123_22:42
debian sid ppc64el default 20161123_22:42
debian sid s390x default 20161123_22:42
debian stretch amd64 default 20161123_22:42
debian stretch arm64 default 20161123_22:42
debian stretch armel default 20161123_22:42
debian stretch armhf default 20161123_22:42
debian stretch i386 default 20161123_22:42
debian stretch powerpc default 20161104_22:42
debian stretch ppc64el default 20161123_22:42
debian stretch s390x default 20161123_22:42
debian wheezy amd64 default 20161123_22:42
debian wheezy armel default 20161123_22:42
debian wheezy armhf default 20161123_22:42
debian wheezy i386 default 20161123_22:42
debian wheezy powerpc default 20161123_22:42
debian wheezy s390x default 20161123_22:42
fedora 22 amd64 default 20161124_01:27
fedora 22 i386 default 20161124_01:27
fedora 23 amd64 default 20161123_01:27
fedora 23 i386 default 20161123_01:27
fedora 24 amd64 default 20161124_01:27
fedora 24 i386 default 20161123_01:27
gentoo current amd64 default 20161124_14:12
gentoo current i386 default 20161124_14:12
opensuse 13.2 amd64 default 20161124_00:53
oracle 6 amd64 default 20161124_11:40
oracle 6 i386 default 20161124_11:40
oracle 7 amd64 default 20161124_11:40
plamo 5.x amd64 default 20161123_21:36
plamo 5.x i386 default 20161123_21:36
plamo 6.x amd64 default 20161123_21:36
plamo 6.x i386 default 20161123_21:36
ubuntu precise amd64 default 20161124_03:49
ubuntu precise armel default 20161124_03:49
ubuntu precise armhf default 20161124_03:49
ubuntu precise i386 default 20161124_03:49
ubuntu precise powerpc default 20161124_03:49
ubuntu trusty amd64 default 20161124_03:49
ubuntu trusty arm64 default 20161124_03:49
ubuntu trusty armhf default 20161124_03:49
ubuntu trusty i386 default 20161124_03:49
ubuntu trusty powerpc default 20161124_03:49
ubuntu trusty ppc64el default 20161124_03:49
ubuntu xenial amd64 default 20161124_03:49
ubuntu xenial arm64 default 20161124_03:49
ubuntu xenial armhf default 20161124_03:49
ubuntu xenial i386 default 20161124_03:49
ubuntu xenial powerpc default 20161124_03:49
ubuntu xenial ppc64el default 20161124_03:49
ubuntu xenial s390x default 20161124_03:49
ubuntu yakkety amd64 default 20161124_03:49
ubuntu yakkety arm64 default 20161124_03:49
ubuntu yakkety armhf default 20161124_03:49
ubuntu yakkety i386 default 20161124_03:49
ubuntu yakkety powerpc default 20161124_03:49
ubuntu yakkety ppc64el default 20161124_03:49
ubuntu yakkety s390x default 20161124_03:49
ubuntu zesty amd64 default 20161124_03:49
ubuntu zesty arm64 default 20161124_03:49
ubuntu zesty armhf default 20161124_03:49
ubuntu zesty i386 default 20161124_03:49
ubuntu zesty powerpc default 20161124_03:49
ubuntu zesty ppc64el default 20161124_03:49
ubuntu zesty s390x default 20161124_03:49
---

Distribution: ubuntu
Release: xenial
Architecture: amd64

Downloading the image index
Downloading the rootfs
Downloading the metadata
The image cache is now ready
Unpacking the rootfs

---
You just created an Ubuntu container (release=xenial, arch=amd64, variant=default)

To enable sshd, run: apt-get install openssh-server

For security reason, container images ship without user accounts
and without a root password.

Use lxc-attach or chroot directly into the rootfs to set a root password
or create user accounts.
}}}

You have now successfully created an Ubuntu unprivileged container.

==== Configuration of the unprivileged container ====
Every configuration change now has to be made where the unprivileged container resides.
So in this example, the container configuration is located at:

    {{{
ls $HOME/.local/share/lxc/ubuntu/config

}}}

See man lxc.container.conf for more options.
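
As a sketch of the kind of options you might add to that file (option names from lxc.container.conf for LXC 1.x/2.x; the values are illustrative assumptions, not recommendations):

```
# Added to $HOME/.local/share/lxc/ubuntu/config
lxc.start.auto = 1                          # try to start the container at boot
lxc.start.delay = 5                         # wait 5 seconds after starting it
lxc.cgroup.memory.limit_in_bytes = 512M     # cap the container's memory usage
```

Note that lxc.start.auto needs extra setup for unprivileged containers, since they are started by your user rather than by the system lxc service, and memory limits only work if the memory cgroup controller is enabled (see the lxc-checkconfig discussion later on this page).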

==== Start of the unprivileged container ====

Enter the following command:
    {{{
lxc-start --name ubuntu --logfile $HOME/lxc_ubuntu.log --logpriority DEBUG
}}}

This way, if things go wrong, you will be able to track them in the log file.

Now you can connect to the container:
    {{{
lxc-attach --name ubuntu
}}}



Linux Containers (LXC) provide a Free Software virtualization system for computers running GNU/Linux. This is accomplished through kernel level isolation using CGroups and namespaces. It allows one to run multiple virtual units simultaneously. Those units, similar to chroots, are isolated and utilize available resources efficiently, as they run on the same kernel.

Visit https://linuxcontainers.org/ for all related information.

Full support for LXC (including userspace tools) is available since the Debian 6.0 "Squeeze" release.

You can also read some sub-pages:

Supported versions of LXC

There are two versions of the LXC userspace tools currently supported by upstream:

  • LXC 1.0 (supported until June 1st 2019)
  • LXC 2.0 (supported until June 1st 2021)

LXC 1.0 is available in Jessie. LXC 2.0 is available in Stretch and Jessie Backports.

When looking for documentation, howtos and tutorials, please check which LXC version they apply to, as things might have changed.

The rest of this page will try to distinguish between information that applies to both versions and information that applies only to one of them.

Common hints

root passwords

Some templates will create containers with a random root password, some with a static one (like "root" or "toor") and some without a password at all.

The upstream goal is to move all templates to no passwords at some point, as you do not need a password to directly attach to a container locally via lxc-attach -n <container> anyway.

If you need to set the password of a container (because you forgot the random one, or want to adjust the default), you can do so with lxc-attach -n <container> passwd.

network setup

Debian's packages do not ship any default network setup for containers (/etc/lxc/default.conf contains lxc.network.type = empty).

If you want to have network in your containers (and you usually do), you will have to either change the global default or configure each individual container. You will also probably have to set up a bridge, a firewall and maybe DHCP (see below for details on how to do this).

Please note that most container templates configure the first interface to use DHCP by default.

Changes between "Jessie" and "Stretch"

As mentioned before, Stretch ships with a new major release of LXC, which also includes a helper for easier networking setup called lxc-net. lxc-net allows you to set up a simple bridge with DHCP and NAT for your containers.

LXC 2.0 also allows the use of unprivileged containers.

Networked quickstart for Debian Stretch (testing as of Q3 2016)

This information also pertains to Debian Jessie (stable), except that you will need to configure your bridge manually.

Note: In Ubuntu (16.04, 16.10 (?)) the bridge interface lxcbr0 is automatically set up, but not in Debian (as of 1.2.0.4-1), thus requiring manual configuration (hence the lxc packages in Debian and Ubuntu are not identical). Otherwise, after this manual setup, it should function identically.

For complete manual setup without the convenience of Stretch (testing), see the networking section below.

Caveat on internet documentation

There is much conflicting documentation due to differing versions. As a quick overview, check out the Networking Essentials section below. This wiki may also be outdated. When using lxc-net, only the following minimal changes are required to get a networked container up and running in Stretch:

Minimal changes to set up networking for LXC for Debian “stretch” (testing)

These are system-wide changes executed prior to creating your container.

  1. Create /etc/default/lxc-net with the following line:

    • USE_LXC_BRIDGE="true"

      Missing configuration is sourced from /usr/lib/x86_64-linux-gnu/lxc/lxc-net (or /usr/lib/<architecture>/lxc/lxc-net), which contains the default values. The patch 0001-Allocate-new-lxcbr0-subnet-at-startup-time.patch, found in the Ubuntu `lxc` packaging on GitHub, automates configuration of the subnet given to the bridge, picking the first available subnet starting from 10.0.3.0 in the 10.0.x.0 range. If you use that patch, delete /etc/default/lxc-net if it exists and let the lxc-net service create it at startup; otherwise, keep the file you just created and let it be supplemented by the default values sourced from /usr/lib/x86_64-linux-gnu/lxc/lxc-net. See /SimpleBridge#Using_lxc-net for default values you can add yourself.

  2. Edit /etc/lxc/default.conf and change the default

    • lxc.network.type = empty
      to this:
      lxc.network.type = veth
      lxc.network.link = lxcbr0
      lxc.network.flags = up
      lxc.network.hwaddr = 00:16:3e:xx:xx:xx
      This will create a template for newly created containers.
  3. Run sudo service lxc-net restart.

  4. Newly created containers now have the above configuration. This means they will be using the lxcbr0 bridge created by the lxc-net service.
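
For reference, the defaults sourced from /usr/lib/x86_64-linux-gnu/lxc/lxc-net look roughly like this; any of them can be overridden in /etc/default/lxc-net. The values shown are a sketch of the upstream defaults, so double-check them against your installed file:

```
USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
LXC_DHCP_MAX="253"
```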

This bridge is what your containers attach to. Your bridge is now automatically created (at boot-up) and your newly created containers are configured to use it. Existing containers can be adapted by adding the same configuration to their /var/lib/lxc/<container>/config file.

This is the same setup as the one created on the LXC/SimpleBridge page, which contains some good default values for this kind of setup. Its Host device as bridge section contains an alternate setup which employs a bridge created out of the main network device of the host system, as detailed below. This is not something that can be created by lxc-net, but it is something you could use if you do not want masquerading to take place and you want your containers to be on the external network.

More caveats

Unwritten apart from:

* If lxc-checkconfig reports "Multiple /dev/pts instances: missing", don't be alarmed: https://edmondscommerce.github.io/fedora-24-lxc-multiple-/dev/pts-instances-missing/


Caveat Emptor: The rest of this page is useful but definitely outdated if you are trying to get LXC to work on Debian “stretch” (testing) or “sid” (unstable). Proceed with caution.


Rootfs essentials

Typically your container will simply be installed with its rootfs pointing to /var/lib/lxc/<container>/rootfs.

Networking essentials

Typically containers can be given their own hardware device (from the host, phys) or can have a virtual device (veth) that is either put directly on a bridged network device that it shares with the host (uses the same DHCP server and addresses as the host) or put on a bridged network device that is masqueraded to the outside world and that is given an internal subnet.

The first case (phys) requires a physical device and is therefore not often used. It gives the container an address on the host's subnet, as in the second case.

The second case (veth with host-shared bridge) turns the host's ethernet device into a bridge (makes it part of a bridge) and allows the container access to the external network allowing it to acquire a DHCP address on the same network that the host is on.

The third case (veth with independent bridge) is the use case of the lxc-net utility (service), part of lxc version 2.0, and implies the use of a masqueraded subnet (e.g. 10.0.3.0, the default for lxc-net, the above-mentioned patch and this documentation) on which the host takes address 10.0.3.1 and the containers take addresses from 10.0.3.2 up.
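
In container-configuration terms, the second and third cases differ only in which bridge the veth pair is linked to. A sketch for the host-shared case, assuming the host's ethernet device has already been enslaved to a bridge named br0:

```
# /var/lib/lxc/<container>/config fragment
lxc.network.type = veth
lxc.network.link = br0             # host-shared bridge; use lxcbr0 for the lxc-net case
lxc.network.flags = up
lxc.network.hwaddr = 00:16:3e:xx:xx:xx
```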

There is no need to install lxc-net on Debian Stretch; it is part of the lxc package. For older versions, it could be acquired from the older GitHub repository or stripped from the official GitHub repository for LXC. Regardless, on Jessie it is probably more practical to write your own config (if you do not choose to upgrade the package to the version from Stretch). A more complete masquerading setup also has benefits, but you could add the extra firewall rules yourself.

Be advised that the default debootstrap installation of Debian has networking configured as DHCP. On a system without a DHCP server running on the bridge device (as started by lxc-net) or on your local network (depending on setup), the booting of your container will hang for some 15 seconds until you configure the guest as static (or manual). The reason for this is that LXC injects the IP address of your container into its (running) kernel if you have configured it with a static IP(4/6?) address in the container configuration file at /var/lib/lxc/<container>/config. Static configuration inside the guest is therefore not required if you have already configured the address on the outside (in your container's configuration file), because LXC will inject it into the guest. The guest therefore waits for DHCP for no reason (but the maintainers won't fix it for Jessie ;-)). The following command would fix it but is not going to be included in Jessie:

sed -i 's/dhcp/manual/' /var/lib/lxc/<container>/rootfs/etc/network/interfaces

This means that your network configuration is not completed until you either run an internal DHCP server (or have one for the outside) or you perform the above change in your masqueraded setup with your static IP configured in the container's configuration file.

Installation

Typically required packages are lxc, debootstrap and bridge-utils. libvirt-bin is optional.

  • apt install lxc debootstrap bridge-utils

Optionally:

  • apt install libvirt-bin

Preparing the host system for running LXC

For systems older than Jessie, the host system needed to be primed for running LXC, as it required cgroups to be mounted (among other things, perhaps).

You can skip this on Jessie or later. Your host(system) is already prepared.

For older releases, add this line to /etc/fstab. (This is not necessary if libvirt-bin is installed as init.d/libvirt-bin will mount /sys/fs/cgroup automatically).

cgroup  /sys/fs/cgroup  cgroup  defaults  0   0

Try to mount it (a reboot also solves an eventual "resource busy" problem in any case):

mount /sys/fs/cgroup

Check kernel configuration :

# lxc-checkconfig
Kernel config /proc/config.gz not found, looking in other places...
Found kernel config file /boot/config-2.6.32-5-amd64
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled

--- Control groups ---
Cgroup: enabled
Cgroup namespace: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: missing
Cgroup cpuset: enabled

--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled

Note: before booting a new kernel, you can check its configuration file directly:

CONFIG=/path/to/config /usr/bin/lxc-checkconfig

Above, lxc-checkconfig reports "Cgroup memory controller: missing". If you want memory control via cgroups, the controller must be present in the kernel: if your kernel was built without it you need to recompile, but stock Debian kernels include it and merely keep it disabled by default. To avoid problems when using memory limit settings during startup of a container, you must add cgroup_enable=memory to the kernel command line (Jessie or later). This applies even if "Cgroup memory controller" reports "enabled".
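
On a stock Debian host the kernel command line is managed by GRUB, so the flag goes into GRUB_CMDLINE_LINUX in /etc/default/grub, followed by update-grub and a reboot. The sketch below rehearses the edit on a throwaway copy rather than the real file:

```shell
# Demonstration on a scratch file; on a real host edit /etc/default/grub,
# then run update-grub and reboot for the change to take effect.
cfg=./grub-demo.cfg
printf 'GRUB_CMDLINE_LINUX=""\n' > "$cfg"
# Insert cgroup_enable=memory inside the (here empty) quotes
sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1cgroup_enable=memory"/' "$cfg"
cat "$cfg"    # GRUB_CMDLINE_LINUX="cgroup_enable=memory"
```

If GRUB_CMDLINE_LINUX already carries other flags, add a separating space by hand; the one-liner above assumes the empty default.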

Creating your container's root filesystem

We can simply call this "creating your container". In this step your container is downloaded using debootstrap and a minimum Debian system is installed in your rootfs location (/var/lib/lxc/<container>/rootfs). After this, your container is ready to run; it is already complete.

Debian 8 "Jessie"

lxc-create -n <name> -t debian -- -r jessie

The -r stands for "release"; you can also install other releases. It is a parameter passed to Debian's LXC template script and causes Jessie to be downloaded as the minimum "debootstrap" Debian system.

<name> is the name you give your container. It can be any name you like.

Alternatively you can set the locale and, additionally, the mirror that debootstrap should use, via environment variables:

LANG=C SUITE=jessie MIRROR=http://httpredir.debian.org/debian lxc-create -n debian8 -t debian

This also passes "jessie" as an environment variable instead of as a parameter to the template script. Template scripts are found in /usr/share/lxc/templates/.

Debian 7 "Wheezy"

LXC installs correctly on "Wheezy" (including a working Debian template since 7.4).

Use:

lxc-create -n myvm -t debian

which will prompt you on what distribution to install.

Then adapt network configuration in /var/lib/lxc/myvm/config, e.g. to plug it on libvirt's bridge:

lxc.utsname = myvm
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = virbr0
lxc.network.ipv4 = 0.0.0.0/24
lxc.network.hwaddr = 00:1E:62:CH:NG:ME

Other templates can be downloaded. Before 7.4 we recommended the one referenced on the LXC containers mailing list:

lxc-create -n myvm -t debian-wheezy
# or for a 32-bit container:
linux32 lxc-create -n myvm -t debian-wheezy

Issues in Debian 7 "Wheezy":

Setup networked containers

Start and stop containers

Notes/warnings on starting and stopping containers:

  • When you connect to a container console, lxc will let you know how to quit it. The first time you log in however, getty will clear the screen, so you'll probably miss this bit of information:

    Type <Ctrl+a q> to exit the console, <Ctrl+a Ctrl+a> to enter Ctrl+a itself
  • If you're using screen and also use the Ctrl+a command prefix, type <Ctrl+a a q> to exit the console.

  • <!> When you start the container in foreground mode (without -d), there's apparently no way to quit the terminal (<Ctrl+a q> doesn't work). Make sure you start the containers in background mode with -d, unless you need to debug why a container didn't start.

Actual commands:

  • To start a container in the background and attached to the console at any time later run (by default, login/password is root/root):

    lxc-start -n myvm -d
    lxc-console -n myvm
  • To start a container in foreground mode and stay attached to the console run (see warning above):

    lxc-start -n myvm
  • To stop a container without proper halt inside the container:

    lxc-stop -n myvm

    For versions newer than in Jessie, you can instead instruct the container's init system to cleanly halt (see timeout note above):

    lxc-halt -n myvm

    For some versions, the above may yield telinit: timeout opening/writing control channel /run/initctl. Work-around: use lxc.cap.drop = sys_admin in the container config file.

  • To have containers automatically started on booting the host, edit their config file and add:

    lxc.start.auto = 1

    If your container is defined in a non-default path (e.g. you used the -P option to lxc-create), you must symlink your container into /var/lib/lxc for this to work.

    On hosts running a version of Debian/LXC newer than that in jessie, you should instead link their config file in /etc/lxc/auto/:

    ln -s /var/lib/lxc/mycontainer/config /etc/lxc/auto/mycontainer
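
On versions with lxc.start.auto support, autostart behaviour can be tuned further in the same config file. This is a sketch; lxc.start.delay and lxc.start.order are optional knobs from the container config syntax, and the values here are arbitrary examples:

```
# start this container when the host boots
lxc.start.auto = 1
# optional: seconds to wait before starting the next container
lxc.start.delay = 5
# optional: relative ordering used when auto-starting several containers
lxc.start.order = 10
```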

Bind mounts inside the container

By default only the container's filesystem is mounted inside the container (even if on the host, /var/lib/lxc/mycontainer/rootfs has other mount points).

To mount another filesystem in the container, add to /var/lib/lxc/mycontainer/config:

lxc.mount.entry=/path/in/host/mount_point /var/lib/lxc/mycontainer/rootfs/mount_point none bind 0 0

and restart the container. The mount point will now be visible inside the container as well.

Both paths can be identical if necessary.

As of 2015-09-30, the recent security patches fixing CVE-2015-1335 have broken the use of absolute container mount points (as shown above) on some Debian-derived systems. Relative container mount points still work and provide a workaround.

So, for the near future, you can use:

lxc.mount.entry=/path/in/host/mount_point mount_point none bind 0 0

instead of the suggestion above. NOTE that it is critical to have no leading "/" in the container mount point (making it a relative mount point).
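
The relative form can be rehearsed without touching a real container by writing the line into a scratch config file; the host path /srv/data and the in-container path srv/data are purely illustrative:

```shell
# Write a relative bind-mount entry into a scratch config file.
# Note: no leading "/" on the second field (the in-container path).
cfg=./lxc-bind-demo.config
echo 'lxc.mount.entry=/srv/data srv/data none bind 0 0' > "$cfg"
cat "$cfg"
```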

Incompatibility with systemd

  • The version in Wheezy (0.8.0~rc1-8+deb7u2) is not compatible with running systemd inside the container. See 766216.

  • The versions in both jessie and stretch support systemd in the container just fine for Debian guests. Your mileage may vary for other types of guests.

Scenarios

Upgrading container from "Wheezy" to "Jessie"

When upgrading an lxc guest running "Wheezy" to "Jessie", the lxc VM will stop working, because at the time of writing (2014-11-23) systems are automatically migrated to systemd. See 766233. This behaviour is being reviewed in 762194.

Workarounds:

Switch back to sysv

If the VM was migrated to systemd automatically via an upgrade then you can switch back to sysvinit:

lxc-stop -n myvm               # stop the vm
                               # or, if that doesn't work use lxc-kill

# the next step requires the VM's filesystem to be mounted at /var/lib/lxc/myvm/rootfs

chroot /var/lib/lxc/myvm/rootfs  # chroot into the vm
apt-get install sysvinit-core    # reinstall old sysvinit

Alternatively you can try to start the container in the foreground and do the same via the container's console as described in section Debian 8 "Jessie"/testing.

Not letting your system be updated to systemd during the upgrade

Before upgrade, run:

apt-get install sysvinit-core

or combine it with the upgrade (note that apt-get dist-upgrade itself takes no package arguments):

apt-get install sysvinit-core && apt-get dist-upgrade

Reconfiguring updated VMs

Note that the following recipe only works on hosts running jessie. It will not work on hosts still running wheezy.

Add the following to your container config:

lxc.autodev = 1
lxc.kmsg = 0

Do the following in the guest.

Adjust getty@.service:

cp /lib/systemd/system/getty@.service /etc/systemd/system
# Comment out the line ConditionPathExists=/dev/tty0 in the copied getty@.service
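
The edit can be rehearsed on a fabricated minimal unit file first; only the ConditionPathExists line matters here, the rest is a stand-in for the real getty@.service:

```shell
# Comment out ConditionPathExists in a sample unit file (a stand-in for
# the copy placed in /etc/systemd/system on a real guest).
unit=./getty-demo.service
cat > "$unit" <<'EOF'
[Unit]
Description=Getty on %I
ConditionPathExists=/dev/tty0
EOF
sed -i 's/^ConditionPathExists=/#&/' "$unit"
grep Condition "$unit"    # #ConditionPathExists=/dev/tty0
```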

The udev service (which is a hard dependency of systemd in Jessie) won't run in a container, however the systemd config will detect that sysfs is mounted read-only and will automatically skip udev startup. Apparently this was not the case with earlier versions of systemd and this page used to advise using systemctl to mask the udev and systemd-udev services - this is no longer necessary and may cause problems later (see 812932).

Creating new "Jessie" VMs

Creating new Jessie containers should work without issue.

Support

References

See also :

Known bugs and issues to be aware of

  • 600466 - "Respawning too fast" messages and can't connect to console due to missing tty(1234) nodes in generated container rootfs. Workaround: remove the affected tty entries from the container's /etc/inittab, or start the container in interactive mode and run mknod -m 660 dev/tty1 c 5 1 for each required tty.

  • Some bugs that might apply to non-official containers - read the follow-ups for solutions.

  • "telinit: /run/initctl: No such file or directory" when running lxc-halt. Workaround: create the missing FIFO:

    mknod -m 600 /var/lib/lxc/myvm/rootfs/run/initctl p

    and add "sys_admin" to the lxc.cap.drop line in /var/lib/lxc/myvm/config. See http://wiki.deimos.fr/LXC_:_Install_and_configure_the_Linux_Containers#telinit:_.2Frun.2Finitctl:_No_such_file_or_directory

  • 761197 - "systemd-journald eats CPU in lxc jessie container"
    As noted in the bug report, setting "lxc.kmsg = 0" in /var/lib/lxc/myvm/config and removing "/dev/kmsg" inside the container seems to fix the problem.

  • If you encounter "Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted", you might consider this advice. The accuracy of the proposed configuration settings has not been verified here, with respect to security or to other side effects, so please use your own judgement.
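
The initctl workaround listed above can be tried out on a throwaway path first: the fix simply creates a named pipe (FIFO), which mknod can do without special privileges. The real target is /var/lib/lxc/myvm/rootfs/run/initctl:

```shell
# Create a FIFO the same way the workaround does, on a scratch path.
rm -f ./initctl-demo
mknod -m 600 ./initctl-demo p
# -p is true only for named pipes
[ -p ./initctl-demo ] && echo "FIFO created"
```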

See also