systemd-nspawn may be used to run a command or OS in a light-weight container. In many ways it is similar to chroot, but more powerful since it uses namespaces to fully virtualise the process tree, IPC, hostname, domain name and, optionally, networking and user databases.
It is similar to LXC, but much simpler to configure. Most of the necessary software is already installed on contemporary Debian systems.
The host (i.e. the system hosting one or more containers) needs to have the systemd-container package installed.
# apt install systemd-container
To allow unprivileged users to create user namespaces, enable the corresponding sysctl:

# echo 'kernel.unprivileged_userns_clone=1' >/etc/sysctl.d/nspawn.conf
# systemctl restart systemd-sysctl.service
Creating a Debian Container
# debootstrap --include=systemd,dbus stable /var/lib/machines/debian
I: Target architecture can be executed
I: Retrieving InRelease
I: Checking Release signature
...
You will probably want to ensure that root can log into the container:
$ systemd-nspawn -D /var/lib/machines/debian -U --machine debian
Spawning container debian on /var/lib/machines/debian.
Press ^] three times within 1s to kill container.
Selected user namespace base 818610176 and range 65536.

# set root password
root@debian:~# passwd
New password:
Retype new password:
passwd: password updated successfully

# allow login via local tty
# may not be needed any more
# 'pts/0' seems to be used when doing systemd-nspawn --boot, 'pts/1' with machinectl login
root@debian:~# printf 'pts/0\npts/1\n' >> /etc/securetty

# logout from container
root@debian:~# logout
Container debian exited successfully.
Booting a Container
Once it has been created, a container can be booted using the systemd-nspawn@.service template unit, machinectl, or the systemd-nspawn command:
# The part after the @ must match the container name used in the previous step
$ systemctl start systemd-nspawn@debian
# or
$ machinectl start debian
# or
$ systemd-nspawn --boot -U -D /var/lib/machines/debian
These must be run as root. The file /etc/systemd/... can be used to set options for the container; see the systemd-nspawn man page for more information.
Once booted, you can use machinectl shell to open an additional shell in the container.
Checking the status of containers
To check the state of containers, use one of the following commands:
$ machinectl list
MACHINE CLASS     SERVICE        OS     VERSION ADDRESSES
debian  container systemd-nspawn debian 10      -
# or
$ systemctl status systemd-nspawn@debian
● systemd-nspawn@debian.service - Container debian
   Loaded: loaded (/lib/systemd/system/systemd-nspawn@.service; disabled; vendor preset: enabled)
   Active: active (running) since ...
Logging into a Container
To login to a running container:
$ machinectl login debian
Connected to machine debian. Press ^] three times within 1s to exit session.

Debian GNU/Linux 10 debian pts/0

debian login:
Stopping a Container
To stop a running container from the host, do:
$ systemctl stop systemd-nspawn@debian
Alternatively, you can run, e.g., halt inside the container itself:
$ machinectl login debian
Connected to machine debian. Press ^] three times within 1s to exit session.

Debian GNU/Linux 10 debian pts/0

debian login: root
Password: <something>
Last login: Wed Jan 22 21:53:00 CET 2020 on pts/1
Linux debian 5.4.0-3-amd64 #1 SMP Debian 5.4.13-1 (2020-01-19) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
root@debian:~# halt
...
Machine debian terminated
or by pressing control and ] three times.
Networking

There are many options for creating or using networks between the host, container and other systems. As with any network link, you need to configure:

 * the network interface
 * an IP address at each end
 * firewall rules
 * DNS
By default, the container shares the network namespace of the host. This means that, subject to the firewall on the host, it can access the internet; packets from the container originate directly from the host's own network stack. This needs no extra configuration, but gives the least separation between container and host.
If the --network-veth option is given (it implies --private-network), the host communicates with the container using a virtual interface named ve-<container_name>@if<X>, while the guest uses a virtual interface named host0@if<Y>:
(on host) $ ip a show dev ve-debian
77: ve-debian@if2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ... brd ff:ff:ff:ff:ff:ff link-netnsid 1
Enable and start systemd-networkd.service on the host and in the container to automatically provision the virtual link via DHCP and allow the container to access the host's network.
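For example (assuming the container is named debian, as above):

```
(on host)      # systemctl enable --now systemd-networkd.service
(in container) # systemctl enable --now systemd-networkd.service
```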
Alternatively the interfaces can be configured manually - you will need to configure IP forwarding and masquerading.
A port in the container can be made reachable from outside using the Port=tcp:hostport:containerport option in the .nspawn [Network] section. The port will then be reachable from outside the host but not via the host's own 127.0.0.1.
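For example, to expose a service listening on port 80 inside the container as port 8080 on the host (both port numbers here are only illustrative), the container's .nspawn file could contain:

```
[Network]
Port=tcp:8080:80
```

Note that Port= only takes effect when private networking is used.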
To configure the DNS server used by the container, you may want to enable and start systemd-resolved on both the host and the container.
Using host networking
You can disable private networking and make the container use host networking instead by adding the following lines to /etc/systemd/nspawn/container-name.nspawn:
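A minimal sketch, using the VirtualEthernet= setting documented in systemd.nspawn(5):

```
[Network]
VirtualEthernet=no
```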
Replace 'container-name' with a name of your container.
See the systemd.nspawn(5) man page for more information.
Using programs with Xorg
If you want to allow applications inside your container to use your host's X server and session, you need to specify the DISPLAY environment variable inside the container. A good way to do so interactively is using the -E option:
$ systemd-nspawn -E DISPLAY="$DISPLAY" ...
However, the container now knows about the display but does not have permission to access it. One possible way to allow access to your X server is using xhost. Note that you will often find xhost + in tutorials on the web. Do not use this command: it disables access control entirely, so that potentially anybody anywhere can connect to your X server. To revert it, use xhost -.
If you use a single-user machine, you may want to use the following variant which allows any connection from localhost only:
$ xhost +local:
non-network local connections being added to access control list
It's possible to pass the needed configuration through into the container. See the Arch Linux wiki for one option.
Running Firefox in a Container

It is possible to run Firefox within a Debian container, with the graphical output sent to an Xorg server running on the host. Compromising the rest of the computer would then require both an exploit in Firefox and one in Xorg or the Linux container system. This example should work with any Linux distro, though, if not running Debian, it is strongly advised that Debian's OpenPGP keyring is imported first.
Create the container, in the traditional container location (/var/lib/machines), by simply running:
# debootstrap --force-check-gpg --include=systemd-container stable /var/lib/machines/deb-firefox/ https://deb.debian.org/debian
Note: you may prefer to change 'stable' to a specific release name, if you are looking for a specific package.
Once finished, install the only package needed on the host (the Xorg server should already be set up on the host):
# apt-get install systemd-container
Execute a shell in the container:
# systemd-nspawn --private-users=pick --private-users-chown -D /var/lib/machines/deb-firefox/
Note: the private-users options will automatically adjust all the users in the container to high UIDs & GIDs, including the container's root user.
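The mapping is a simple fixed offset: host UID = namespace base + container UID. A quick sketch of the arithmetic (the base value below is only an example; systemd-nspawn picks one per container and prints it at startup):

```shell
#!/bin/sh
# Example namespace base, as printed by systemd-nspawn at startup
base=818610176

# Container root (UID 0) appears on the host as the base itself
echo $((base + 0))      # 818610176

# An unprivileged container user (UID 1000) appears as base + 1000
echo $((base + 1000))   # 818611176
```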
Unfortunately there are many bugs in modern browsers and, for minimalism, debootstrap does not include the security repository by default. So, using the shell within the container, we need to add it, install security updates, install Firefox minimally, and end the container session:
(container's root) # echo 'deb https://deb.debian.org/debian-security/ stable-security main' >> /etc/apt/sources.list
(container's root) # apt-get update && apt-get dist-upgrade -y
(container's root) # apt-get install --no-install-recommends -y firefox-esr
(container's root) # exit
Note: --no-install-recommends will prevent the install of things that are not necessary in the container, e.g. graphics drivers, Xorg server, etc. and so halves the download size.
On the host, allow local connections to the Xorg server:

$ xhost +local:
Note: this will not persist across reboots, and will allow any local user on the host to access anything on the Xorg server, until you run xhost -local: later.
We are now ready to run Firefox. Execute the following script as root/sudo. This will start the container and run Firefox as the main process (as opposed to a shell, as we did previously):
#!/bin/sh
systemd-nspawn --setenv=DISPLAY=:0 \
    --bind-ro=/tmp/.X11-unix/ \
    --private-users=pick \
    --private-users-chown \
    -D /var/lib/machines/deb-firefox/ \
    --as-pid2 firefox-esr
Notice that an Xorg-server directory is being shared from the host to the container via bind mount. Notice also that we are assuming the host's DISPLAY is :0; this number increments each time Xorg is killed and then restarted, e.g. with startx (e.g. to DISPLAY=:1).
After a little loading, Firefox should now appear in your Xorg session. You may prefer to close your Xorg server to new connections now with xhost -local:, or view current connections with xlsclients.
From the host, verify that Firefox is running as a high UID user:
# ps -eo user,pid,command | grep firefox
systemd-nspawn [...] --as-pid2 firefox-esr
Notice the high UID in the 'USER' column.
One drawback to this setup is that updates to Firefox will have to be done manually, by executing a shell (as done previously), and running apt-get update && apt-get -y dist-upgrade. This is because the way we are running the container is very minimal, to minimize performance loss, but this means there is nothing else running in the container that can check and install updates (e.g. via crontab).
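If you prefer, the upgrade can be run as a one-shot command instead of an interactive shell (same container path and options as used above):

```
# systemd-nspawn --private-users=pick --private-users-chown \
    -D /var/lib/machines/deb-firefox/ \
    /bin/sh -c 'apt-get update && apt-get -y dist-upgrade'
```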
Persistence within stateless containers
If you would like the container to lose changes on exit, realistically the best user experience is to copy from a freshly installed container (e.g. with rm and cp) before running it, even though there are systemd-nspawn options such as --volatile (incompatible with --private-users-chown) and --ephemeral (UID & GID issues when trying to maintain state). To avoid losing your Firefox profile, you will have to store it elsewhere on the host.
Make a directory for storing your Firefox profile, find the automatically assigned high GID/UID, and adjust the owner & group of the directory to match the root in the container:
# mkdir /var/lib/machines/firefox-profile/
# ls -lhd /var/lib/machines/deb-firefox/root/
# chown your_high_uid_here:your_high_gid_here /var/lib/machines/firefox-profile/
We can then bind mount this directory into the container, so that it can store data across container erasures:
#!/bin/sh
## if [ $1 -eq 1 ]
## then
##     rm -r /var/lib/machines/deb-firefox/
##     cp -rp /var/lib/machines/deb-firefox-new/ /var/lib/machines/deb-firefox/
## fi
systemd-nspawn --setenv=DISPLAY=:0 \
    --bind-ro=/tmp/.X11-unix/ \
    --private-users=pick \
    --private-users-chown \
    -D /var/lib/machines/deb-firefox/ \
    --bind=/var/lib/machines/firefox-profile/:/root/.mozilla/ \
    --as-pid2 firefox-esr
Note that statelessness will only occur if the ## lines are uncommented and the script is executed with an argument of 1; you could invert -eq to -ne to reverse this behaviour.
Now, the container can be non-persistent across start/stops, but the Firefox profile can retain persistence.
This setup was inspired by OpenBSD's Firefox defaults. Full example last tested: 2022-09-28
Using PulseAudio

PulseAudio will not work out of the box. If you need it, make sure that the necessary libraries are installed in the container (e.g. with apt install pulseaudio).
You probably want the container to use the host's PulseAudio server; to do so, you need to find out the host's PulseAudio UNIX socket. Note: there's an article in the Gentoo wiki on how to allow multiple users to use one PulseAudio server at the same time.
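One way to find the socket on the host is to query the running server (this assumes the pactl utility from the pulseaudio-utils package is installed); the socket is typically /run/user/<uid>/pulse/native:

```
$ pactl info | grep 'Server String'
```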
When you start the container, you need to bind the host socket in the file system of the guest and pass an environment variable PULSE_SERVER that defines where the socket is in the guest. Example:
$ systemd-nspawn -E PULSE_SERVER="unix:/pulse-guest.socket" --bind=/pulse-host.socket:/pulse-guest.socket ...