
systemd-nspawn

About systemd-nspawn

systemd-nspawn may be used to run a command or OS in a light-weight namespace container. In many ways it is similar to chroot, but more powerful since it fully virtualizes the file system hierarchy, as well as the process tree, the various IPC subsystems and the host and domain name.

This mechanism is also similar to LXC, but is much simpler to configure and most of the necessary software is already installed on contemporary Debian systems.

Host Preparation

The host (i.e. the system hosting one or more containers) needs to have the systemd-container package installed.

$ apt-get install systemd-container

The host should also have unprivileged user namespaces enabled (see the documentation for an explanation of why; note that some consider this a security risk):

$ echo 'kernel.unprivileged_userns_clone=1' >/etc/sysctl.d/nspawn.conf
$ systemctl restart systemd-sysctl.service
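
You can verify that the setting took effect:

```
$ sysctl kernel.unprivileged_userns_clone
kernel.unprivileged_userns_clone = 1
```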

Creating a Debian Container

Each guest OS should also have the systemd-container package installed. A suitable guest OS installation may be created using the debootstrap or cdebootstrap tools. For example, to create a new guest OS called debian:

$ debootstrap --include=systemd-container stable /var/lib/machines/debian
I: Target architecture can be executed
I: Retrieving InRelease
I: Checking Release signature
...

After debootstrap finishes, it is necessary to log in to the newly created container and make some changes to allow root logins:

$ systemd-nspawn -D /var/lib/machines/debian -U --machine debian
Spawning container buster on /var/lib/machines/debian.
Press ^] three times within 1s to kill container.
Selected user namespace base 818610176 and range 65536.

# set root password
root@debian:~# passwd
New password:
Retype new password:
passwd: password updated successfully

# allow login via local tty
root@debian:~# echo 'pts/1' >> /etc/securetty  # May need to set 'pts/0' instead

# logout from container
root@debian:~# logout
Container debian exited successfully.

Booting a Container

Once it has been setup, it is possible to boot a container using an instantiated systemd.service:

# The part after the @ must match the container name used in the previous step
$ systemctl start systemd-nspawn@debian
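
To also start the container automatically at boot, the instantiated unit can be enabled; machinectl provides equivalent shorthands:

```
$ systemctl enable --now systemd-nspawn@debian

# Equivalent machinectl commands
$ machinectl start debian
$ machinectl enable debian
```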

Checking Container State

To check the state of containers, use one of the following commands:

$ machinectl list
MACHINE CLASS     SERVICE        OS     VERSION ADDRESSES
debian container systemd-nspawn debian 10      -

# or
$ systemctl status systemd-nspawn@debian
● systemd-nspawn@debian.service - Container debian
   Loaded: loaded (/lib/systemd/system/systemd-nspawn@.service; disabled; vendor preset: enabled)
   Active: active (running) since ...

Logging into a Container

To log in to a running container:

$ machinectl login debian
Connected to machine debian. Press ^] three times within 1s to exit session.

Debian GNU/Linux 10 debian pts/0

debian login:

Stopping a Container

To stop a running container from the host, do:

$ systemctl stop systemd-nspawn@debian

Alternatively, you can stop the container from within the guest OS by running e.g. halt:

$ machinectl login debian
Connected to machine debian. Press ^] three times within 1s to exit session.

Debian GNU/Linux 10 debian pts/0

debian login: root
Password: <something>
Last login: Wed Jan 22 21:53:00 CET 2020 on pts/1
Linux debian 5.4.0-3-amd64 #1 SMP Debian 5.4.13-1 (2020-01-19) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
root@debian:~# halt
...
Machine debian terminated

Networking

The host communicates with the guest container using a virtual interface named ve-<container_name>@if<X> while the guest uses a virtual interface named host@if<Y> for the same purposes:

$ ip a show dev ve-debian
77: ve-debian@if2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ... brd ff:ff:ff:ff:ff:ff link-netnsid 1

Enable and start systemd-networkd.service on the host and in the container to automatically provision the virtual link via DHCP, with routing onto the host's external network interfaces.

Alternatively the interfaces can be configured manually, e.g. to setup IP forwarding, masquerading, etc.
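
As a sketch of the manual approach with systemd-networkd, static addresses can be assigned on both ends of the virtual link. The file names and the 192.168.100.0/24 subnet below are illustrative assumptions, not defaults:

```
# Host side: /etc/systemd/network/ve-debian.network (hypothetical name)
[Match]
Name=ve-debian

[Network]
Address=192.168.100.1/24
IPMasquerade=yes

# Guest side: /etc/systemd/network/host0.network (hypothetical name);
# inside the container the virtual interface is named host0
[Match]
Name=host0

[Network]
Address=192.168.100.2/24
Gateway=192.168.100.1
```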

Using host networking

You can disable private networking and make the nspawn container use host networking instead by adding the following lines to /etc/systemd/nspawn/container-name.nspawn:

[Network]
VirtualEthernet=no

Replace 'container-name' with the name of your container.

For more information, see the Arch Linux wiki: https://wiki.archlinux.org/title/Systemd-nspawn#Use_host_networking

Using programs with Xorg

The container does not have any knowledge of your host's X server at first. If you want to run applications inside your container that should be able to use your host's X server and session, you need to specify the DISPLAY environment variable. A good way to do so interactively is using the -E option:

$ systemd-nspawn -E DISPLAY="$DISPLAY" ...

However, while the container now knows about the display, it does not have any privileges to access it. One possible way to allow access to your X server is xhost. Note that you will often find xhost + in tutorials on the web. Do not use this command: it disables access control entirely, so that potentially anybody anywhere can connect to your X server. To revert it, use xhost -.

If you use a single-user machine, you may want to use the following variant which allows any connection from localhost only (non-network):

$ xhost +local:
non-network local connections being added to access control list

It is possible to pass through the configuration needed in the container. See the Arch Linux wiki for one option.
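
For the common case of sharing the host's X server, one possibility (assuming a default Xorg setup with its sockets under /tmp/.X11-unix) is to bind that directory read-only into the container in addition to setting DISPLAY:

```
$ systemd-nspawn -E DISPLAY="$DISPLAY" --bind-ro=/tmp/.X11-unix ...
```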

PulseAudio Tweaks

If you need PulseAudio, note that it will not work in the container out of the box. Make sure that you have the necessary libraries installed (e.g. via apt install pulseaudio).

You probably want the container to use the host's PulseAudio server. Find out the path of the PulseAudio UNIX socket. Note: there is an article in the Gentoo wiki on how to allow multiple users to use one PulseAudio server at the same time.

When you start the container, you need to bind the host socket into the guest's file system and pass an environment variable PULSE_SERVER that tells clients where the socket is inside the guest. Example:

$ systemd-nspawn -E PULSE_SERVER="unix:/pulse-guest.socket" --bind=/pulse-host.socket:/pulse-guest.socket ...
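
As a concrete sketch, on a typical single-user host the per-user PulseAudio socket lives at /run/user/1000/pulse/native (an assumption; check the Server String reported by pactl info on the host):

```
$ systemd-nspawn -E PULSE_SERVER="unix:/run/pulse/native" --bind=/run/user/1000/pulse/native:/run/pulse/native ...
```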


CategorySoftware | CategoryVirtualization | CategorySystemAdministration