Xen on Debian

This is the first page of a series, which may or may not be read sequentially.

It's a WIP: the missing pages are partially written, but in plain text, so they still need to be imported and converted to MoinMoin markup.
There are a few [zit:comments] left (hello, me zithro). They usually indicate "need more info", because I don't know the answer (or felt lazy?). Fill them in if you know! Or drop a line on #debian-xen or on the salsa doc issue.


Xen overview

[zit: Disclaimer, the Xen overview paragraph was copy/pasted from the old wiki page, I just made some edits. Remove this line if you think it's useless, just wanted to give credit where it's due]

Xen is an open-source (GPL) type-1 (bare-metal) hypervisor, which makes it possible to run many instances of an operating system, or indeed different operating systems, in parallel on a single machine (the host).

See the Xen Overview on the Xen wiki for Xen's key features and more information.

Xen and Domain 0

Xen is the hypervisor, and runs directly on the hardware. It's responsible for managing CPU, memory, timers and interrupts, and is the first program running after the bootloader exits. A special domain, called Domain-0 (dom0 for short), contains drivers for the hardware and the toolstack to control the VMs, also called domUs. Think of dom0 as the equivalent of the "control domain" in the KVM world. Recent evolutions of Xen allow running the hypervisor without a dom0; the feature is called "hyperlaunch" (previously "dom0less").

Guest types

Xen supports running two different types of guests: paravirtualized (PV/PVH) and fully, hardware-assisted, virtualized (HVM). Both guest types can be used at the same time on a single Xen system. It is also possible to use paravirtualization techniques in an HVM guest, essentially creating a continuum between PV and HVM.

Glossary

Xen uses some particular terminology that is useful to know before digging deeper.
Some generic terms, related to virtualization or not, are also described.
[zit: add to a "hover=give definition" thingy, if exists ?]

[zit: find another way to display names, they don't catch up the eye enough ?]

Debian, when referring to a host/machine, means the Debian OS installation existing before installing Xen.
Xen means the Xen hypervisor.
*nixes represents all UNIX-like OSes, like the ones based on Linux, BSD, illumos, etc.

domain usually refers to a domU, but may sometimes also include dom0 (all domains != all guest domains).
<Domain>, used in Xen commands, represents either a domain name or id. There are also <Domain_name> and <Domain_id>.

dom0 (or dom0/Debian) refers to a Debian install once it becomes a dom0, after the installation of the Xen hypervisor.

domU, VM (virtual machine), and guest (domain) are interchangeable: they all refer to an OS run virtualized by Xen.

PV, ParaVirtualization, is the historical virtualization mode of Xen, a bit over 20 years old. There is no emulated hardware and no QEMU; guest kernels (OSes) need to be adapted to run in this mode.
HVM, for Hardware Virtual Machine, is a fully virtualized guest. It was created after AMD and Intel developed their hardware virtualization extensions (SVM/VT-x). It needs QEMU to emulate a platform and some devices, and can virtualize unmodified guests (ie. not pre-configured to be run virtualized, like Windows).
PVH is a newer mode, called the best of both worlds (HVM and PV): roughly, HVM without QEMU, so without any device emulation. Like PV, it needs a modified (PVH-aware) OS.

PVHVM or PV-on-HVM is not a mode per se; it's sometimes used to refer to HVM guests using PV drivers for performance.
PV/H is only used in this wiki, and means "PV and PVH", as they share some common configurations.

NIC is a network card (Network Interface Controller).
PCI-PT means PCI PassThrough: assigning a PCI device to a VM for its exclusive use.
USB-PT means USB PassThrough: assigning a USB device to a VM for its exclusive use.
SR-IOV (Single-Root I/O Virtualization) is a relatively recent technique to split a single physical device into several logical ones (NICs and storage adapters for now).

VIF means Virtual InterFace, a virtual NIC.
VBD means Virtual Block Device, a virtual disk.
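For illustration, here is how a VIF and a VBD are typically declared in an xl domain config (all values here are made-up examples):

```
vif  = [ 'bridge=xenbr0,mac=00:16:3e:aa:bb:cc' ]
disk = [ 'format=raw,vdev=xvda,access=rw,target=/vm/disk.img' ]
```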

device model refers to QEMU when it is used; depending on the context it can mean the QEMU binary, the per-guest QEMU process, or the emulated platform it provides.
virtio is a set of specs providing generic virtual devices and their drivers for virtualized guests. Some Xen devices may be replaced by virtio ones. [zit: advantages/drawbacks ?]

Prerequisites / What you need to get started

[ Note: as of this writing, only the amd64 arch is covered, as it's the only one I can test Xen on.
But except for booting, devices and a few other details, the help here should apply to other archs too (ARM/RISC-V/POWER/...):
the Debian packages are similar, so you install and configure Xen the same way.
Contributions welcome!

This wiki does not (for now?) contain information about using libvirt's "virt-manager" GUI application; most of the information here is about using Xen manually.
Nothing prevents you from reading this guide to understand how Xen works, and still use "virt-manager"! ]

Xen works on servers, desktops, laptops and SoCs, on x86, ARM, RISC-V and PPC.
Paravirtualization (PV) should work on most platforms, provided Xen boots. For HVM and PVH, you need to check your platform specs.

x86/amd64

To check if hardware virtualization (needed for HVM) is available, run
grep -E "svm|vmx" /proc/cpuinfo
Note that this produces no output on a dom0 (ie. under Xen), as the flags are hidden from dom0. If Xen is already installed, boot without the hypervisor.
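If you want to script this check, a tiny helper like the following can classify the flags line (the function name is made up for this example; it only does text matching, so it behaves the same on real /proc/cpuinfo output or on a sample string):

```shell
#!/bin/bash
# hv_support: print whether a cpuinfo "flags" line indicates
# hardware virtualization support (svm = AMD-V, vmx = Intel VT-x).
hv_support() {
    if printf '%s\n' "$1" | grep -qwE 'svm|vmx'; then
        echo "HVM capable"
    else
        echo "PV only"
    fi
}

# Check the first "flags" line of the running machine:
hv_support "$(grep -m1 '^flags' /proc/cpuinfo)"
```

Remember the caveat above: run in a dom0, this will report "PV only" even on HVM-capable hardware, since the flags are hidden.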

You can get your IOMMU groups with :
ls -l /sys/kernel/iommu_groups
Or use this script :

#!/bin/bash
# List each IOMMU group with the PCI devices it contains.
shopt -s nullglob
echo "IOMMU groups:"
for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        printf '\t%s\n' "$(lspci -nns "${d##*/}")"
    done
done

I'm not sure whether Xen "ignores" IOMMU groups, but I know you can't see them from a dom0.
PS: if you want to see them, same advice as above: boot without Xen.

ARM

[zit: ?! A device-tree ? A compatible device ? Find Xen links at least (wiki, ML, etc)]

Other archs

[zit: find Xen links at least (wiki, ML, etc). There are PPC and Risc-V ports, Deb can run those archs]

About the Debian Xen packages

You can find a few more details about the Debian Xen, QEMU and GRUB packages on a salsa wiki page.

The Debian packages have a big advantage over upstream Xen: you can update the Xen packages in dom0 while keeping your domUs running, and if the new version does not work, simply switch back to the old one!
The update procedure keeps the current Xen hypervisor and tools, and installs the new ones alongside, so after an update you will have two versions of each.
Conceptually, it works like Debian kernel updates: the previous kernel and its boot entries are kept, so you can still boot your old one(s).

Being able to switch between previous/current Xen and previous/current kernels allows for safe (re)boots when the respective new versions have a problem.
The downside to this goodness is that, for now, there are no systemd units provided, only SysV init scripts.
A good explanation can be found in those (merged) bugs: 1039422 and 1029998; check the part where Maxi quotes Knorrie. For even more in-depth info, the original bug is 1028251.

Once you're fully satisfied with the new Xen version, you can remove the old packages.
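For example, you can list the co-installed versions with dpkg -l and purge the old one (the 4.x version numbers below are placeholders, not a recommendation; check what is actually installed on your system). The small helper sketched here, with a hypothetical name, picks the older of two version strings:

```shell
#!/bin/bash
# List the co-installed Xen packages (names vary with the Xen version):
#   dpkg -l 'xen-hypervisor-*' 'xen-utils-*'
# Then purge the version you no longer need, e.g. (placeholder version):
#   apt purge xen-hypervisor-4.17-amd64 xen-utils-4.17

# older_version: hypothetical helper printing the older of two
# version strings, using GNU coreutils "version sort".
older_version() {
    printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1
}

older_version 4.14 4.17   # prints 4.14
```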

Compile options

It's sometimes useful to know with which options the binaries were built:
Xen: https://salsa.debian.org/xen-team/debian-xen/-/blob/master/debian/rules?ref_type=heads#L197
QEMU for Xen: https://salsa.debian.org/qemu-team/qemu/-/blob/master/debian/rules?ref_type=heads#L333
Kernel: run cat /boot/config-$(uname -r) on a dom0/domU. [zit: add a salsa link ?]
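For the kernel side, you can grep the packaged config for Xen-related options. A small wrapper (hypothetical name) makes this reusable against any config file:

```shell
#!/bin/bash
# xen_kconfig: list the CONFIG_XEN* options set in a kernel config file.
# (hypothetical helper name, for illustration)
xen_kconfig() {
    grep -E '^CONFIG_XEN' "$1"
}

# On a running dom0/domU, inspect the packaged kernel config:
cfg="/boot/config-$(uname -r)"
if [ -r "$cfg" ]; then
    xen_kconfig "$cfg"
fi
```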

[zit: avoid duplicating info here<->salsa]

Found a bug ?

Please first submit a bug on the Debian BTS before going upstream, or ask on #debian-xen (see Xen/Help).

Use cases

Xen is a hypervisor usually associated with "server/cloud" usage, or "embedded" deployments on SoCs (automotive, etc.), but it is also used by enthusiasts on consumer hardware for "user/desktop" installations.

The "server" use case consists of a very lightweight dom0 with minimal services, only used to manage Xen, the hardware resources and the domUs.
The "embedded" use case looks quite similar (to me), except on systems like the Raspberry Pi where you might want to use the GUI in dom0.

The "user" use case is like the above, but dom0 has a GUI, with X and a Desktop Environment and/or Window Manager, so you control dom0 locally and graphically.
This is like running "desktop hypervisors" (e.g. VirtualBox): you use Debian as usual, but in addition you can run domUs with advanced features.
Qubes OS is the main example of this use case, and uses Xen as its hypervisor (but Fedora as dom0).

Some trendy phrases about running a hypervisor are "host consolidation", "PCI passthrough", "running devices in VMs with near-native performance" or even "virtualized gaming".
If you look up terms like "vfio" or "gaming on Linux", you may end up with setups like the "user" case (except that few people talk about Xen, most use KVM. But the war isn't over!).

By default, dom0 has control of (almost) all PCI(e) devices. PCI passthrough is a way to give that control to a VM.
It is often associated with VGA/GPU passthrough in the "user setups" communities, but it's also useful for HBAs, NICs, USB controllers, audio cards, etc.
For example, passing through an HBA to a storage server, or a NIC to a firewall.

You can also consolidate your network into one system for many benefits, like savings on the electricity bill, and... silence!
Some domUs will be your servers (firewall, NAS, network services, ...), others will be "user" hosts for usual stuff (browsing, development, multimedia, etc.). You would use dom0 to access the domUs (via command line or GUI) and, if you want, like any regular Debian install.
This is less secure than running Qubes, but the risks may be acceptable for consumer uses.
Or use one dom0 in "server mode" for your network services and another in "user mode" for your user needs. Or one dom0 for private stuff, and one for work.

In conclusion, there may be as many use cases as users!

Quickstart

This section is a TL;DR to install Xen and test basic domain start/stop.
If you have problems, or want a lighter or customized installation (no GUI stuff, etc.), check the corresponding sections of this wiki, which are more complete.

To try Xen quickly:

 0. install Debian, whatever the flavor (on existing installs, take care of other hypervisors)
 1. install Xen
 2. reboot into the hypervisor+dom0
 3. configure and create (start) dummy domUs

Install the hypervisor

apt install xen-hypervisor
reboot

In GRUB, select "Debian GNU/Linux, with Xen hypervisor" (it should be preselected). Once logged in to dom0, get information about Xen:

xl info
xl info | grep virt_caps
       virt_caps              : pv hvm hvm_directio pv_directio hap shadow

Create simple domUs

Let's test the three guest types provided by Xen: HVM, PV and PVH.
VNC is optional, but useful to see what's happening.
Example files are provided below, although you can find equivalents in /etc/xen.

HVM

Official example: /etc/xen/xlexample.hvm

# cat /vm/hvm-domu.cfg
name = "hvm-domu"
type = "hvm"
vcpus = 1
memory = 512
vif = [ 'bridge=xenbr0,mac=00:16:3e:de:b1:a0,vifname=debhvm' ]
boot = 'cdn'
#vnc = 1
#vnclisten = "127.0.0.1:1"

PV

Official example: /etc/xen/xlexample.pvlinux

# cat /vm/pv-domu.cfg
name = "pv-domu"
type = "pv"
vcpus = 1
memory = 512
vif = [ 'bridge=xenbr0,mac=00:16:3e:de:b1:a1,vifname=debpv,type=vif']
#vfb = [ "vnc=1,vnclisten=127.0.0.1:2" ]

kernel = "/boot/vmlinuz-6.1.0-18-amd64"
cmdline = "root=/dev/xvda1"
# OR one of
bootloader = "/usr/bin/pvgrub32"
bootloader = "/usr/bin/pvgrub64"
# do not use anymore: bootloader = "/usr/bin/pygrub"

PVH

Official example: /etc/xen/xlexample.pvhlinux

# cat /vm/pvh-domu.cfg
name = "pvh-domu"
type = "pvh"
vcpus = 1
memory = 512
vif = [ 'bridge=xenbr0,mac=00:16:3e:de:b1:a2,vifname=debpvh,type=vif' ]
#vfb = [ "vnc=1,vnclisten=127.0.0.1:3" ]

kernel = "/usr/lib/grub-xen/grub-i386-xen_pvh.bin"
# OR
kernel = "/boot/vmlinuz-6.1.0-18-amd64"
cmdline = "root=/dev/xvda1"

Let's run them now

# start the domU
xl create /path/to/config-file.cfg

# check if it's running, watch the resource usage
xl list
xl top

# shut them down (forcefully, as no OS is installed yet)
# <Domain> means domain id or name
xl destroy <Domain>
xl destroy hvm-domu
xl destroy 1

If everything works, perfect: you can now configure Xen, dom0 and your domUs in more depth!


CategoryVirtualization CategoryXen