Xen on Debian
This is the first page of a series, which should be read sequentially if you're discovering Xen.
But TL;DR: if you want a list of all pages related to Xen, click this.
UPDATE: some articles are NOT in the list below! At first I wanted the debian-Xen wiki to be read like a book, "from zero to hero" as the saying goes, with a progression (hello slackbook). But because "what you plan" and "what happens" are entities living on different planets, the wiki evolved differently!
The accurate, full list of articles related to Xen can be found on the Xen category page. (To wiki editors, as written in the comment of this very update: this landing page (wiki/Xen) should be rewritten, it's too messy at the moment. Ideas welcome, but please let's coordinate so we don't add mess to the mess, read below).
- Introduction and overview (this page)
- DomU configuration, installation and debugging
- Interacting with domUs
- PV drivers, USB and PCI passthrough
- Getting help
This page is a WIP; the missing pages are already written, but in plain text, so they need to be imported and converted to MoinMoin (help needed).
You may come across comments, recognizable as "[zit: some_text]" (hello, me zithro). They usually indicate "need more info", because I dunno. Or felt lazy? Please fill them in if you know!
Don't be shy (I am): if you need anything, drop a line on #debian-xen (IRC), on the salsa doc issue, or contact me directly via email (to build my address, use "slack" as the username and "rabbit.lu" as the domain).
Xen overview
[zit: Disclaimer, the Xen overview paragraph was copy/pasted from the old wiki page, I just made some edits. Remove this line if you think it's useless, just wanted to give credit where it's due]
Xen is an open-source (GPL) type-1, or bare-metal, hypervisor, which makes it possible to run many instances of an operating system, or indeed different operating systems, in parallel on a single machine (the host).
Some of Xen's key features are:
- Small footprint and interface (around 1 MB in size). Because Xen uses a microkernel design, with a small memory footprint and a limited interface to the guest, it is more robust and secure than other hypervisors.
- Operating system agnostic: most installations run with Linux as the main control stack (aka "domain 0"). But a number of other operating systems can be used instead, including FreeBSD and NetBSD.
- Driver Isolation: Xen has the capability to allow the main device driver for a system to run inside a virtual machine (called a driver domain). If the driver crashes, or is compromised, the VM containing the driver can be rebooted and the driver restarted without affecting the rest of the system.
- Paravirtualization: Fully paravirtualized guests have been optimized to run as virtual machines. This allows guests to run much faster than with hardware extensions (HVM) [zit: still true ?]. Additionally, Xen can run on hardware that doesn't support virtualization extensions.
See the Xen Overview on the Xen wiki for more information.
Xen and Domain 0
Xen is the hypervisor, and runs directly on the hardware. It is responsible for handling CPU, memory, timers and interrupts, and it is the first program running after exiting the bootloader. A special domain, called Domain-0 (dom0 for short), contains the drivers for the hardware and the toolstack to control the other VMs, which are called domUs. Think of dom0 as the equivalent of the "control domain" in the KVM world. Recent evolutions of Xen allow running the hypervisor without a dom0; the feature is called "hyperlaunch" (previously "dom0-less").
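Once booted, dom0 itself shows up as a domain in the toolstack. A quick illustration (the output below is just an example, values depend on your machine):
xl list
Name                   ID   Mem VCPUs      State   Time(s)
Domain-0                0  4096     4     r-----      40.1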
Guest types
Xen supports running two different types of guests: paravirtualized (PV/PVH) and fully/hardware-assisted virtualized (HVM). Both guest types can be used at the same time on a single Xen system. It is also possible to use paravirtualization techniques in an HVM guest, essentially creating a continuum between PV and HVM.
Glossary
Xen uses some particular terminology that is useful to know before digging deeper.
Some generic terms, related to virtualization or not, are also described.
[zit: add to a "hover=give definition" thingy, if exists ?]
[zit: find another way to display names, they don't catch up the eye enough ?]
Debian, when referring to a host/machine, means the Debian OS installation existing before installing Xen.
Xen means the Xen hypervisor.
*nixes represents all UNIX-like OSes, like the ones based on Linux, BSD, illumos, etc.
domain usually refers to a domU, but may sometimes also include dom0 (all domains != all guest domains).
<Domain>, used in Xen commands, represents either a domain name or id. There are also <Domain_name> and <Domain_id>.
dom0, dom0/Debian represents a Debian install once it becomes a dom0, after the installation of the Xen hypervisor.
This is the host managing the hardware and the domUs' lifecycle.
domU, VM (virtual machine) and guest (domain) are interchangeable; they represent the same thing: an OS run virtualized by Xen.
PV, ParaVirtualization, is the fully paravirtualized mode of Xen. There is no emulated hardware, no QEMU (except in dom0?). Guest kernels (OSes) need to be adapted. It's the historical mode of Xen, a bit over 20 years old.
HVM, for Hardware Virtual Machine, is a fully virtualized guest. Created after AMD/Intel developed SVM/VT-x. Needs QEMU to emulate a platform and some devices. Can virtualize unmodified guests (ie. not adapted to be run virtualized, like Windows).
PVH is a newer mode, often called the best of both worlds, ie. of the HVM and PV modes. AFAIU, it's HVM without QEMU, so without any emulation. It also needs modified OSes.
PVHVM or PV-on-HVM is not a mode per se; it's sometimes used to refer to HVM guests using PV drivers for performance.
PV/H is only used in this wiki, and means "PV and PVH", as they share some common configurations.
NIC is a network card (Network Interface Controller).
PCI-PT means PCI PassThrough, ie. assigning a PCI device to a VM for its own use.
USB-PT means USB PassThrough, ie. assigning a USB device to a VM for its own use.
SR-IOV (Single-Root I/O Virtualization) is a relatively new technique to split a single physical device into several logical ones (NICs and storage adapters for now).
Simply put, you can "PCI-PT" a single device to several domUs with proper isolation.
This requires compatible hardware, mostly found on server-grade devices, not really affordable for peasants yet (except maybe Intel Xe GPUs?).
VIF means Virtual InterFace, a virtual NIC.
VBD means Virtual Block Device, a virtual disk.
device model is the component emulating devices when QEMU is used; what it designates exactly depends on the context (vague ...).
virtio is a set of specs providing generic virtual devices and their drivers for virtualized guests. Some Xen devices may be replaced by virtio ones. Advantages/drawbacks?
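To see VIFs and VBDs on a live system, two xl subcommands help (a minimal illustration; "my-domu" is a placeholder name):
# list the VIFs of a domain
xl network-list my-domu
# list the VBDs of a domain
xl block-list my-domu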
Prerequisites / What you need to get started
[ Note: as of this writing, this only considers the amd64 arch, as it's the only one I can test Xen on.
But except for booting, devices and a few other things, the help here should apply to other archs too (ARM/RISC-V/Power/...).
The Debian packages are similar, so you install and configure Xen the same way.
Contributions welcome!
This wiki (for now?) does not contain information about using libvirt's "virt-manager" GUI application; most of the information here is about using Xen manually.
Although nothing prevents you from reading this guide to understand how Xen works, and still use "virt-manager"! ]
Xen works on servers, desktops, laptops and SoCs, on x86, ARM, RISC-V and PPC.
Paravirtualization (PV) should work on most platforms, provided Xen boots. For HVM and PVH, you need to check your platform specs.
x86/amd64
- To use the hardware virtualization extensions, you need to activate SVM (AMD) or VT-x (Intel) in the BIOS/UEFI. Required for HVM (PVH ?).
To check if it's available, run: grep -E "svm|vmx" /proc/cpuinfo
Note that this produces no output on a dom0 (ie. under Xen). If Xen is already installed, boot without the hypervisor.
- For PCI passthrough, you need to activate the IOMMU in the BIOS/UEFI. You can get your IOMMU groups with:
ls -l /sys/kernel/iommu_groups
Or use a small script to list the devices in each group, sketched below. Although I'm not sure if Xen "ignores" IOMMU groups, I know you can't see them from a dom0. PS: if you want to see them, same advice as above, boot without Xen.
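A minimal sketch of such a script (assumes lspci from the pciutils package; as noted above, run it when booted without Xen):
#!/bin/sh
# print each IOMMU group and the PCI devices it contains
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        printf '  '
        lspci -nns "${d##*/}"
    done
done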
ARM
[zit: ?! A device-tree ? A compatible device ? Find Xen links at least (wiki, ML, etc)]
Other archs
[zit: find Xen links at least (wiki, ML, etc). There are PPC and Risc-V ports, Deb can run those archs]
About the Debian Xen packages
You can find a bit more detail about the Debian Xen, QEMU and GRUB packages on a salsa wiki page.
The Debian packages have a big advantage over upstream Xen: you can update dom0 while keeping your domUs running, and if the new version does not work, simply boot the old one!
The update procedure keeps the current Xen hypervisor and tools, and installs the new ones alongside, so after an update you will have two versions of each.
Conceptually, it works like Debian's kernel packaging, where the previous kernel and its boot entries are kept after a kernel update: you can still boot your old one(s).
Being able to switch between previous/current Xen and previous/current kernels allows for safe (re)boots when the respective new versions have a problem.
The downside to this goodness is that, for now, there are no systemd units provided, only sysV init scripts.
A good explanation can be found in those (merged) bugs: 1039422 and 1029998; check for the part where Maxi quotes Knorrie. For even more in-depth info, the original bug is 1028251.
Once you're fully satisfied with the new Xen version, you can remove the old packages.
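For instance (a sketch only; package names and the version are examples, check what's actually installed on your system):
# see which hypervisor versions are present
dpkg -l 'xen-hypervisor-*'
# purge an old version once the new one is proven good
apt purge xen-hypervisor-4.17-amd64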
Compile options
It's sometimes useful to know with what options the binaries were built:
Xen: https://salsa.debian.org/xen-team/debian-xen/-/blob/master/debian/rules?ref_type=heads#L197
QEMU for Xen: https://salsa.debian.org/qemu-team/qemu/-/blob/master/debian/rules?ref_type=heads#L333
Kernel -> salsa link? Meanwhile, you can run cat /boot/config-$(uname -r) on a dom0/domU.
[zit: avoid duplicating info here<->salsa]
Found a bug ?
One day, I hope to write a complete procedure !
Meanwhile, stick to this: please first submit a bug on the BTS (Debian's Bug Tracking System) before going upstream, or ask on #debian-xen (see Xen/Help [zit: add link when page created]).
Most of the time, the problem will come from upstream and not from our integration of Xen into Debian, BUT it's still important that fellow users are aware of the problem!
Once the bug is on the BTS, the debian-Xen team will tell you how it should be handled.
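The usual Debian way is reportbug (the package name below is only an example, pick the one actually shipping the faulty component):
reportbug xen-hypervisor-common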
[zit: if only "regular" bugs could be handled like XSAs ... But don't worry, Xen docs "are on their way"].
Use cases
Xen is a hypervisor usually associated with "server/cloud" usage or "embedded" deployments on SoCs (automotive, etc), but it is also used by enthusiasts on consumer hardware for "user/desktop" installations.
The "server" use case consists of a very lightweight dom0 with minimal services, only used to manage Xen, the hardware resources and the domUs.
The "embedded" use case looks quite similar (to me), except on systems like the RaspberryPi where you might want to use the GUI in dom0.
The "user" use case is like above but dom0 has a GUI, with X and a Desktop Environment and/or Window Manager, so you control dom0 locally, graphically.
This is like running "desktop hypervisors" (e.g. VirtualBox): you use Debian as usual, but in addition you can run domUs with advanced features.
Qubes OS is the main example of this use case, and it uses Xen as its hypervisor (but Fedora as dom0).
Some trendy words about running a hypervisor could be "host consolidation", "PCI passthrough", "running devices in VMs with near-native performance" or even "virtualized gaming".
If you look up words like "vfio" or "gaming on linux", you may end up with setups like the "user" case (except that few people talk about Xen, most use KVM. But the war isn't over!).
By default, dom0 has control of (almost) all PCI(e) devices. PCI passthrough is a way to give that control to a VM.
It is often related to VGA/GPU passthrough in the "user setups" communities, but it's also useful for HBAs, NICs, USB controllers, audio cards, etc.
For example, passing an HBA through to a storage server domU, or a NIC to a firewall domU.
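As a teaser, hot-plugging a PCI device with xl looks like this (a sketch only: the BDF address "08:00.0" and the domU name are examples, and the device must be made assignable first, see the passthrough page):
# make the device assignable, then hand it to a running domU
xl pci-assignable-add 08:00.0
xl pci-attach my-domu 08:00.0
# or assign it at creation time, in the domU config file:
# pci = [ '08:00.0' ]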
You can also consolidate your network into one system for many benefits, like savings on the electricity bill, and ... silence!
Some domUs will be your servers (firewall, NAS, network services, ...), some domUs will be "user" hosts for the usual stuff (browsing, dev, multimedia, etc). You would then use dom0 to access the domUs (via command line or GUI) and, if you want, use it like any regular Debian install.
This is less secure than running Qubes, but risks may be acceptable for consumer uses.
Or use one dom0 in "server mode" for your network services, and another in "user mode" for your user needs. Or one dom0 for private stuff, and another for work stuff.
In conclusion, there may be as many use cases as users!
Quickstart
This section is a TL;DR to install Xen and test basic domain start/stop.
If you have problems or want a light or customized installation (no GUI stuff, etc), check the corresponding sections in this wiki, which are more complete.
To try Xen quickly:
0. install Debian, whatever the flavor (on existing installs, take care of other hypervisors)
1. install Xen
2. reboot into the hypervisor+dom0
3. configure and create (start) dummy domUs
Install the hypervisor
apt install xen-hypervisor
reboot
In GRUB, select "Debian GNU/Linux, with Xen hypervisor" (it should be preselected). Once logged into dom0, get information about Xen:
xl info
xl info | grep virt_caps
virt_caps : pv hvm hvm_directio pv_directio hap shadow
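You can also peek at the hypervisor's own boot messages, which are distinct from the dom0 kernel's dmesg:
xl dmesg | head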
Create simple domUs
Let's test the three guest types provided by Xen: HVM, PV and PVH.
VNC is optional, but useful to see what's happening.
Example files are provided below, although you can find equivalents in /etc/xen.
HVM
Official example: /etc/xen/xlexample.hvm
# cat /vm/hvm-domu.cfg
name = "hvm-domu"
type = "hvm"
vcpus = 1
memory = 512
vif = [ 'bridge=xenbr0,mac=00:16:3e:de:b1:a0,vifname=debhvm' ]
boot = 'cdn'
#vnc = 1
#vnclisten = "127.0.0.1:1"
PV
Official example: /etc/xen/xlexample.pvlinux
# cat /vm/pv-domu.cfg
name = "pv-domu"
type = "pv"
vcpus = 1
memory = 512
vif = [ 'bridge=xenbr0,mac=00:16:3e:de:b1:a1,vifname=debpv,type=vif' ]
#vfb = [ "vnc=1,vnclisten=127.0.0.1:2" ]
kernel = "/boot/vmlinuz-6.1.0-18-amd64"
cmdline = "root=/dev/xvda1"
# OR one of:
#bootloader = "/usr/bin/pvgrub32"
#bootloader = "/usr/bin/pvgrub64"
# do not use anymore: bootloader = "/usr/bin/pygrub"
PVH
Official example: /etc/xen/xlexample.pvhlinux
# cat /vm/pvh-domu.cfg
name = "pvh-domu"
type = "pvh"
vcpus = 1
memory = 512
vif = [ 'bridge=xenbr0,mac=00:16:3e:de:b1:a2,vifname=debpvh,type=vif' ]
#vfb = [ "vnc=1,vnclisten=127.0.0.1:3" ]
kernel = "/usr/lib/grub-xen/grub-i386-xen_pvh.bin"
# OR:
#kernel = "/boot/vmlinuz-6.1.0-18-amd64"
#cmdline = "root=/dev/xvda1"
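Note that none of the three examples defines a disk, which is why the domUs have nothing to boot. For a real install you would add a disk line like the following (a sketch, paths are examples):
# a raw image file as first disk, plus an installer ISO as CD-ROM
disk = [ 'file:/vm/domu.img,xvda,w', 'file:/vm/debian-installer.iso,xvdc:cdrom,r' ]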
Let's run them now
# start the domU
xl create /path/to/config-file.cfg
# check if it's running, watch the resource usage
xl list
xl top
# shut them down (forcefully, as no OS is installed yet)
# <Domain> means domain id or name
xl destroy <Domain>
xl destroy hvm-domu
xl destroy 1
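Once a domU actually runs an OS with Xen support, prefer a clean stop over destroy (<Domain> as defined in the glossary):
# ask the guest OS to shut down cleanly
xl shutdown <Domain>
# or reboot it
xl reboot <Domain>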
If everything works, perfect: you can now move on to configuring Xen, dom0 and your domUs in more depth!