PREAMBLE

Work in progress. May be messy.

Installation and boot

This is a two step process: install Debian, then install Xen. So first, install Debian ! Any working install can be used as a dom0.
The only thing I would recommend is to remove other hypervisors already installed, to prevent unwanted behaviour (dom0/hypervisor boot errors, virtualization not working, etc).
This means KVM, bhyve, etc. You can keep the libvirt packages, and VirtualBox can stay installed, although it will only work when booting Debian without Xen.
If removing other hypervisors, make sure you really need to apt purge them: you may be able to disable them without uninstalling them, check their documentation.
Not purging allows you to revert (switch ?) to your old virtualization system quickly.

Once you have a working Debian system, you can install the hypervisor.
As with any package, you can choose to install all recommended packages and be done with it, or customize by installing only a few chosen packages.

The easy route is :

apt install xen-hypervisor

This meta-package will install all the required Xen packages and most of the associated tools (QEMU, virtual BIOS/UEFI, grub-xen, bridge-utils, etc) for your architecture.
You will have all the tools required to run HVM, PV and PVH domUs. The command above assumes "Recommended" packages are installed automatically (the default in Debian).

It will also set up the bootloader (GRUB on amd64) to boot into Xen by default.
For now don't configure anything (use the defaults), and after the installation, reboot Debian, BUT when in GRUB, press an arrow key to stop the countdown so you can observe the new menu entries.
You will have two options (per kernel in Advanced options) :

Booting into Xen is the default, but you can configure this behaviour.
Booting into "Debian only" skips booting the hypervisor, to run plain Debian, un-virtualized, allowing you to reconfigure/remove/reinstall Xen (or dom0/Debian) in case of problems, but not to run any domU.
Also, other virtualization platforms should work : boot into "plain" Debian and run for example Virtualbox VMs.

Some explanation about the difference between Xen the hypervisor, and dom0 the "hardware and control domain".
Excerpt from Xen doc: (admin-guide/introduction.html)

If you want a lightweight or specialized Xen install, you could use apt's "--dry-run" switch to inspect what packages are going to be installed, and then tweak your install using "--no-install-recommends" and listing individual packages manually.
A sane basis can be :
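# a minimal sketch; package names assume Xen 4.17 on Debian 12 amd64,
# check "apt search xen" for your release and architecture
apt install --no-install-recommends xen-hypervisor-4.17-amd64 xen-utils-4.17 xenstore-utils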

Then, depending on whether you want to run only HVM domUs, only PV/PVH, or both, you may need to pull in additional packages: mainly qemu-* and related packages for HVM.
(An overview of Xen and related packages is available here).

Configure Xen and dom0

You can configure the Linux and Xen command lines, the xl toolstack defaults, dom0 networking, and storage, each described in its own section below.

Command lines

Kernel

For the Linux kernel and the GRUB bootloader, you set options as usual in the "/etc/default/grub" file, in GRUB_CMDLINE_LINUX_DEFAULT or GRUB_CMDLINE_LINUX. Consult your kernel documentation for a full list.
You can set "normal" dom0 options (hardware support, cryptsetup, etc), but you may be directed to set some options here to allow your setup to work with Xen.
For example, you could set "xen-pciback=(42:00.0)" to prepare the device for PCI passthrough, or "console=hvc0 earlyprintk=xen" to get serial console output.
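As a hedged sketch (the PCI ID and options are illustrative, taken from the examples above), this could look like :

GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX="xen-pciback.hide=(42:00.0) console=hvc0 earlyprintk=xen"

Run update-grub afterwards to regenerate the GRUB configuration.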

Xen

The Xen command line is set in "/etc/default/grub.d/xen.cfg", in the GRUB_CMDLINE_XEN or GRUB_CMDLINE_XEN_DEFAULT lines.
There are many options available, a few of which are described directly in the file. Check "misc/xen-command-line.html" in the Xen docs.
This is an example :
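# in /etc/default/grub.d/xen.cfg (values are illustrative); run update-grub after editing
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=4096M,max:4096M dom0_max_vcpus=2 dom0_vcpus_pin"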

Toolstack options (xl)

The config file for the xl toolstack is /etc/xen/xl.conf.
Most options in the file have a comment explaining what they do; otherwise run man xl.conf or grep the docs.
For default installations, you don't have to edit anything.
You can set dom0 auto ballooning, the default bridge for domUs, the CPU affinity, etc.
Better check the official man pages concerning those options !

For example :
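# in /etc/xen/xl.conf (the bridge name is illustrative)
vif.default.bridge="xenbr0"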

means that when you create a domU without assigning a bridge in its config file, it will automatically use this one.

If you use CPU pinning, you can set the "global vcpu hard affinity masks" for domUs, like
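# in /etc/xen/xl.conf: keep every type of domU off pCPUs 0-1 (mask value is illustrative)
vm.cpumask="all,^0-1"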

This would reserve physical CPUs 0-1 for dom0 by preventing any type of domU from running on them. It is useful in combination with the Xen command line options dom0_max_vcpus and dom0_vcpus_pin.

To disable dom0 memory ballooning, which is recommended in some cases :
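# in /etc/xen/xl.conf (usually combined with dom0_mem= on the Xen command line)
autoballoon="off"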

Networking in dom0

dom0 itself may or may not have network access, on the same subnets as the domUs or on separate ones.
But for your domains to access an external network, dom0 must be configured, one way or another.

There are a few modes of network operation in Xen: bridging, routing, and NAT; NIC PCI passthrough can be considered another.
You can use any combination of them, this really depends on your setup.
Some reference on the Xen wiki. If one of your guests needs access to a specific NIC, use PCI passthrough and let the guest OS handle networking.

Once you forget the concept of virtual machines, it's like setting up a real network with physical hosts, except lines in config files replace Ethernet routers, switches, network cards and cables !
The routers/switches are declared in "backends" config files, so /etc/network/interfaces{,.d/} in Debian, and the NICs and cables in domUs config files.
As it has more to do with networking than with virtualization, the information here may be quite succinct.

dom0 is the default network backend, but "Network driver domains" can act as backends too.
In a domU backend, you would configure networking/bridging like you would on dom0. It could use a virtual or passed-through interface. Then dom0 may have no network interaction at all with those domUs.
You can also use SR-IOV to share a single NIC between several domains, but that requires special hardware.

Bridging

In this mode, you create bridges in dom0 (or a domU), and then "hotplug" the domUs' virtual NICs onto them.
Bridges may be seen as virtual switches: they allow real and virtual interfaces to communicate with each other.
You can have as many bridges as you like, depending on your needs and config.
You can link them to a real/physical NIC adapter or not. If not, the bridge will only allow domUs to communicate together.

In Debian, the network configuration is done in /etc/network/interfaces for permanent usage.
Of course you can also create temporary bridges with ip (or brctl, deprecated) and assign domUs on those bridges for quick tests.
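For instance, a quick throwaway bridge could be set up like this (interface names are illustrative, and nothing here survives a reboot) :

ip link add name brtest0 type bridge
ip link set brtest0 up
ip link set eth0 master brtest0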

Example of a simple bridge using a real NIC adapter :

# the real NIC adapter
iface eth0 inet manual

# example bridge with PHY binding and IP
auto xenbr0
iface xenbr0 inet static
  bridge_ports eth0
  bridge_stp off
  bridge_waitport 0
  bridge_fd 0
  # bridge IP
  address 10.0.0.1/24
  gateway 10.0.0.254
  post-up ip link set $IFACE address 00:16:3e:d0:d0:d0

This bridge will allow any domU whose vNIC is also on this bridge to use the "eth0" network adapter to communicate with external hosts.
dom0 is also using this adapter for external access, as the bridge has an IP (and a gateway).

If you have two NICs (or more) in your system, you can reserve one for dom0 management purposes, and the others for the domUs :

# management network, dom0 only
iface eth0 inet manual

auto br-mgmt0
iface br-mgmt0 inet dhcp
  bridge_ports eth0
  bridge_stp off
  bridge_waitport 0
  bridge_fd 0

# domUs network, for all vNICs
iface eth1 inet manual

auto br-domu0
iface br-domu0 inet manual
  bridge_ports eth1
  bridge_stp off
  bridge_waitport 0
  bridge_fd 0

Dom0 can communicate with remote hosts via the eth0 NIC only, and the domUs via eth1 only.
Note that here the management bridge is useless : you could use eth0 directly (iface eth0 inet dhcp) if dom0 is the only host using this NIC. Using a bridge is only useful if some domUs will also use the management bridge.

Just take care when using several bridges: inter-bridge communication is not normally possible !
However, you can create a domU with interfaces on both bridges to act as a router, or even use dom0 to do that (kernel routing and iptables/nftables).
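A rough sketch of the dom0-as-router idea (purely illustrative; interface names, addressing and firewall policy are up to you) :

# allow dom0 to forward IPv4 packets between its interfaces/bridges
sysctl -w net.ipv4.ip_forward=1
# NAT the domU traffic leaving through eth0, using nftables
nft add table ip nat
nft 'add chain ip nat postrouting { type nat hook postrouting priority 100 ; }'
nft add rule ip nat postrouting oifname "eth0" masquerade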

If you need advanced functions for bridges, you can use openvswitch.
VLAN-aware bridging is also possible.

Other network modes

The routing and NAT modes use the dom0 kernel capabilities to act as a gateway for domUs. Like bridging, you could also use a domU for that.
Check the Xen wiki for more information. Or Debian networking guides on this wiki.

Storage

Storage is needed for the hypervisor and dom0 on one side, and for the domUs' virtual disks on the other (see the two subsections below).

There are several solutions; they don't really depend on Xen, but rather on dom0 and/or the platform bootloader, as well as on your personal needs, choices, and hardware capabilities.
Use cases range from "small" SoCs storing a few domains locally, to cloud providers live-migrating gazillions of them between specialized networked storage arrays !

If you can boot Debian, you should be good to go. Plain ext4, LVM, ZFS, root-on-ZFS, NFS, iSCSI, FC, RAID, (SMB ?), etc, pick your poison(s) !
I've never read that Xen wouldn't start from some underlying storage solution, but what do I know ! Use what works for you.
Once Xen and dom0 are booted, the magic of *nixes allows you to run domUs from any folder mounted in dom0.
So in the Debian wiki, "stored on dom0 filesystem" means dom0 can access the storage; how it does so is unimportant.
"Light" setups use a single drive for everything. A server may be diskless and "netboots".

Xen and dom0

The hypervisor is stored in /boot, alongside dom0's kernels, initrds and bootloader files (GRUB or equivalent). On Debian, you may have two versions of the hypervisor in there.
Sample tree of /boot:

grub [directory]
config-6.1.0-11-amd64
config-6.1.0-17-amd64
initrd.img-6.1.0-11-amd64
initrd.img-6.1.0-17-amd64
System.map-6.1.0-11-amd64
System.map-6.1.0-17-amd64
vmlinuz-6.1.0-11-amd64
vmlinuz-6.1.0-17-amd64
xen-4.14-amd64.config
xen-4.17-amd64.config
xen-4.14-amd64.efi
xen-4.17-amd64.efi
xen-4.14-amd64.gz
xen-4.17-amd64.gz

This example is from a "standard PC". Not sure how ARM boots.

domUs

For domUs, the simplest method is a raw or qcow disk image file stored on dom0's filesystem.
Or disk images accessed via NFS or iSCSI and stored on a remote host, or via QEMU's nbd.
This remote storage host can be a local domU, using for example a passed-through PCI storage adapter.
Just take care of the chicken-and-egg problem: a domU's filesystem can't be stored inside that domU itself !
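As an illustration (paths and sizes are arbitrary), such an image can be created in dom0 with standard tools :

# sparse 20 GiB raw image
dd if=/dev/zero of=/srv/xen/domu1-disk.img bs=1 count=0 seek=20G
# or with qemu-img, raw or qcow2
qemu-img create -f qcow2 /srv/xen/domu1-disk.qcow2 20G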

The 9p file system

This is the simplest method to share a filesystem between domains. It can be seen as Xen's version of VirtualBox's "shared folders".
You can share a dom0 (or any other backend) folder for any other domU to access, without the need for networked file systems, so it also works for domUs with no network.
For more info, read the page in this wiki that talks about domUs configuration, or if not (yet) available, the Xen docs.
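As a hedged sketch (the tag, path and mount point are illustrative; check xl.cfg(5) for the exact syntax of your Xen version), sharing a dom0 directory could look like this :

# in the domU .cfg file: export dom0's /srv/shared under the tag "shared0"
p9 = [ "tag=shared0,security_model=none,path=/srv/shared" ]
# inside the domU, mount it over the Xen 9p transport
mount -t 9p -o trans=xen shared0 /mnt/shared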


CategoryVirtualization CategoryXen