Differences between revisions 178 and 265 (spanning 87 versions)
Revision 178 as of 2011-06-16 13:02:26
Size: 35910
Editor: ?Hyacinthe Cartiaux
Comment: ordering
Revision 265 as of 2021-09-05 15:21:56
Size: 25503
Editor: ThiagoPezzo
Comment: add pt_BR link in translation header
Deletions are marked like this. Additions are marked like this.
Line 2: Line 2:
||<tablewidth="100%"style="border: 0px hidden ;">~-Translation(s): [[id/Xen|Indonesian]] -~ ||<style="border: 0px hidden ; text-align: right;"> (!) [[/Discussion]] || ||<tablewidth="100%"style="border: 0px hidden ;">~-[[DebianWiki/EditorGuide#translation|Translation(s)]]: [[id/Xen|Indonesian]] - [[es/Xen|Español]] - [[pt_BR/Xen|Português (Brasil)]] -~ ||<style="border: 0px hidden ; text-align: right;"> (!) [[/Discussion]] ||
Line 5: Line 5:
= Xen Overview =
Modern computers are sufficiently powerful to use virtualization to present the illusion of many smaller virtual machines (VMs), each running a separate operating system instance. Successful partitioning of a machine to support the concurrent execution of multiple operating systems poses several challenges. Firstly, virtual machines must be isolated from one another: it is not acceptable for the execution of one to adversely affect the performance of another. This is particularly true when virtual machines are owned by mutually untrusting users. Secondly, it is necessary to support a variety of different operating systems to accommodate the heterogeneity of popular applications. Thirdly, the performance overhead introduced by virtualization should be small.

'''Xen''' is a virtual machine monitor for x86 that supports execution of multiple guest operating systems with unprecedented levels of performance and resource isolation. Xen is Open Source software, released under the terms of the GNU General Public License. We have fully functional ports of Linux 2.6 running over Xen, and regularly use them for running demanding applications like MySQL, Apache and PostgreSQL. Any Linux distribution (!RedHat, SuSE, Debian, Mandrake) should run unmodified over the ported OS.

In addition to Linux, members of Xen's user community have contributed or are working on ports to other operating systems such as NetBSD (Christian Limpach), FreeBSD (Kip Macy) and Plan 9 (Ron Minnich).


== Different types of virtualization offered by Xen ==

There are two different types of virtualization offered by Xen:
 * Para-virtualization and
 * Hardware-supported virtualization

=== Para-virtualization ===
A term used to describe a virtualization technique in which the operating system is aware that it is running on a hypervisor rather than directly on hardware. The operating system must be modified to accommodate this situation.

=== Hardware Virtual Machine ===
A term used to describe an operating system that is running in a virtualized environment unchanged and unaware that it is not running directly on the hardware. Special hardware is required to allow this, thus the term HVM.

(Source: What is Xen Hypervisor, [[http://www.xen.org|www.xen.org]])


= Installation on squeeze =

= Xen overview =
Xen is an open-source (GPL) type-1 or baremetal [[http://en.wikipedia.org/wiki/Hypervisor|hypervisor]], which makes it possible to run many instances of an operating system, or indeed different operating systems, in parallel on a single machine (or host).

Some of Xen's key features are:
 * Small footprint and interface (around 1MB in size). Because Xen uses a microkernel design, with a small memory footprint and limited interface to the guest, it is more robust and secure than other hypervisors.
 * Operating system agnostic: Most installations run with Linux as the main control stack (aka "domain 0"). But a number of other operating systems can be used instead, including NetBSD and OpenSolaris.
 * Driver Isolation: Xen has the capability to allow the main device driver for a system to run inside of a virtual machine. If the driver crashes, or is compromised, the VM containing the driver can be rebooted and the driver restarted without affecting the rest of the system.
 * Paravirtualization: Fully paravirtualized guests have been optimized to run as a virtual machine. This allows the guests to run much faster than with hardware extensions (HVM). Additionally, Xen can run on hardware that doesn't support virtualization extensions.

See the [[http://wiki.xen.org/wiki/Xen_Overview|Xen Overview]] on the Xen wiki for more information.

== Guest types ==

Xen supports running two different types of guests: Paravirtualization (PV) and Full or Hardware assisted Virtualization (HVM). Both guest types can be used at the same time on a single Xen system. It is also possible to use techniques used for Paravirtualization in an HVM guest: essentially creating a continuum between PV and HVM. This approach is called PV on HVM. Again see the [[http://wiki.xen.org/wiki/Xen_Overview|Xen Overview]] on the Xen wiki for more information.

== Domain 0 ==

Xen has a special domain called domain 0 which contains drivers for the hardware, as well as the toolstack to control VMs. Domain 0 is often referred to as `dom0`.

= Domain 0 (host) installation =

== Initial installation ==

Before installing Xen you should install Debian on the host machine. This installation will form the basis of Domain 0.

Installing Debian can be done in the usual way using the [[DebianInstaller]]. See the [[http://www.debian.org/releases/stable/releasenotes|Debian Release Notes]] for more information on installing Debian.

In order to install Xen you will need either a [[http://www.debian.org/releases/stable/i386/release-notes/|32-bit PC (i386)]] or [[http://www.debian.org/releases/stable/amd64/release-notes/|64-bit PC (amd64)]] installation of Debian. Although it is recommended always to run a 64-bit hypervisor, note that this does not mean one has to run a 64-bit domain 0. It is quite common to run a 32-bit domain 0 on a 64-bit hypervisor (a so-called "32on64" configuration).

In general you can install your Domain 0 Debian as you would any other Debian install. The main thing to consider is the partition layout since this will have an impact on the disk configurations available to the guests. The Xen wiki has some [[http://wiki.xen.org/wiki/Host_OS_Install_Considerations|Host OS Installation Considerations]] which may be of interest. To paraphrase that source: if your Domain 0 Debian system will be primarily used to run guests, a good rule of thumb is to set aside 4GB for the domain 0 root filesystem (/) and some swap space (swap=RAM if RAM<=2GB; swap=2GB if RAM>2GB). The swap space should be determined by the amount of RAM provided to Dom0, see [[#dom0mem|Configure Domain 0 Memory]]

Use the rest of the disk space for an LVM physical volume.

If you have one disk, the following is a reasonable setup:
create 3 physical partitions: sda1, sda2, sda3. The root (ext4) and swap will be on the first two, and the remainder will be under Logical Volume Management (LVM). With the LVM setup, create one physical volume and then one volume group. Give the volume group a name, such as `vg0`.
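The swap rule of thumb given above (swap = RAM if RAM <= 2GB, otherwise 2GB) can be sketched as a small shell helper; the function name `dom0_swap_mb` is illustrative only, not part of any Debian tool:

{{{
# Suggested dom0 swap size in MB, following the rule of thumb above:
# swap = RAM if RAM <= 2GB, otherwise 2GB. (illustrative helper)
dom0_swap_mb() {
    ram_mb=$1
    if [ "$ram_mb" -le 2048 ]; then
        echo "$ram_mb"
    else
        echo 2048
    fi
}
}}}

For example, {{{dom0_swap_mb 1024}}} prints 1024, while {{{dom0_swap_mb 8192}}} prints 2048.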

== Installing Xen packages ==
Line 35: Line 46:
The setup described here is tested for Debian Lenny and Ubuntu Maverick virtual machines, but should work for a lot more.

== Dom0 (host) ==

First install the hypervisor, xen kernel and xen-tools. This can be done by a metapackage:

{{{
aptitude -P install xen-linux-system
}}}

To get Xen HVM support, see the [[http://wiki.xensource.com/xenwiki/Xen4.0|Xen 4.0 Wiki]].
The setup described here is tested for Debian Squeeze and Ubuntu Maverick virtual machines, but should work for a lot more.

First install the hypervisor, xen aware kernel and xen tools. This can be done by a metapackage:

{{{
apt-get install xen-linux-system
}}}

Since Debian Wheezy, it's better to install this metapackage:

{{{
apt-get install xen-system
}}}
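Since the metapackage name depends on the release, the choice can be sketched as a shell helper keyed on the Debian major version (the helper name is illustrative, not part of any Debian tool):

{{{
# Wheezy (Debian 7) and later ship the "xen-system" metapackage;
# older releases use "xen-linux-system". (illustrative helper)
xen_metapackage() {
    if [ "$1" -ge 7 ]; then
        echo xen-system
    else
        echo xen-linux-system
    fi
}
# e.g.: apt-get install "$(xen_metapackage "$(cut -d. -f1 /etc/debian_version)")"
}}}

Note that /etc/debian_version does not contain a numeric version on testing/unstable systems, so treat this strictly as a sketch.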

== Checking for hardware HVM support ==

Hardware-assisted virtualization requires CPU support for one of the virtualization extensions: AMD Secure Virtual Machine (AMD Virtualization; AMD-V) or Intel Virtual Machine Extensions (VT-x).

On your intended host system, you can run this command:

{{{
egrep '(vmx|svm)' /proc/cpuinfo
}}}
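As a minimal sketch, the check above can be wrapped in a reusable function that reads any cpuinfo-style text from stdin (the function name is illustrative):

{{{
# Report whether CPU flags indicate HVM capability (vmx = Intel VT-x,
# svm = AMD-V); reads cpuinfo-style text from stdin.
has_hvm_flags() {
    if grep -qE '(vmx|svm)'; then
        echo "HVM capable"
    else
        echo "no HVM support"
    fi
}
# e.g.: has_hvm_flags < /proc/cpuinfo
}}}

If neither flag appears you can still run PV guests, just not HVM guests.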

On squeeze (but not on wheezy), the qemu device model package, which provides the necessary emulation infrastructure for an HVM guest, is also required:
Line 50: Line 75:
Debian Squeeze uses Grub 2 whose default is to list normal kernels first, and only then list the Xen hypervisor and its kernels. See also: [[DebianBug:603832|#603832]]

To make the Xen hypervisor (and not just a Xen-ready kernel!) boot by default, it should be the first entry, so swap the order of kernel detection in GRUB:

{{{
mv -i /etc/grub.d/10_linux /etc/grub.d/21_linux
== Prioritize booting Xen over native Linux ==

=== Buster ===

No action is required; the package sets the boot priority.

=== Stretch ===

A patch may be required if your server uses a non-English locale.
More information at https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=865086

=== Wheezy / Squeeze ===

Debian Wheezy and Squeeze use [[Grub]] 2 whose default is to list normal kernels first, and only then list the Xen hypervisor and its kernels.

You can change this to cause Grub to prefer to boot Xen by changing the priority of Grub's Xen configuration script (`20_linux_xen`) to be higher than the standard Linux config (`10_linux`). This is most easily done using `dpkg-divert`:
{{{
dpkg-divert --divert /etc/grub.d/08_linux_xen --rename /etc/grub.d/20_linux_xen
}}}
to undo this:
{{{
dpkg-divert --rename --remove /etc/grub.d/20_linux_xen
}}}
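The divert works because grub-mkconfig runs the scripts in /etc/grub.d in lexical (sort) order, so 08_linux_xen sorts before 10_linux. A quick way to convince yourself of the resulting order:

{{{
# Scripts in /etc/grub.d run in sort order; after the divert above,
# the Xen entries (08_linux_xen) come before the plain Linux ones.
printf '%s\n' 10_linux 08_linux_xen 30_os-prober | sort
}}}

The first name printed is the script whose menu entries appear first.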

After any update to the Grub configuration you must apply the configuration by running:
{{{
Line 59: Line 104:
To avoid getting boot entries for each virtual machine you install on a volume group, disable the GRUB OS prober.

/!\ ToDo: does this problem still happen and under what circumstances? Bug number?

Note that if your computer multi-boots with, for example, Windows, this will also remove its boot entries, which might not be what you want.

Edit /etc/default/grub and add:
{{{
# Disable OS prober to prevent virtual machines on logical volumes from appearing in the boot menu.
GRUB_DISABLE_OS_PROBER=true
}}}
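If you script your host setup, the edit above can be made idempotently; this is a sketch (the helper name is illustrative), so test it on a copy of the file first:

{{{
# Append GRUB_DISABLE_OS_PROBER=true to a grub defaults file, unless
# some GRUB_DISABLE_OS_PROBER setting is already present (sketch).
disable_os_prober() {
    grep -q '^GRUB_DISABLE_OS_PROBER=' "$1" || \
        echo 'GRUB_DISABLE_OS_PROBER=true' >> "$1"
}
# e.g.: disable_os_prober /etc/default/grub
}}}

Running it twice leaves only one copy of the setting in the file.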

You may also want to pass some boot parameters to Xen when starting up in normal or recovery mode. Add these variables to /etc/default/grub to achieve this:
== Networking ==

In order to give network access to guest domains it is necessary to configure the domain 0 network appropriately. The most common configuration is to use a software bridge.

It is recommended that you manage your own network bridge using the [[BridgeNetworkConnections|Debian network bridge]]. The Xen wiki page [[http://wiki.xen.org/wiki/Host Configuration/Networking|Host Configuration/Networking]] also has some useful information. The Xen supplied network scripts are not always reliable and will be removed from a later version. They are disabled by default in Debian's packages.

If you have a router that assigns IP addresses through DHCP, the following is a working example of the `/etc/network/interfaces` file using the bridge-utils package.

{{{
#The loopback network interface
auto lo
iface lo inet loopback

iface eth0 inet manual

auto xenbr0
iface xenbr0 inet dhcp
   bridge_ports eth0

#other possibly useful options in a virtualized environment
  #bridge_stp off # disable Spanning Tree Protocol
  #bridge_waitport 0 # no delay before a port becomes available
  #bridge_fd 0 # no forwarding delay

## configure a (separate) bridge for the DomUs without giving Dom0 an IP on it
#auto xenbr1
#iface xenbr1 inet manual
# bridge_ports eth1
}}}
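If your network uses static addressing instead of DHCP, the same bridge can be configured with a fixed address; the addresses below are placeholders (documentation range) to adapt to your network:

{{{
auto xenbr0
iface xenbr0 inet static
   bridge_ports eth0
   address 192.0.2.10
   netmask 255.255.255.0
   gateway 192.0.2.1
}}}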

== Other configuration tweaks ==

=== Domain 0 memory ===
<<Anchor(dom0mem)>>
By default on a Xen system the majority of the host's memory is assigned to dom0 on boot, and dom0's size is dynamically modified ("ballooned") automatically in order to accommodate new guests as they are started.

However, on a system which is dedicated to running Xen guests it is better to give dom0 a fixed amount of RAM and to disable ballooning.

The following examples use 1024M.

In order to do this you must first add the `dom0_mem` option to your hypervisor command line. This is done by editing `/etc/default/grub` and adding
{{{
# Xen boot parameters for all Xen boots
GRUB_CMDLINE_XEN="dom0_mem=1024M,max:1024M"
}}}
at the bottom of the file.

Note: on servers with a large amount of memory, the Xen kernel may crash at boot unless you set a dom0 memory limit. Take care: on Wheezy, 1024M is not enough and causes a kernel crash at boot with an out-of-memory message.

Remember to apply the change to the grub configuration by running `update-grub`!

Then edit `/etc/xen/xend-config.sxp` to configure the toolstack to match by changing the following settings:
{{{
(dom0-min-mem 1024)
(enable-dom0-ballooning no)
}}}

With the new xl toolstack, edit {{{/etc/xen/xl.conf}}} and disable autoballoon with {{{autoballoon="0"}}}

At this point you should reboot so that these changes take effect.

=== Domain 0 CPUs ===
There are some useful tweaks of dom0 cpu utilization.

By default all CPUs are shared among dom0 and all domUs (guests). This can hurt dom0's responsiveness if guests consume too much CPU time.
To avoid this, it is possible to dedicate one (or more) processor cores to dom0 and also pin dom0 to them.

Add the following options to /etc/default/grub to allocate one CPU core to dom0:
{{{
dom0_max_vcpus=1 dom0_vcpus_pin
}}}

Then make this change in /etc/xen/xend-config.sxp:
{{{
(dom0-cpus 1)
}}}

=== Guest behaviour on host reboot ===

By default, when Xen dom0 shuts down or reboots, it tries to save (i.e. hibernate) the state of the domUs. Sometimes this fails - for example because of a lack of disk space in /var, or because of software bugs. Since it is also cleaner to simply shut the VMs down when the host shuts down, you can make sure they are shut down normally by setting these parameters in /etc/default/xendomains:

{{{
XENDOMAINS_RESTORE=false
XENDOMAINS_SAVE=""
}}}

=== Boot parameters ===

You may also want to pass some boot parameters to Xen when starting up in normal or recovery mode. Add these variables to `/etc/default/grub` to achieve this:
Line 79: Line 200:
After editing GRUB configuration, you must apply it by running:
{{{
update-grub
}}}

By default, when Xen dom0 shuts down or reboots, it tries to save the state of the domUs. Sometimes this fails, and since it is also cleaner to simply shut the VMs down when the host shuts down, you can make sure they are shut down normally by setting these parameters in /etc/default/xendomains:

{{{
XENDOMAINS_RESTORE=false
XENDOMAINS_SAVE=""
}}}

In /etc/xen/xend-config.sxp enable the network bridge by uncommenting the line that is already there for that purpose. (You may check the [[http://wiki.xensource.com/xenwiki/XenNetworking| XenNetworking]] page in the Xen wiki.)

{{{
(network-script 'network-bridge antispoof=yes')
}}}

Setting antispoof=yes activates the Xen firewall, which prevents a VM from using an IP address it is not allowed to use (for example, if a domU were to use the gateway's IP it could seriously break your network; this prevents that). In this case, you will need to specify the IP of each domU in the vif statement of its configuration.

If enabling the Xen network bridge does not work, try enabling the Debian network bridge. Details [[http://wiki.debian.org/BridgeNetworkConnections | here]].

If you get "missing vif-script, or network-script, in the Xen configuration file", try making the scripts executable:
{{{
chmod +x /etc/xen/scripts/*
}}}

This config file also has options to set the memory and CPU usage for your dom0, which you might want to change.
To reduce dom0 memory usage as it boots, use the dom0_mem kernel option in the aforementioned GRUB_CMDLINE_XEN variable. Xen wiki also advise to disable dom0 memory ballooning and set minimal memory in /etc/xen/xend-config.sxp (1024M is an example) :

{{{
(dom0-min-mem 1024)
(enable-dom0-ballooning no)
}}}

=== Serial console access ===
Remember to apply the change to the grub configuration by running `update-grub`!

More information on the available hypervisor command line options can be found in the [[http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html|upstream documentation]].

=== PCI pass-through parameters ===

''This information is incomplete for Squeeze and needs to be updated for Wheezy''

To enable PCI pass-through, you need to know the BDF (Bus, Device, Function) id of the device. This is obtained through the `lspci` command, with the output containing the BDF in the format: (BB:DD.F) at the start of the line. To hide a device from Dom0 you will need to pass these boot parameters to Xen when starting. For example if using a Dom0 with 512M of memory and two devices at 01:08.1 and 01:09.2, add these variables to `/etc/default/grub` to achieve this:
{{{
# Xen boot parameters for all Xen boots
GRUB_CMDLINE_XEN="dom0_mem=512M pciback.hide=(01:08.1)(01:09.2)"
# Xen boot parameters for non-recovery Xen boots (in addition to GRUB_CMDLINE_XEN)
GRUB_CMDLINE_XEN_DEFAULT="something else"
}}}

For Squeeze use "pciback.hide" (kernels < 2.6.32.10); for Wheezy (not yet tested) use "xen-pciback.hide".

''for Squeeze you need to pass all of the devices on the bus, e.g. to pass any device on the 01:DD.F bus, you have to pass all of them: (01:08.1)(01:09.2)(01:09.3) etc.''

Remember to apply the change to the grub configuration by running `update-grub`!

At least in Wheezy (not tested in Squeeze), the xen-pciback module must additionally be configured through modprobe and added to the initramfs.

Configure the xen-pciback module by adding a modprobe include file (e.g. `/etc/modprobe.d/xen-pciback.conf`) with the following content (given that the PCI device would be assigned to module e1000e normally):
{{{
install e1000e /sbin/modprobe xen-pciback; /sbin/modprobe --first-time --ignore-install e1000e
options xen-pciback hide=(0000:03:00.0)
}}}
Add the xen-pciback module to initramfs by adding it to `/etc/initramfs/modules` and running `update-initramfs -u` afterwards.

Please note that PCI passthrough is broken when MSI is enabled (the default) in Linux kernels < 3.14. Use a Linux kernel >= 3.14 in the DomU/VM, or set pci=nomsi for the DomU/VM kernel as a workaround.
See the following thread for detailed information: http://thread.gmane.org/gmane.comp.emulators.xen.user/81944/focus=191437
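Since on Squeeze every function on a bus must be hidden, building the hide list by hand is error-prone. Here is a hedged sketch that assembles the list for a whole bus from `lspci` output (the helper name is illustrative; check its output before putting it on the Xen command line):

{{{
# Build a "(BB:DD.F)(BB:DD.F)..." list covering every device on one
# PCI bus, from `lspci` output on stdin (sketch; adapt before use).
hide_arg_for_bus() {
    awk -v bus="$1" '$1 ~ "^" bus ":" { printf "(%s)", $1 } END { print "" }'
}
# e.g.: lspci | hide_arg_for_bus 01
}}}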

=== Serial console ===
Line 127: Line 248:
Here's what I used to configure the serial console (for a Supermicro X8STi-F motherboard with IPMI and SOL):

{{{
GRUB_CMDLINE_XEN="loglvl=all guest_loglvl=all com1=115200,8n1,0x3e8,5 console=com1,vga"
GRUB_CMDLINE_LINUX="console=hvc0 earlyprintk=xen"
}}}
Line 135: Line 263:
With systemd, you do not have an {{{/etc/inittab}}} any more. systemd will spawn a getty on {{{/dev/hvc0}}} if you specify {{{console=hvc0}}} on the kernel command line.
Line 139: Line 269:
== DomU (guests) ==

If you want, you can also use tools that allow easy setup of virtual machines, such as:

 * xen-tools - ''apt-get install xen-tools''
 * dtc-xen can also be used for that, if you disable its SOAP daemon (you would disable it using: update-rc.d -f dtc-xen remove). DTC-Xen also offers installation of CentOS VMs using yum, which might be handy as well.

Once you have a functional dom0 host machine, you can create virtual machines with this command:

{{{
xen-create-image --hostname <hostname> --ip <ip> --scsi --vcpus 2 --pygrub --dist <lenny|maverick|whatever>
}}}

To configure xen-tools, you can edit /etc/xen-tools/xen-tools.conf which contains default values that the xen-create-image script will use.

Read the xen-create-image manual page for information on the available options.

In case you use xen-tools with ''--role'' be aware of [[DebianBug:588783|#588783]].

These are some real-life examples of params that may need to be changed:

{{{
# Virtual machine disks are created as logical volumes in volume group 'universe' (hint: LVM storage is much faster than file)
lvm = universe
 
size = 50Gb # Disk image size.
memory = 512Mb # Memory size
swap = 2Gb # Swap size
 
# Default gateway and netmask for new VMs
gateway = x.x.x.x
netmask = 255.255.255.0
 
# When creating an image, interactively set up the root password
If you need to debug Xen and see a crash dump of the kernel, you can do it using IPMITool if your server has SOL:

{{{
ipmitool -I lanplus -H server-ip-address -U your-username sol activate | tee my-log-file.txt
}}}

= DomU (guest) installation =

== Using xen-tools ==

DebianPkg:xen-tools is a set of scripts which can easily create fully configured Xen guest domains.

Once you have installed dom0 you can install xen-tools on your host with:
{{{
apt-get install xen-tools
}}}

To configure xen-tools, you can edit `/etc/xen-tools/xen-tools.conf` which contains default values that the xen-create-image script will use. The xen-create-image(8) manual page contains information on the available options.

To set a different path where the domU images are saved, and to enable setting the superuser password during the initial build, edit the `/etc/xen-tools/xen-tools.conf` file and uncomment these lines:

{{{
dir = /home/xen/
Line 174: Line 293:
  # Let xen-create-image use pygrub, so that the grub from the VM is used, which means you no longer need to store kernels outside the VMs. Keeps things very flexible.
pygrub=1
}}}

== Possible problems and bugs ==

 * If your domU kernel happens to miss support for the xvda* disk devices (the xen-blkfront driver), use the --scsi option that makes the VM use normal SCSI HD names like sda*. You can also set scsi=1 in /etc/xen-tools/xen-tools.conf to make this the default.

 * Debian Bug [[DebianBug:584152|#584152]] Error during xen-create-image: {{{mkfs.ext3: /lib/libblkid.so.1: version `BLKID_2.17' not found (required by mkfs.ext2)}}}. Solve this by downgrading the mkfs tool.

 * Error starting xend 3.4. Use the 4.x from current squeeze packages. [[http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1620|Xen Bug #1620]]


== Upgrading/transition ==
}}}

Then you can create virtual machines with this command:
{{{
xen-create-image --hostname <hostname> --ip <ip> --vcpus 2 --pygrub --dist <lenny|squeeze|maverick|whatever>
}}}

To start the created VM run the command:

{{{
xl create /etc/xen/<hostname>.cfg
}}}

To erase a VM image (including its main directory) run the command:

{{{
xen-delete-image VMs_name
}}}
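To act on several guests at once you can loop over their config files. This sketch only prints the xl commands it would run (a dry run, with an illustrative function name); drop the echo once you trust it:

{{{
# Print an `xl create` command for every guest config in a directory
# (dry-run sketch; remove the echo to actually start the guests).
start_all_guests() {
    for cfg in "$1"/*.cfg; do
        [ -e "$cfg" ] || continue
        echo xl create "$cfg"
    done
}
# e.g.: start_all_guests /etc/xen
}}}

Note that the xendomains init script (configured via /etc/default/xendomains above) already handles starting guests at boot, so this is only for ad-hoc use.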

=== Possible problems and bugs ===

 * If your domU kernel happens to miss support for the `xvda*` disk devices (the `xen-blkfront` driver), use the `--scsi` option that makes the VM use normal SCSI HD names like `sda*`. You can also set `scsi=1` in `/etc/xen-tools/xen-tools.conf` to make this the default.

 * When using xen-tools with ''--role'' on Squeeze be aware of [[DebianBug:588783|#588783]]: 'Should mount /dev/pts when creating image'. This is fixed in Wheezy.

== Using Debian Installer ==

The Xen wiki page [[http://wiki.xen.org/wiki/Debian_Guest_Installation_Using_Debian_Installer|Debian Guest Installation Using DebianInstaller]] contains instructions on how to install Xen DomU from Lenny onwards using [[Debian Installer]].

== Booting guests ==

The Xen wiki page [[http://wiki.xen.org/wiki/PvGrub2|PvGrub2]] contains instructions on how to use PV Grub2 with PV guests from Jessie onwards.

=== Upgrading domU kernels ===

A trivial way to boot a domU is to use a kernel in /boot of dom0. Unfortunately, this means that to upgrade the kernel used by a domU, one needs to synchronize with the dom0 kernel. I.e. if the dom0 kernel is upgraded, then the domU will not be able to load its kernel modules since the version will be different. Conversely, if the kernel is upgraded inside the domU, i.e. its kernel modules get upgraded, then the dom0 kernel will not fit either.

One way is to upgrade the kernel inside the domU, and propagate by hand the kernel image into a separate directory of dom0, where the configuration for the domU will look for it.

A better way is to use PyGrub or PvGrub: dom0 holds just a version of grub and uses it to load the kernel image from the domU image itself. This avoids having to propagate kernel upgrades: the domU can simply upgrade its kernel and reboot, without even having to destroy the domain. PvGrub is more secure than PyGrub, because PvGrub runs inside the domU itself, thus having the same isolation as the target domU, while PyGrub runs inside dom0, where an eventual compromise would be far more concerning.

= Upgrading/transition =
Line 196: Line 342:
  * pygrub in Xen-4.0 will need to be patched as per [[DebianBug:599243|#599243]]   * xen.independent_wallclock sysctl setting is not available for newer squeeze kernels supporting pvops. If you have relied on it, you would have to run ntpd in dom0 and domUs. [[http://syslog.me/2011/01/05/independent-wallclock-in-xen-4|source]]
Line 203: Line 349:
== Note on kernel version compatibility ==

The new 2.6.32 kernel images have [[http://wiki.xensource.com/xenwiki/XenParavirtOps|paravirt_ops-based]] Xen dom0 and domU support.
== Kernel version compatibility ==

In Debian 6.0 (squeeze) the Linux '686-bigmem' and 'amd64' kernel images have [[http://wiki.xensource.com/wiki/XenParavirtOps|paravirt_ops-based]] Xen domU support. From Debian 7 (wheezy) onward, the '686-pae' and 'amd64' kernel images support running as either dom0 or domU, and no Xen-specific kernel is provided.
Line 209: Line 355:

<<Anchor(InstallLenny)>>
= Installation on lenny =

'''lenny or 5.0 is an old stable Debian release. It will cease to receive security or any other kind of support in 2012. If you're installing new machines, don't install lenny, install squeeze (see above)!'''

''This information is preserved only for completeness.''

The short story is the following:

 1. Install Lenny. Finish the installation as you would normally do
 2. apt-get install xen
 3. apt-get install xen-utils
 4. apt-get install xen-tools
 5. install a xen dom0-capable kernel
 6. reboot into xen
 7. create your domains
 8. manage your domains


== Dom0 (host) ==

After you are done installing the base OS, xen-utils and xen-tools, you will need to install a xen-capable dom0 kernel. The kernel is 2.6.26, the -xen variant contains patches from SuSE for dom0 support.

The xen-linux-system packages of interest are (Install the correct one for your architecture):

 * [[DebianPkg:lenny/xen-linux-system-2.6.26-2-xen-686|xen-linux-system-2.6.26-2-xen-686]] and [[DebianPkg:lenny/xen-linux-system-2.6.26-2-xen-amd64|xen-linux-system-2.6.26-2-xen-amd64]]

=== Serial console access ===
To get output from grub, XEN, the kernel and getty (login prompt) via ''both'' vga and serial console to work, here's an example of the right settings when using Lenny kernels and Xen 3.2:

In {{{/boot/grub/menu.lst}}}:

{{{
serial --unit=0 --speed=9600 --word=8 --parity=no --stop=1
terminal --timeout=5 serial console
[...]
title Xen 3.2-1-amd64 / Debian GNU/Linux, kernel 2.6.26-2-xen-amd64
root (hd0,0)
kernel /boot/xen-3.2-1-amd64.gz com1=9600,8n1 console=com1,vga
module /boot/vmlinuz-2.6.26-2-xen-amd64 root=/dev/md0 ro console=tty0 console=hvc0
module /boot/initrd.img-2.6.26-2-xen-amd64
}}}
In contrast to the Etch configuration, there's no ttyS0 in the vmlinuz line!

In {{{/etc/inittab}}} you need at least these lines:

{{{
1:2345:respawn:/sbin/getty 38400 hvc0
2:23:respawn:/sbin/getty 38400 tty1
# NO getty on ttyS0!
}}}
The tty1 will show up at the vga output, and the hvc0 will show up at the serial console.

== DomU (guests) ==
The Lenny Debian Installer fully supports installation of 32 bit guests under Xen using the netboot/xen variant. Images are available on any Debian mirror in the [[http://ftp.debian.org/debian/dists/lenny/main/installer-i386/current/images/netboot/xen/|installer directory]] and contain a kernel, installer ramdisk and an example Xen configuration file. To install, fetch the [[http://ftp.debian.org/debian/dists/lenny/main/installer-i386/current/images/netboot/xen/xm-debian.cfg|xm-debian.cfg]] configuration file, edit to suit your tastes, and start the guest with the {{{install=true}}} option plus an optional (but strongly recommended) {{{install-mirror=ftp://ftp.XX.debian.org/debian}}}.

{{{
xm create -c xm-debian.cfg install=true install-mirror=ftp://ftp.XX.debian.org/debian
}}}
Newer images are also available from the [[http://people.debian.org/~joeyh/d-i/images/daily/netboot/xen/|daily builds]]. After grabbing the [[http://people.debian.org/~joeyh/d-i/images/daily/netboot/xen/xm-debian.cfg|xm-debian.cfg]] configuration file and editing it to suit your tastes, start the guest with an additional {{{install-installer=http://people.debian.org/~joeyh/d-i/images/daily/}}} to manually direct it to the daily builds:

WARNING: if you do not change the hard disk option in xm-debian.cfg, this WILL overwrite your dom0 instead of installing to your domU. YOUR MACHINE WILL BE DESTROYED.

{{{
xm create -c xm-debian.cfg install=true \
  install-mirror=ftp://ftp.XX.debian.org/debian \
  install-installer=http://people.debian.org/~joeyh/d-i/images/daily/
}}}
See the comments in the configuration file for additional installation options.

Another way of creating a lenny domU is the following:
{{{
xen-create-image --hostname=vanila --size=8Gb --dist=lenny --memory=512M --ide --dhcp
}}}
Please note that the --dir option may be required; it specifies the directory where your disk images are stored. If you wish to specify a fixed IP address, use --ip xxx.xxx.xxx.xxx instead of the --dhcp option.

Once the guest is installed simply boot it using:

{{{
xm create -c xm-debian.cfg
}}}
Lenny only includes 32 bit (PAE) kernel support which means there is no installer support for 64 bit guests. You could continue to use the Etch kernels or obtain a newer upstream kernel which supports 64 bit operation (2.6.27+).

In addition to installing via Debian Installer xen-tools can also create a Lenny domU as described below.

The default Lenny kernel is the newer paravirt_ops version (2.6.26), which does not function as a dom0 (except for the -xen variants, which have dom0 support but also some issues running as domU; please clarify). It will also not support PCI passthrough in a domU. For PCI passthrough, you have to either:
 * run the 2.6.18 etch kernel (as both dom0 and domU), or
 * upgrade to new paravirt_ops-style kernels

In Lenny the distinction between the Xen and non-Xen flavours of the kernel (with respect to domU support) is no longer present. The Debian Installer will install the -686-bigmem flavour.

== Notes on kernel version compatibility ==

 * dom0 as well as domU works on kernel 2.6.26 from Lenny
 * a Lenny dom0 on amd64 can run any domU (Etch or Lenny, i386 or amd64);
 * a Lenny dom0 on i386 can, or should be able to, run any 32-bit domU (Etch or Lenny).

== Possible problems and bugs ==

=== No login prompt when using `xm console` ===

Using a lenny domU, make sure you have {{{hvc0}}} listed in inittab, like {{{1:2345:respawn:/sbin/getty 38400 hvc0}}}. The default console device used by Xen has changed several times (tty1, xvc0, hvc0 etc.), but for a Lenny domU (version > 2.6.26-9) it is {{{hvc0}}}.

=== 'clocksource/0: Time went backwards' ===

If a domU crashes or freezes while uttering the famous last words 'clocksource/0: Time went backwards', your domU is likely using the xen clocksource instead of its own clock ticks. In practice, this seems to be the cause of infrequent lockups under load (and/or problems with suspending).

see http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1098

==== workaround #1 ====
A workaround is to decouple the clock in the domU from the dom0:

In your dom0 and domU {{{/etc/sysctl.conf}}} add the line: {{{xen.independent_wallclock=1}}}. On the dom0, edit the configuration file of the domU (e.g. {{{/etc/xen/foobar.cfg}}}) and add (or expand) the extra line: {{{extra="clocksource=jiffies"}}}.

These settings can be activated without rebooting the domU. After editing the configuration files, issue {{{sysctl -p}}} and {{{echo "jiffies"> /sys/devices/system/clocksource/clocksource0/current_clocksource}}} on the domU prompt.

Because the clock will no longer rely on the dom0 clock, you probably need to run ntp on the domU to keep it properly synchronized.

==== workaround #2 ====
Another possibility is to use the behaviour of the previous Xen kernels' settings: clocksource=jiffies and independent_wallclock=0.

Setting clocksource=jiffies for the dom0 and each domU as a kernel parameter has eliminated the "Time went backwards" messages for me (14 dom0s and 27 domUs running stable for two weeks).
You can check the values with

{{{
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
}}}
and

{{{
cat /proc/sys/xen/independent_wallclock
}}}

With these settings, ntp is only needed in the dom0. If you change the time in a domU while ntp is running on the corresponding dom0, the time will be corrected in the domU within a few minutes.
Hint: I did not manage to influence the domU's time by setting the time in the dom0 with date or hwclock; nevertheless, ntp seems to do this (http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=534978#29).

==== workaround #3 ====

There are cases where setting the clocksource to jiffies just makes the clock more unstable and leads to continuous resets. A working solution appears to be the following:

  * set independent_wallclock to 0 (all domains; VMs will follow dom0's clock)
  * set clocksource to xen (it's the default in lenny)
  * configure ntpd in dom0 only; set "disable kernel" in ntp.conf

This succeeded in stabilizing a Xen server's clock where all other workarounds failed.

More information can be found at http://tinyurl.com/375jza8. You can browse the whole process at http://tinyurl.com/2veotke

=== domU on lenny using xen-tools ===
xen-tools does not use hvc0 as the console device in /etc/inittab and does not install udev (leading to /dev/pts missing in the domU).

This makes logging in via xm console and via ssh impossible, because getty does not have a proper console to attach to and ssh cannot allocate a pseudo-terminal.

To fix this,

1. Add to /etc/xen-tools/xen-tools.conf:

{{{
serial_device = hvc0
}}}
2. Then create the domU with:

{{{
xen-create-image --hostname HOSTNAME (more options...) --role udev
}}}


= Installation on etch =

'''etch or 4.0 is an obsolete Debian release that doesn't have security or any other kind of support. Use newer releases!'''

'''This information is preserved only for completeness'''

Upstream documentation can be found in the {{{xen-docs-3.0}}} package (in /usr/share/doc/xen-docs-3.0/user.pdf.gz). It's also available [[http://www.cl.cam.ac.uk/research/srg/netos/xen/readmes/user/user.html|online]].

== Dom0 (host) ==
 * Choose and install a {{{xen-linux-system-KERNELVERSION}}} package. This installs the kernel, a hypervisor and matching utilities.
 * On i386, install {{{libc6-xen}}}. This means that you don't have to delete {{{/lib/tls}}} or move it out of the way, as suggested by most Xen guides.
 * Use Grub as bootloader (since Lilo and Xen don't play well with one another)
 * You probably want to configure /etc/xen/xend-config.sxp (especially the network-script scheme).
The xen-linux-system packages of interest are (install the correct one for your architecture):

 * Etch: [[DebianPkg:etch/xen-linux-system-2.6.18-6-xen-686|xen-linux-system-2.6.18-6-xen-686]] and [[DebianPkg:etch/xen-linux-system-2.6.18-6-xen-amd64|xen-linux-system-2.6.18-6-xen-amd64]].
If you need to apply some modifications to the kernel with the Xen patch, one way to do it is described at DebianKernelCustomCompilation.

=== Serial console access ===
To get output from grub, Xen, the kernel and getty (login prompt) via ''both'' VGA and serial console, here is an example of the right settings when using Etch kernels and Xen 3.0.3:

In {{{/boot/grub/menu.lst}}}:

{{{
serial --unit=0 --speed=9600 --word=8 --parity=no --stop=1
terminal --timeout=5 serial console
[...]
title Xen 3.0.3-1-i386-pae / Debian GNU/Linux, kernel 2.6.18-6-xen-686
root (hd0,0)
kernel /boot/xen-3.0.3-1-i386-pae.gz com1=9600,8n1 console=com1,vga
module /boot/vmlinuz-2.6.18-6-xen-686 root=/dev/md0 ro console=tty0 console=ttyS0,9600n8
module /boot/initrd.img-2.6.18-6-xen-686
}}}
In {{{/etc/inittab}}} you need at least these lines:

{{{
1:2345:respawn:/sbin/getty 38400 tty1
T0:23:respawn:/sbin/getty -L ttyS0 9600 vt100
}}}

== DomU (guests) ==
The easiest way to create a domU is to use DebianPkg:xen-tools (and, if this doesn't do what you need, Steve Kemp is keen and fast in implementing useful suggestions).

If you do not wish to use xen-tools, you can use [[http://www.debian.org/releases/stable/i386/apds03.html|this alternative guide]] to set up the system using debootstrap.

Xen boots domUs using kernels stored on dom0, so you only need to install the corresponding linux-modules package in the domU. Alternatively, you can use PyGrub to boot kernels on the domU filesystem.

On i386, make sure you install DebianPkg:libc6-xen.

If you install a Lenny domU on an Etch dom0, make sure you read this entry in the [[http://wiki.xensource.com/xenwiki/XenFaq#head-e05786f1e0d6a833bc146a6096cab2d96f2b30ae|XenFaq]] when you see messages on the console like {{{4gb seg fixup, process klogd (pid 2075), cs:ip 73:b7e25870}}}. After applying {{{echo 'hwcap 0 nosegneg' > /etc/ld.so.conf.d/libc6-xen.conf && ldconfig}}} in the dom0 system, reboot, or, if you don't like rebooting (which requires you to stop the domUs), restart all processes mentioned in the log messages (e.g. {{{/etc/init.d/ssh restart}}}, {{{init q}}}, etc.).

== Notes on kernel version compatibility ==

In general:

 * dom0 works with kernel 2.6.18 from Etch, but not with kernel 2.6.24 from Etch-and-a-half!
 * domU should work with both kernels 2.6.18 and 2.6.24
 * an Etch dom0 with a 2.6.18-*-xen kernel can only run 32-bit domUs when it is i386 itself
 * a 64-bit Etch dom0 using the amd64 kernel can run a 64-bit domU
 * a 64-bit Etch dom0 can also run a 32-bit domU, but ''only'' when using the amd64 kernel and a 32-bit userland!

For those who want to test a 2.6.32 domU kernel on an earlier dom0: make sure that the xen-blkfront domU driver is loaded and can find the root and other disk partitions. This is no longer the case if you still use the deprecated hda* or sda* device names in domU .cfg files. Switch to xvda* devices, which also work with 2.6.18 and 2.6.26 dom0 kernels.
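For illustration, a minimal before/after of the relevant lines in a domU .cfg file (the volume group and guest names here are hypothetical):

{{{
# deprecated naming (breaks with xen-blkfront on 2.6.32 domU kernels):
#disk = [ 'phy:/dev/vg0/guest-disk,sda1,w' ]
#root = '/dev/sda1 ro'

# xvda* naming, which also works with 2.6.18 and 2.6.26 dom0 kernels:
disk = [ 'phy:/dev/vg0/guest-disk,xvda1,w' ]
root = '/dev/xvda1 ro'
}}}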

There are also the backward-looking options:
 * Use DebianLenny's 2.6.26, which has forward-ported Xen 2.6.18 dom0 kernel code
 * Use custom 2.6.30 kernels with forward-ported Xen 2.6.18 dom0 kernel code, see http://lists.alioth.debian.org/pipermail/pkg-xen-devel/2009-July/002356.html


== Possible problems and bugs ==
=== error: CDROM boot failure ===

/!\ ToDo: was this etch-only?

You get the error :

 . {{{
CDROM boot failure code 0002
or CDROM boot failure code 0003
Boot from cd-Rom failed
Fatal: Could not read the boot disk.
}}}
That's because Xen cannot boot from a CD-ROM ISO image at the moment, i.e. you can't have {{{tap:aio:/path/to/mycd.iso,hdc:cdrom,r}}} ''or'' {{{file:/path/to/mycd.iso,hdc:cdrom,r}}}.

Workaround: use losetup to create a loopback device for the CD-ROM ISO image, then use it in the Xen configuration file. For example:

 . {{{
# First, check which loop device is free
$ losetup -f
/dev/loop9
# Then attach the ISO image to the free device
$ losetup -f /path/to/mycd.iso
# Verify the association
$ losetup /dev/loop9
/dev/loop9: [fe04]:3096598 (/path/to/mycd.iso)
}}}
Now you can use /dev/loop9 in the Xen configuration file (/etc/xen/foobar.cfg):

 . {{{
...
disk = [ 'phy:/dev/vg1/xpsp3,ioemu:hda,w', 'phy:/dev/loop9,ioemu:hdc:cdrom,r' ]
...
}}}
Then boot/install the guest OS.

Note: you should switch back to the {{{tap:aio:/path/to/mycd.iso,hdc:cdrom,r}}} syntax after installation, since the loopback device has to be recreated after you reboot the host system.

=== 4gb seg fixup errors ===

/!\ ToDo: was this etch-only?

Solution:

{{{
echo 'hwcap 0 nosegneg' > /etc/ld.so.conf.d/libc6-xen.conf && ldconfig
}}}
Read this [[http://wiki.xensource.com/xenwiki/XenFaq#head-e05786f1e0d6a833bc146a6096cab2d96f2b30ae|XenFaq entry]] for more info.

=== NUMA with xen 3.4 ===

/!\ ToDo: was this etch-only?

In order to activate NUMA awareness in the hypervisor on multi-socket AMD and Intel hosts, use the following hypervisor command line options:

{{{
acpi=on numa=on
}}}
By default, NUMA is off.


= Using Debian-Installer =
The page [[DebianInstaller/Xen]] contains instructions on how to install a Xen Dom0 and an Etch DomU with DebianInstaller. See above for details of installing Lenny using Debian Installer.
= Older releases =

[[Xen Installation on lenny|Xen Installation on Debian 5.0 ( Lenny )]]

[[Xen Installation on etch |Xen Installation on Debian 4.0 ( Etch )]]

The page [[DebianInstaller/Xen]] contains instructions on how to install Xen Dom0 and Etch DomU with DebianInstaller.
= Common errors =
== Hangs on boot on system with >32G RAM ==
System with >32G RAM can hang on boot after "system has x VCPUS" and before "Scrubbing Free RAM". This is due to a limitation of the paravirt ops domain 0 kernel in Squeeze which prevents it from using more than 32G.

/!\ ToDo: bug report number?

Add "dom0_mem=32G" to your hypervisor command line to work around this issue.

The remaining RAM will still be available for guest use!

For example, edit /etc/default/grub and edit the variable:

{{{
GRUB_CMDLINE_XEN="dom0_mem=32G"
}}}
You need to configure some basic networking between dom0 and domU. Edit /etc/xen/xend-config.sxp

{{{
#(network-script network-dummy)
(network-script network-bridge)
}}}
for basic bridged networking, and restart xend.

== "Error: Bootloader isn't executable" ==

/!\ ToDo: was this lenny-only?

This rather cryptic error (when starting a domU using xen-utils / xm create) is due to xen-utils not being able to find PyGrub. Modify your xm-debian.cfg config file to use the absolute path (i.e. bootloader="/usr/lib/xen-3.2-1/bin/pygrub" instead of bootloader="pygrub") and your domU should boot up fine.

== "ERROR (XendCheckpoint:144) Save failed on domain mydomu32 (X)." ==

/!\ ToDo: was this lenny-only?

xm save/migration of a 32-bit domU on a 64-bit dom0 fails. It seems this is not supported with linux-image-2.6.26-2-xen-amd64 (http://readlist.com/lists/lists.xensource.com/xen-users/4/24225.html). One workaround is to use a 64-bit hypervisor with a 32-bit dom0 (http://lists.xensource.com/archives/html/xen-users/2008-12/msg00404.html).
See also DebianBug:526695

== Network routing for HVM guests ==

Errors in {{{/var/log/xen/qemu-dm-[.*].log}}}:

{{{
bridge xenbr0 does not exist!
/etc/xen/scripts/qemu-ifup: could not launch network script
}}}

When using routing instead of bridging, there seem to be problems for HVM guests. Here is a rather ugly hack for it.
Prerequisites:

in "/etc/xen/xend-config.sxp"

 . {{{
(network-script 'network-route netdev=<ethX,internet_you_want_to_use>')
(vif-script vif-route)
 }}}

in your domU config file

 . {{{
...
vif = [ 'type=ioemu, mac=00:16:3e:XX:XX:XX, vifname=vif-<domU-name>, ip=<domU-ip>, bridge=<ethX,nic_you_want_to_use>' ]
...
 }}}

then:

In /etc/xen/scripts/qemu-ifup, comment out with a #:

 . {{{
# brctl addif $2 $1
 }}}

and insert:

 . {{{
gwip=`ip -4 -o addr show primary dev "$2" | awk '$3 == "inet" {print $4;exit}'| sed 's#/.*##'`
ip link set "$1" up arp on
ip addr add $gwip dev "$1"
   }}}

After starting your domU:

 . {{{
ip route show

ip route del <domU-ip> dev vif-<domU-name>

# 'ip addr show' should now show a tap device with your <dom0-IP of the ethX,nic_you_want_to_use>
ip addr show

ip route add to <domU-ip> via <dom0-IP of the ethX,nic_you_want_to_use> dev tapX
 }}}

Pretty bad, but it works...



== Network bridging for Xen 4.0 with multiple interfaces ==
See http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=591456

Using Xen 4.0 under Squeeze I had trouble getting my domUs to use a specific NIC in dom0.
The bugfix above might not be fully sufficient to solve it; here is what you have to do:

1. In /etc/xen/xend-config.sxp set "(network-script network-bridge-wrapper)"

2. Create /etc/xen/scripts/network-bridge-wrapper like this (don't forget to chmod 755):

. {{{

#!/bin/bash
# next two lines were good for xen-3.2.1 not for xen-4.0x anymore
#/etc/xen/scripts/network-bridge netdev=eth0 bridge=xenbr0 start
#/etc/xen/scripts/network-bridge netdev=eth1 bridge=xenbr1 start

# this works for xen-4.0x
# xen-utils-common in squeeze don't produce this script (yet) which is needed

if [ ! -f ../scripts/hotplugpath.sh ];then
        echo -e "SBINDIR=\"/usr/sbin\"
BINDIR=\"/usr/bin\"
LIBEXEC=\"/usr/lib/xen/bin\"
LIBDIR=\"/usr/lib\"
SHAREDIR=\"/usr/share\"
PRIVATE_BINDIR=\"/usr/lib/xen/bin\"
XENFIRMWAREDIR=\"/usr/lib/xen/boot\"
XEN_CONFIG_DIR=\"/etc/xen\"
XEN_SCRIPT_DIR=\"/etc/xen/scripts\"" > /etc/xen/scripts/hotplugpath.sh
        chown root:root /etc/xen/scripts/hotplugpath.sh
        chmod 755 /etc/xen/scripts/hotplugpath.sh
fi

/etc/xen/scripts/network-bridge netdev=eth0 start

# if you want to bind a NIC in domU to another interface in dom0 (bridging mode), then:
# 1.) list all dom0 interfaces you want to be able to use (except your eth0!) in "more_bridges" below
# 2.) in the domU config use: vif = [ 'mac=00:16:3e:xx:xx:xx, bridge=ethX' ] with ethX being the original device of dom0 that this domU should use
# 3.) using bridging, all interfaces in dom0 that you want to use have to be properly configured BEFORE you run this script, i.e. before starting xend the first time.
# (use ping -I ethX <your gateway> to CHECK THAT BEFORE, and don't blame me if you plugged the cable into the wrong nic port ;-)
# 4.) remember, in the background xen does move the link to another name, creates a new interface etc etc... we don't care about this here, it just works fine for now

# here I want to prepare two other nics that I can choose from in the domU configs
more_bridges="eth1 eth2"

for i in $more_bridges; do
        ip addr show dev $i | egrep inet > /dev/null 2>&1
        if [ $? == 0 ];then
                ip link set $i down
                /etc/xen/scripts/network-bridge netdev=$i start
                ip link set $i up
        else
                echo -e "\nFailed to set up a bridge!\nYour device $i in dom0 seems not to be configured, so I won't try to use it as part of a bridge for any domU\n"
        fi
done

 }}}

I tested this; it worked and had no side effects at first glance. Still, there is no guarantee ;-)


== "XENBUS: Device with no driver: device/vbd/..." ==

/!\ ToDo: was this lenny-only?

This means you do not have the xen-blkfront/xen-blkback driver loaded.

If you are upgrading from 2.6.26.x (or any other old version) to a 2.6.32.x domU kernel, update-initramfs (running in the 2.6.26.x environment) fails to recognize the need for the xen-*front modules and will not include them in the initrd image, causing the reboot to fail.

On the other hand, if you do have xen-*.ko modules in the initrd image, this message can be ignored; the drivers will be loaded automatically at a later stage.

You need to configure some basic networking between dom0 and domU.

The recommended way to do this is to configure bridging in `/etc/networking/interfaces`. See [[BridgeNetworkConnections]] and/or the Xen wiki page [[http://wiki.xen.org/wiki/Host Configuration/Networking|Host Configuration/Networking]] for details.

 . {i} Note: The use of the 'network-script' option in /etc/xen/xend-config.sxp is no longer recommended.

== 'clocksource/0: Time went backwards' ==

If a domU crashes or freezes while uttering the famous last words 'clocksource/0: Time went backwards', see [[Xen/Clocksource]].

== Error "unknown compression format" ==

The Xen system may fail to load a domU kernel, reporting the error:

{{{
ERROR Invalid kernel: xc_dom_probe_bzimage_kernel: unknown compression format
    xc_dom_bzimageloader.c:394: panic: xc_dom_probe_bzimage_kernel: unknown compression format
}}}

This indicates that your toolstack is not able to cope with the compression scheme used by the kernel binary. Most commonly this occurs with newer kernels which use xz compression when booting on older Xen installations. Debian switched to xz compression from package version 3.6.8-1~experimental.1 onwards.

Xz is supported by the version of Xen in Debian 7 (Wheezy) onwards. If you are running Debian guests on a non-Debian host then you will need to consult the non-Debian host's provider.
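To check which compression a given kernel image uses before trying to boot it as a domU, you can inspect its leading magic bytes. This is a sketch, not an official Xen tool: the sample files below are fabricated so the logic can be demonstrated, and on a real system you would point the function at /boot/vmlinuz-<version> instead.

```shell
# Identify a file's compression format from its magic bytes:
# gzip starts with 1f 8b, xz with fd 37 7a 58 5a 00.
compression_of() {
    magic=$(od -An -N6 -tx1 "$1" | tr -d ' \n')
    case "$magic" in
        1f8b*)         echo gzip ;;
        fd377a585a00*) echo xz ;;
        *)             echo unknown ;;
    esac
}

# Fabricated sample files carrying just the magic bytes:
printf '\037\213' > /tmp/fake-gzip.bin
printf '\375\067\172\130\132\000' > /tmp/fake-xz.bin

compression_of /tmp/fake-gzip.bin   # -> gzip
compression_of /tmp/fake-xz.bin     # -> xz
```

If the result is xz and the toolstack is older than the one in Wheezy, the "unknown compression format" error above is expected.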

See also:

 * Bug [[DebianBug:727736]]
 * [[http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UserProvidedKernels.html|User provided kernels in EC2]]

== Packetloss / txqueuelen 32 ==
If you experience high packet loss, check your vif interfaces for the qlen (in ifconfig output it is called txqueuelen).
The default is 32, but physical interfaces and the interface you will have in your domU usually use something much greater, like 1000. You can change the setting with
{{{
ip link set qlen 1000 dev vif-mydomU
}}}
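To apply this to every vif backend at once, a loop like the following can be used. This is a hedged sketch: list_vifs is a stand-in that returns sample interface names here, and on a real dom0 you would replace it with a listing of /sys/class/net; the echo is left in so the commands are printed rather than executed.

```shell
# Print (or, with the echo removed, run) the qlen fix for each vif interface.
list_vifs() {
    # On a real dom0: ls /sys/class/net | grep '^vif'
    printf 'vif1.0\nvif2.0\n'
}

list_vifs | while read -r dev; do
    echo "ip link set qlen 1000 dev $dev"
done
```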

It's documented in several places but the default is still unchanged:

http://djlab.com/2011/05/dropped-vif-tx-packets-on-xenserver/

http://wiki.xen.org/wiki/Network_Throughput_and_Performance_Guide
 * Xen Homepage: [[http://www.xen.org]]
 * Basic (and low-level) upstream Documentation is [[http://xenbits.xen.org/docs/|here]]. Includes:
  * [[http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html|Xen Hypervisor Command Line Options]]
 * Xen [[http://wiki.xen.org/|Wiki]]:
  * [[http://wiki.xen.org/wiki/Category:Debian|Category:Debian]] contains documents relating to Debian
  * [[http://wiki.xen.org/wiki/Category:Manual|Category:Manual]]
  * [[http://wiki.xen.org/wiki/Xen_Man_Pages|Xen Man Pages]]
  * [[http://wiki.xen.org/wiki/Host_Configuration/Networking|Host Configuration/Networking]]
  * [[http://wiki.xen.org/wiki/Category:FAQ|XenFaq]]
  * [[http://wiki.xen.org/wiki/XenBestPractices|Best Practices for Xen]]
 * Two-way migration between Xen and KVM is described here: [[HowToMigrateBackAndForthBetweenXenAndKvm]]
## This page is referenced within linux-image-amd64.NEWS and linux-image-686-pae.NEWS of the linux-latest source package (since version 60)
CategoryPermalink | CategoryVirtualization | CategorySoftware

= Xen overview =

Xen is an open-source (GPL) type-1 or baremetal hypervisor, which makes it possible to run many instances of an operating system, or indeed different operating systems, in parallel on a single machine (or host).

Some of Xen's key features are:

 * Small footprint and interface (around 1MB in size). Because Xen uses a microkernel design, with a small memory footprint and limited interface to the guest, it is more robust and secure than other hypervisors.
 * Operating system agnostic: most installations run with Linux as the main control stack (aka "domain 0"), but a number of other operating systems can be used instead, including NetBSD and OpenSolaris.
 * Driver isolation: Xen has the capability to allow the main device driver for a system to run inside of a virtual machine. If the driver crashes, or is compromised, the VM containing the driver can be rebooted and the driver restarted without affecting the rest of the system.
 * Paravirtualization: fully paravirtualized guests have been optimized to run as a virtual machine. This allows the guests to run much faster than with hardware extensions (HVM). Additionally, Xen can run on hardware that doesn't support virtualization extensions.

See the Xen Overview on the Xen wiki for more information.

== Guest types ==

Xen supports running two different types of guests: Paravirtualization (PV) and Full or Hardware assisted Virtualization (HVM). Both guest types can be used at the same time on a single Xen system. It is also possible to use techniques used for Paravirtualization in an HVM guest: essentially creating a continuum between PV and HVM. This approach is called PV on HVM. Again see the Xen Overview on the Xen wiki for more information.

== Domain 0 ==

Xen has a special domain called domain 0 which contains drivers for the hardware, as well as the toolstack to control VMs. Domain 0 is often referred to as dom0.

= Domain 0 (host) installation =

== Initial installation ==

Before installing Xen you should install Debian on the host machine. This installation will form the basis of Domain 0.

Installing Debian can be done in the usual way using the DebianInstaller. See the Debian Release Notes for more information on installing Debian.

In order to install Xen you will need either a 32-bit PC (i386) or 64-bit PC (amd64) installation of Debian. Although it is recommended to always run a 64-bit hypervisor, note that this does not mean one has to run a 64-bit domain 0. It is quite common to run a 32-bit domain 0 on a 64-bit hypervisor (a so-called "32on64" configuration).

In general you can install your Domain 0 Debian as you would any other Debian install. The main thing to consider is the partition layout, since this will have an impact on the disk configurations available to the guests. The Xen wiki has some Host OS Installation Considerations which may be of interest. To paraphrase that source: if your Domain 0 Debian system will be primarily used to run guests, a good rule of thumb is to set aside 4GB for the domain 0 root filesystem (/) and some swap space (swap=RAM if RAM<=2GB; swap=2GB if RAM>2GB). The swap space should be determined by the amount of RAM provided to dom0; see the Domain 0 memory section below.
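The swap rule of thumb quoted above can be expressed as a tiny helper. The function name and the GB-based interface are illustrative, not part of any Xen tooling:

```shell
# swap = RAM if RAM <= 2GB, otherwise 2GB (sizes in whole GB).
suggested_swap_gb() {
    ram_gb=$1
    if [ "$ram_gb" -le 2 ]; then
        echo "$ram_gb"
    else
        echo 2
    fi
}

suggested_swap_gb 1   # -> 1
suggested_swap_gb 8   # -> 2
```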

Use the rest of the disk space for a LVM physical volume.

If you have one disk, the following is a reasonable setup: create 3 physical partitions: sda1, sda2, sda3. The root (ext4) and swap will be on the first two and the remainder will be under Logical Volume Management (lvm). With the LVM setup, create 1 physical volume and then one volume group. Give the volume group a name, such as `vg0'.

== Installing Xen packages ==

The Xen and debootstrap software in Squeeze (Debian 6.0) are very much newer than that in Lenny. Because of that, working with Xen becomes a lot easier.

The setup described here is tested for Debian Squeeze and Ubuntu Maverick virtual machines, but should work for a lot more.

First install the hypervisor, a Xen-aware kernel and the Xen tools. This can be done with a metapackage:

{{{
apt-get install xen-linux-system
}}}
Since Debian Wheezy, it is better to install this metapackage:

{{{
apt-get install xen-system
}}}

=== Checking for hardware HVM support ===

Hardware-assisted virtualization requires CPU support for either AMD Secure Virtual Machine (AMD Virtualisation; AMD-V) or Intel Virtual Machine Extensions (VT-x).

On your intended host system, you can run this command:

{{{
egrep '(vmx|svm)' /proc/cpuinfo
}}}
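The command prints matching lines only if a flag is present. As a sketch (the sample flags string below is made up), the distinction between the two vendors' flags looks like this:

```shell
# Report which hardware virtualization flag a cpuinfo flags line carries.
detect_virt() {
    case " $1 " in
        *" svm "*) echo "AMD-V (svm)" ;;
        *" vmx "*) echo "VT-x (vmx)" ;;
        *)         echo "none" ;;
    esac
}

# On a real host: detect_virt "$(grep -m1 '^flags' /proc/cpuinfo)"
detect_virt "fpu vme de pse msr svm lm"   # -> AMD-V (svm)
```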

On squeeze (but not on wheezy), the qemu device model package is also required, to provide the necessary emulation infrastructure for an HVM guest:

{{{
apt-get install xen-qemu-dm-4.0
}}}

== Prioritize booting Xen over native Linux ==

=== Buster ===

No action is required; the package sets the boot priority.

=== Stretch ===

A patch may be required if your server uses a non-English locale. More information at https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=865086

=== Wheezy / Squeeze ===

Debian Wheezy and Squeeze use Grub 2 whose default is to list normal kernels first, and only then list the Xen hypervisor and its kernels.

You can change this to cause Grub to prefer to boot Xen by changing the priority of Grub's Xen configuration script (20_linux_xen) to be higher than the standard Linux config (10_linux). This is most easily done using dpkg-divert:

{{{
dpkg-divert --divert /etc/grub.d/08_linux_xen --rename /etc/grub.d/20_linux_xen
}}}

to undo this:

{{{
dpkg-divert --rename --remove /etc/grub.d/20_linux_xen
}}}

After any update to the Grub configuration you must apply the configuration by running:

{{{
update-grub
}}}

== Networking ==

In order to give network access to guest domains it is necessary to configure the domain 0 network appropriately. The most common configuration is to use a software bridge.

It is recommended that you manage your own network bridge using the Debian network bridge. The Xen wiki page Host Configuration/Networking also has some useful information. The Xen supplied network scripts are not always reliable and will be removed from a later version. They are disabled by default in Debian's packages.

If you have a router that assigns IP addresses through DHCP, the following is a working example of an /etc/network/interfaces file using bridge-utils.

{{{
# The loopback network interface
auto lo
iface lo inet loopback

iface eth0 inet manual

auto xenbr0
iface xenbr0 inet dhcp
   bridge_ports eth0

# other possibly useful options in a virtualized environment
  #bridge_stp off       # disable Spanning Tree Protocol
  #bridge_waitport 0    # no delay before a port becomes available
  #bridge_fd 0          # no forwarding delay

## configure a (separate) bridge for the DomUs without giving Dom0 an IP on it
#auto xenbr1
#iface xenbr1 inet manual
#   bridge_ports eth1
}}}

== Other configuration tweaks ==

=== Domain 0 memory ===

By default on a Xen system the majority of the hosts memory is assigned to dom0 on boot and dom0's size is dynamically modified ("ballooned") automatically in order to accommodate new guests which are started.

However on a system which is dedicated to running Xen guests it is better to instead give dom0 some static amount of RAM and to disable ballooning.

The following examples use 1024M.

In order to do this you must first add the dom0_mem option to your hypervisor command line. This is done by editing /etc/default/grub and adding

{{{
# Xen boot parameters for all Xen boots
GRUB_CMDLINE_XEN="dom0_mem=1024M,max:1024M"
}}}
at the bottom of the file.

Note: on servers with a very large amount of memory, the Xen kernel crashes unless you set a dom0 memory limit. Take care on Wheezy: 1024M is not enough and causes a kernel crash at boot with an out-of-memory message.

Remember to apply the change to the grub configuration by running update-grub!

Then edit /etc/xen/xend-config.sxp to configure the toolstack to match, by changing the following settings:

{{{
(dom0-min-mem 1024)
(enable-dom0-ballooning no)
}}}
With the new xl toolstack, edit /etc/xen/xl.conf and disable autoballooning with autoballoon="0".

At this point you should reboot so that these changes take effect.

=== Domain 0 CPUs ===

There are some useful tweaks for dom0 CPU utilization.

By default all CPUs are shared among dom0 and all domUs (guests). This can hurt dom0 responsiveness if guests consume too much CPU time. To avoid this, it is possible to dedicate one (or more) CPU cores to dom0, and to pin dom0 to them.

Add the following options to /etc/default/grub to allocate one CPU core to dom0:

{{{
dom0_max_vcpus=1 dom0_vcpus_pin
}}}
Make the matching change in /etc/xen/xend-config.sxp:

{{{
(dom0-cpus 1)
}}}

=== Guest behaviour on host reboot ===

By default, when Xen dom0 shuts down or reboots, it tries to save (i.e. hibernate) the state of the domUs. Sometimes this fails, for example because of a lack of disk space in /var, or because of software bugs. Since it is also clean to simply shut the VMs down when the host shuts down, you can make sure they get shut down normally by setting these parameters in /etc/default/xendomains:

{{{
XENDOMAINS_RESTORE=false
XENDOMAINS_SAVE=""
}}}

=== Boot parameters ===

You may also want to pass some boot parameters to Xen when starting up in normal or recovery mode. Add these variables to /etc/default/grub to achieve this:

{{{
# Xen boot parameters for all Xen boots
GRUB_CMDLINE_XEN="something"
# Xen boot parameters for non-recovery Xen boots (in addition to GRUB_CMDLINE_XEN)
GRUB_CMDLINE_XEN_DEFAULT="something else"
}}}

Remember to apply the change to the grub configuration by running update-grub!

More information on the available hypervisor command line options can be found in the upstream documentation.

=== PCI pass-through parameters ===

This information is incomplete for Squeeze and needs to be updated for Wheezy

To enable PCI pass-through, you need to know the BDF (Bus, Device, Function) id of the device. This is obtained with the lspci command, whose output starts each line with the BDF in the format BB:DD.F. To hide a device from Dom0 you will need to pass these boot parameters to Xen when starting. For example, for a Dom0 with 512M of memory and two devices at 01:08.1 and 01:09.2, add these variables to /etc/default/grub:

{{{
# Xen boot parameters for all Xen boots
GRUB_CMDLINE_XEN="dom0_mem=512M pciback.hide=(01:08.1)(01:09.2)"
# Xen boot parameters for non-recovery Xen boots (in addition to GRUB_CMDLINE_XEN)
GRUB_CMDLINE_XEN_DEFAULT="something else"
}}}
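Building the pciback.hide value from a list of BDF ids is mechanical. The small helper below is a sketch (its name is made up; the ids are the ones from the example above):

```shell
# Concatenate BDF ids into a pciback.hide=(..)(..) boot parameter.
build_hide() {
    out=""
    for bdf in "$@"; do
        out="${out}(${bdf})"
    done
    echo "pciback.hide=${out}"
}

build_hide 01:08.1 01:09.2   # -> pciback.hide=(01:08.1)(01:09.2)
```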

For Squeeze use "pciback.hide" (kernels < 2.6.32.10); for Wheezy (not tested yet) use "xen-pciback.hide".

For Squeeze you need to pass all of the devices on the bus; e.g. to pass any device on the 01:DD.F bus, you have to pass all of them: (01:08.1)(01:09.2)(01:09.3) etc.

Remember to apply the change to the grub configuration by running update-grub!

At least in Wheezy (not tested in Squeeze) the xen-pciback module needs to be configured through modprobe.conf and added to the initramfs additionally.

Configure the xen-pciback module by adding a modprobe include file (e.g. /etc/modprobe.d/xen-pciback.conf) with the following content (given that the PCI device would normally be assigned to the e1000e module):

{{{
install e1000e /sbin/modprobe xen-pciback; /sbin/modprobe --first-time --ignore-install e1000e
options xen-pciback hide=(0000:03:00.0)
}}}

Add the xen-pciback module to the initramfs by adding it to /etc/initramfs-tools/modules and running update-initramfs -u afterwards.

Please note that PCI passthrough is broken when MSI is enabled (the default) in Linux kernels < 3.14. Use a Linux kernel >= 3.14 in the DomU/VM, or set pci=nomsi for the DomU/VM kernel as a workaround. See the following thread for detailed information: http://thread.gmane.org/gmane.comp.emulators.xen.user/81944/focus=191437

=== Serial console ===

To get output from GRUB, the Xen hypervisor, the kernel and getty (login prompt) via both VGA and serial console to work, here's an example of the right settings on squeeze:

Edit /etc/default/grub and add:

{{{
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=9600 --word=8 --parity=no --stop=1"
GRUB_TERMINAL="console serial"
GRUB_TIMEOUT=5
GRUB_CMDLINE_XEN="com1=9600,8n1 console=com1,vga"
GRUB_CMDLINE_LINUX="console=tty0 console=hvc0"
}}}

Here is what I used to configure the serial console on a Supermicro X8STi-F motherboard with IPMI and SOL:

{{{
GRUB_CMDLINE_XEN="loglvl=all guest_loglvl=all com1=115200,8n1,0x3e8,5 console=com1,vga"
GRUB_CMDLINE_LINUX="console=hvc0 earlyprintk=xen"
}}}

In /etc/inittab you need at least these lines:

{{{
1:2345:respawn:/sbin/getty 38400 hvc0
2:23:respawn:/sbin/getty 38400 tty1
# NO getty on ttyS0!
}}}

With systemd, you do not have an /etc/inittab any more. systemd will spawn a getty on /dev/hvc0 if you specify console=hvc0 on the kernel command line.

This way, tty1 will show up at the VGA output, and the hvc0 will show up at the serial console.

To keep both Xen and dom0 kernel output on the same tty, just omit the "vga"-related settings from the above setup.

If you need to debug Xen and see a crash dump of the kernel, you can do it using IPMITool if your server has SOL:

ipmitool -I lanplus -H server-ip-address -U your-username sol activate | tee my-log-file.txt

DomU (guest) installation

Using xen-tools

xen-tools is a set of scripts which can easily create fully configured Xen guest domains.

Once you have installed dom0 you can install xen-tools on your host with:

apt-get install xen-tools

To configure xen-tools, you can edit /etc/xen-tools/xen-tools.conf which contains default values that the xen-create-image script will use. The xen-create-image(8) manual page contains information on the available options.

To have the domU images saved under a different path and to enable setting the superuser password during the initial build, edit /etc/xen-tools/xen-tools.conf and uncomment these lines:

dir = /home/xen/
passwd = 1

Then you can create virtual machines with this command:

xen-create-image --hostname <hostname> --ip <ip> --vcpus 2 --pygrub --dist <lenny|squeeze|maverick|whatever>

To start the created VM run the command:

xl create /etc/xen/<hostname>.cfg

To erase a VM image (including the main directory) run the command:

xen-delete-image <hostname>
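Putting the commands from this section together, a guest's lifecycle looks like the sketch below. The hostname vm1, the IP address and the distribution are placeholders, and the run wrapper only prints each command (a dry run) so the sequence can be read without a Xen host; remove the wrapper to execute for real.

```shell
# Dry run: print each command instead of executing it.
run() { echo "+ $*"; }

# Create the guest image and its config file (/etc/xen/vm1.cfg).
run xen-create-image --hostname vm1 --ip 192.0.2.10 --vcpus 2 --pygrub --dist bullseye

# Boot the guest and attach to its console.
run xl create /etc/xen/vm1.cfg
run xl console vm1

# Remove the guest image again once it is no longer needed.
run xen-delete-image vm1
```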

Possible problems and bugs

  • If your domU kernel happens to miss support for the xvda* disk devices (the xen-blkfront driver), use the --scsi option that makes the VM use normal SCSI HD names like sda*. You can also set scsi=1 in /etc/xen-tools/xen-tools.conf to make this the default.

  • When using xen-tools with --role on Squeeze be aware of #588783: 'Should mount /dev/pts when creating image'. This is fixed in Wheezy.

Using Debian Installer

The Xen wiki page Debian Guest Installation Using DebianInstaller contains instructions on how to install a Xen DomU from Lenny onwards using Debian Installer.

Booting guests

The Xen wiki page PvGrub2 contains instructions on how to use PV Grub2 with PV guests from Jessie onwards.

Upgrading domU kernels

A trivial way to boot a domU is to use a kernel from /boot of the dom0. Unfortunately, this means that upgrading the kernel used by a domU has to be synchronized with the dom0 kernel: if the dom0 kernel is upgraded, the domU can no longer load its kernel modules, since the versions differ. Conversely, if the kernel modules are upgraded inside the domU, the kernel image in dom0 no longer matches them either.

One way is to upgrade the kernel inside the domU, and propagate by hand the kernel image into a separate directory of dom0, where the configuration for the domU will look for it.

A better way is to use PyGrub or PvGrub: dom0 just holds a version of GRUB and uses it to load the kernel image from the domU image itself. This avoids having to propagate kernel upgrades by hand: the domU can simply upgrade its kernel and reboot, without even having to destroy the domain. PvGrub is more secure than PyGrub, because PvGrub runs inside the domU itself and thus has the same isolation as the target domU, while PyGrub runs inside dom0, where a compromise would be far more serious.
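As a sketch, the choice between the two shows up in the guest's configuration file. The file name below is hypothetical and everything except the boot-related lines is left out; the grub-x86_64-xen.bin path is the one shipped by Debian's grub-xen-host package:

```text
# /etc/xen/example.cfg (boot-related lines only)

# PyGrub: the bootloader runs in dom0 and reads the guest's grub config.
bootloader = "pygrub"

# PvGrub2 instead: a GRUB image runs inside the guest itself (64-bit PV).
# kernel = "/usr/lib/grub-xen/grub-x86_64-xen.bin"
```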

Upgrading/transition

See also: Debian Release Notes

Upgrading a server that runs both a Lenny Dom0 and Lenny DomUs to Squeeze is fairly straightforward. There are, however, a few catches that one needs to be aware of: Reference

  • Dom0 Issues

    • The Xen packages will not upgrade themselves. They must be manually removed and the latest Xen packages must be installed from the Debian Squeeze repository through apt.
    • The xen.independent_wallclock sysctl setting is not available in the newer Squeeze kernels supporting pvops. If you have relied on it, you will have to run ntpd in the dom0 and the domUs instead. source

  • DomU Issues

    • A Squeeze DomU will not be able to boot on the Xen-3.2 package supplied by Lenny because this older version will not support grub2. A Lenny DomU can be upgraded to Squeeze while running on a Lenny Dom0 but it will not be able to be booted until the Dom0 has been upgraded to the Xen-4.0 packages.
    • The entries added to chain load grub1 to grub2 will not allow pygrub to find the correct partition. Before rebooting a freshly upgraded Squeeze DomU, make sure to rename or remove /boot/grub/menu.lst. This will force pygrub to look for the /boot/grub/grub.cfg file which will be in the correct format.
    • A qcow image created with qcow-create and the BACKING_FILENAME option on Lenny will not be able to boot on Squeeze, because the ability to use qcow images as backing files has been removed in Xen versions after 3.2. Worse, if you try to boot such an image on Squeeze, Xen will silently convert the qcow image's L1 table to big endian (you'll find "Converting image to big endian L1 table" in the logfiles), effectively rendering the image unusable on both Squeeze and Lenny!

Kernel version compatibility

In Debian 6.0 (squeeze) the Linux '686-bigmem' and 'amd64' kernel images have paravirt_ops-based Xen domU support. From Debian 7 (wheezy) onward, the '686-pae' and 'amd64' kernel images support running as either dom0 or domU, and no Xen-specific kernel is provided.

When you create an image for a modern Debian or Ubuntu domU, it will include a kernel with pv_ops domU support. The domU will therefore not use a Xen-specific kernel but the "stock" one, which is capable of running on the Xen hypervisor.

Older releases

Xen Installation on Debian 5.0 ( Lenny )

Xen Installation on Debian 4.0 ( Etch )

The page DebianInstaller/Xen contains instructions on how to install Xen Dom0 and Etch DomU with DebianInstaller.

Package maintenance

Debian's Xen packages are maintained by the pkg-xen project. (developers' mailing list)

The Debian Developer's Package Overview page lists source packages that are maintained by the team.

Common errors

dom0 automatic reboots

  • {i} Note: if Xen crashes and reboots automatically, you may want to use the noreboot Xen option to prevent the automatic reboot, so that the crash output stays on screen.

Edit /etc/default/grub and add the "noreboot" option to GRUB_CMDLINE_XEN, for example:

GRUB_CMDLINE_XEN="noreboot"

Error "Device ... (vif) could not be connected"

You need to configure some basic networking between dom0 and domU.

The recommended way to do this is to configure bridging in /etc/network/interfaces. See BridgeNetworkConnections and/or the Xen wiki page Host Configuration/Networking for details.
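As a sketch, a minimal bridged setup in /etc/network/interfaces could look like the following; the interface names and the use of DHCP are examples only, and the bridge_ports stanza needs the bridge-utils package installed:

```text
# The physical NIC carries no address itself; it is enslaved to the bridge.
iface eth0 inet manual

# The bridge that dom0 and the guests' vif interfaces attach to.
auto xenbr0
iface xenbr0 inet dhcp
    bridge_ports eth0
```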

  • {i} Note: The use of the 'network-script' option in /etc/xen/xend-config.sxp is no longer recommended.

'clocksource/0: Time went backwards'

If a domU crashes or freezes while uttering the famous last words 'clocksource/0: Time went backwards', see Xen/Clocksource.

Error "unknown compression format"

The Xen system may fail to load a domU kernel, reporting the error:

ERROR Invalid kernel: xc_dom_probe_bzimage_kernel: unknown compression format
    xc_dom_bzimageloader.c:394: panic: xc_dom_probe_bzimage_kernel: unknown compression format

This indicates that your toolstack is not able to cope with the compression scheme used by the kernel binary. Most commonly this occurs with newer kernels which use xz compression when booting on older Xen installations. Debian switched to xz compression from package version 3.6.8-1~experimental.1 onwards.

Xz is supported by the version of Xen in Debian 7 (Wheezy) onwards. If you are running Debian guests on a non-Debian host then you will need to consult the non-Debian host's provider.
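To see which compression a kernel image actually uses, you can inspect its first bytes: a gzip-compressed image starts with 1f 8b, an xz-compressed one with fd 37 7a 58 5a 00 (that is, a 0xFD byte followed by "7zXZ"). The sketch below builds a throwaway file carrying the xz magic so the check can be demonstrated anywhere; on a real system, point the same check at /boot/vmlinuz-<version>.

```shell
# Build a throwaway file that starts with the xz magic bytes
# (FD 37 7A 58 5A 00); a real check would target /boot/vmlinuz-<version>.
f="$(mktemp)"
printf '\3757zXZ\0payload-would-follow' > "$f"

# The printable part of the xz magic is "7zXZ".
if head -c 6 "$f" | grep -q '7zXZ'; then
    echo "xz-compressed"
else
    echo "not xz-compressed"
fi
```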


Packetloss / txqueuelen 32

If you experience high packet loss, check the qlen of your vif interfaces (in ifconfig output it is called txqueuelen). The default is 32, but physical interfaces, and the interface you'll have inside your domU, usually use something much larger, like 1000. You can change the setting with:

ip link set qlen 1000 dev vif-mydomU
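To apply this to every vif at once, a small loop over /sys/class/net can help. The sketch below only prints the ip commands (a dry run) so they can be reviewed first; drop the echo to apply them as root.

```shell
# Print (dry run) the ip command that would raise the qlen of every vif
# found under the given directory (default /sys/class/net).
bump_vif_qlen() {
    for dev in "${1:-/sys/class/net}"/vif*; do
        [ -e "$dev" ] || continue          # no vif interfaces present
        echo ip link set qlen 1000 dev "$(basename "$dev")"
    done
}

bump_vif_qlen
```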

It's documented in several places but the default is still unchanged:

http://djlab.com/2011/05/dropped-vif-tx-packets-on-xenserver/

http://wiki.xen.org/wiki/Network_Throughput_and_Performance_Guide

PV drivers on HVM guest

It may be possible to build the PV drivers for use on HVM guests. These drivers are called unmodified_drivers and are part of the xen-unstable.hg repository. You can fetch the repository using mercurial thus:

hg clone http://xenbits.xen.org/xen-unstable.hg

The drivers reside under xen-unstable.hg/unmodified_drivers/linux-2.6. The README in this directory gives compilation instructions.

A somewhat dated, detailed set of instructions for building these drivers can be found here:

http://wp.colliertech.org/cj/?p=653

Resources


CategoryPermalink | CategoryVirtualization | CategorySoftware