Differences between revisions 2 and 213 (spanning 211 versions)
Revision 2 as of 2005-11-26 22:55:00
Size: 1573
Editor: PeMac
Comment:
Revision 213 as of 2012-05-28 15:05:05
Size: 17061
Editor: ?IanCampbell
Comment: Add initial Debian install to the dom0 install section, break the dom0 section into subsections.
Modern computers are sufficiently powerful to use virtualization to present the illusion of many smaller virtual machines (VMs), each running a separate operating system instance. Successful partitioning of a machine to support the concurrent execution of multiple operating systems poses several challenges. Firstly, virtual machines must be isolated from one another: it is not acceptable for the execution of one to adversely affect the performance of another. This is particularly true when virtual machines are owned by mutually untrusting users. Secondly, it is necessary to support a variety of different operating systems to accommodate the heterogeneity of popular applications. Thirdly, the performance overhead introduced by virtualization should be small.

Xen is a virtual machine monitor for x86 that supports execution of multiple guest operating systems with unprecedented levels of performance and resource isolation. Xen is Open Source software, released under the terms of the GNU General Public License. We have fully functional ports of Linux 2.4 and 2.6 running over Xen, and regularly use them for running demanding applications like MySQL, Apache and PostgreSQL. Any Linux distribution (RedHat, SuSE, Debian, Mandrake) should run unmodified over the ported OS.

In addition to Linux, members of Xen's user community have contributed or are working on ports to other operating systems such as NetBSD (Christian Limpach), FreeBSD (Kip Macy) and Plan 9 (Ron Minnich).


 * http://www.cl.cam.ac.uk/Research/SRG/netos/xen/

See also: ["Qemu"]
#languages en
||<tablewidth="100%"style="border: 0px hidden ;">~-Translation(s): [[id/Xen|Indonesian]] -~ ||<style="border: 0px hidden ; text-align: right;"> (!) [[/Discussion]] ||
----
 . <<TableOfContents(2)>>

= Xen Overview =
Xen is an open-source (GPL) type-1 or bare-metal [[http://en.wikipedia.org/wiki/Hypervisor|hypervisor]], which makes it possible to run many instances of an operating system, or indeed different operating systems, in parallel on a single machine (or host).

Some of Xen's key features are:
 * Small footprint and interface (around 1MB in size). Because Xen uses a microkernel design, with a small memory footprint and a limited interface to the guest, it is more robust and secure than other hypervisors.
 * Operating system agnostic: Most installations run with Linux as the main control stack (aka "domain 0"). But a number of other operating systems can be used instead, including NetBSD and OpenSolaris.
 * Driver Isolation: Xen has the capability to allow the main device driver for a system to run inside of a virtual machine. If the driver crashes, or is compromised, the VM containing the driver can be rebooted and the driver restarted without affecting the rest of the system.
 * Paravirtualization: Fully paravirtualized guests have been optimized to run as a virtual machine. This allows the guests to run much faster than with hardware extensions (HVM). Additionally, Xen can run on hardware that doesn't support virtualization extensions.

See the [[http://wiki.xen.org/wiki/Xen_Overview|Xen Overview]] on the Xen wiki for more information.

== Guest types ==

Xen supports running two different types of guests: Paravirtualization (PV) and Full or Hardware assisted Virtualization (HVM). Both guest types can be used at the same time on a single Xen system. It is also possible to use techniques used for Paravirtualization in an HVM guest: essentially creating a continuum between PV and HVM. This approach is called PV on HVM. Again see the [[http://wiki.xen.org/wiki/Xen_Overview|Xen Overview]] on the Xen wiki for more information.

== Domain 0 ==

Xen has a special domain called domain 0 which contains drivers for the hardware, as well as the toolstack to control VMs. Domain 0 is often referred to as `dom0`.

= Domain 0 (Host) Installation =

== Initial Host Installation ==

Before installing Xen you should install Debian on the host machine. This installation will form the basis of Domain 0.

Installing Debian can be done in the usual way using the [[DebianInstaller]]. See the [[http://www.debian.org/releases/stable/releasenotes|Debian Release Notes]] for more information on installing Debian.

In order to install Xen you will need either a [[http://www.debian.org/releases/stable/i386/release-notes/|32-bit PC (i386)]] or [[http://www.debian.org/releases/stable/amd64/release-notes/|64-bit PC (amd64)]] installation of Debian. Although it is recommended to always run a 64-bit hypervisor, note that this does not mean one has to run a 64-bit domain 0. It is quite common to run a 32-bit domain 0 on a 64-bit hypervisor (a so-called "32on64" configuration).
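
To check which flavour an existing installation is (a quick check using standard Debian tools):
{{{
dpkg --print-architecture    # prints i386 or amd64
}}}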

In general you can install your Domain 0 Debian as you would any other Debian install. However, the Xen wiki has some [[http://wiki.xen.org/wiki/Host_OS_Install_Considerations|Host OS Installation Considerations]] which may be of interest. The main thing to consider is the partition layout of the host, since this will have an impact on the available guest disk configurations.
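
For example, one common layout (an illustration only, assuming LVM) is to create a volume group with plenty of unallocated space and then carve out a logical volume per guest as needed:
{{{
# Example only: device names and sizes will differ on your system.
pvcreate /dev/sda3                    # use a spare partition as an LVM physical volume
vgcreate vg0 /dev/sda3                # create a volume group for guest disks
lvcreate -L 10G -n guest1-disk vg0    # one logical volume per guest disk
}}}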

If you have already installed Debian then continue on to the next section.

== Installing Xen Packages ==

The Xen and debootstrap software in Squeeze (Debian 6.0) are much newer than the versions in Lenny, which makes working with Xen considerably easier.

The setup described here has been tested with Debian Squeeze and Ubuntu Maverick virtual machines, but should work for many others.

First install the hypervisor, a Xen-aware kernel and the Xen tools. This can be done with a single metapackage:

{{{
apt-get install xen-linux-system
}}}

To get Xen HVM support on Squeeze, the qemu device model package, which provides the necessary emulation infrastructure for an HVM guest, is also required:
{{{
apt-get install xen-qemu-dm-4.0
}}}
This is no longer needed in Wheezy since the device model is part of the Xen packages.

== Prioritise Booting Xen Over Native ==

Debian Squeeze uses GRUB 2, whose default is to list normal kernels first and only then the Xen hypervisor and its kernels.

You can change the default kernel to boot in two ways:

 * Change the priority of GRUB's Xen configuration script ({{{20_linux_xen}}}) to be higher than the standard Linux config ({{{10_linux}}}):

{{{
dpkg-divert --divert /etc/grub.d/08_linux_xen --rename /etc/grub.d/20_linux_xen
}}}

to undo this:

{{{
dpkg-divert --rename --remove /etc/grub.d/20_linux_xen
}}}

 * Modify the value of GRUB_DEFAULT in the file /etc/default/grub. The value is an integer starting at 0 representing the order of the menuentry item to boot. You can see a list of all menuentry values in order by typing

{{{
grep menuentry /boot/grub/grub.cfg
}}}

Count down to the first Xen entry, starting at 0, and enter that number in /etc/default/grub. Note that this procedure is not robust against the installation or removal of kernel versions, which will cause the Xen entries to get different numbers; for this reason, reordering the priorities using {{{dpkg-divert}}} as above is preferable.
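
For example, if the output lists two native kernel entries before the first Xen entry, the Xen entry has index 2 (counting from 0), so in /etc/default/grub you would set:
{{{
GRUB_DEFAULT=2
}}}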

After either of these procedures, update the GRUB configuration:

{{{
update-grub
}}}

If you wish to improve this process, there is an open bug discussing the default values after a Xen installation: [[DebianBug:603832|#603832]].
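
Once the machine has been rebooted into the Xen entry, you can confirm that the hypervisor is running with the toolstack's list command ({{{xm}}} in Xen 4.0; the output below is illustrative):
{{{
# xm list
Name                    ID   Mem VCPUs      State   Time(s)
Domain-0                 0  1024     4     r-----      45.3
}}}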

== Configure Networking ==

In order to give network access to guest domains it is necessary to configure the domain 0 network appropriately. The most common configuration is to use a software bridge.

It is recommended that you manage your own network bridge using the [[BridgeNetworkConnections|Debian network bridge]]. The Xen wiki page [[http://wiki.xen.org/wiki/Host_Configuration/Networking|Host Configuration/Networking]] also has some useful information. The Xen-supplied network scripts are not always reliable and will be removed in a later version.
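
As a minimal sketch (interface names and addresses are examples; the bridge-utils package must be installed), a bridged setup in /etc/network/interfaces might look like this:
{{{
# /etc/network/interfaces (example addresses)
auto lo
iface lo inet loopback

auto xenbr0
iface xenbr0 inet static
    bridge_ports eth0
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
}}}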

== Other configuration tweaks ==

=== Configure Boot Parameters ===

You may also want to pass some boot parameters to Xen when starting up in normal or recovery mode. Add these variables to `/etc/default/grub` to achieve this:
{{{
# Xen boot parameters for all Xen boots
GRUB_CMDLINE_XEN="something"
# Xen boot parameters for non-recovery Xen boots (in addition to GRUB_CMDLINE_XEN)
GRUB_CMDLINE_XEN_DEFAULT="something else"
}}}

After editing GRUB configuration, you must apply it by running:
{{{
update-grub
}}}

=== Configure Domain 0 Memory ===

The `/etc/xen/xend-config.sxp` config file has options to set the memory and CPU usage for your dom0, which you might want to change.
To reduce dom0 memory usage at boot, use the dom0_mem hypervisor option in the aforementioned GRUB_CMDLINE_XEN variable (an example follows below). The Xen wiki also advises disabling dom0 memory ballooning and setting a minimum amount of memory in /etc/xen/xend-config.sxp (1024M is an example):

{{{
(dom0-min-mem 1024)
(enable-dom0-ballooning no)
}}}
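
For example, to give dom0 a fixed 1024M at boot (an illustrative value matching the setting above), the hypervisor command line in /etc/default/grub would include:
{{{
GRUB_CMDLINE_XEN="dom0_mem=1024M"
}}}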

=== Disable OS probing ===

To avoid getting boot entries for each virtual machine you install on a volume group, disable the GRUB OS prober.

/!\ ToDo: does this problem still happen and under what circumstances? Bug number?

Note that on a multi-boot machine (one that also runs Windows, for example) this will also remove the boot entries for the other operating systems, which might not be what you want.

Edit /etc/default/grub and add:
{{{
# Disable OS prober to prevent virtual machines on logical volumes from appearing in the boot menu.
GRUB_DISABLE_OS_PROBER=true
}}}

=== Configure guest behaviour on host reboot ===

By default, when the Xen dom0 shuts down or reboots, it tries to save the state of the domUs. Sometimes this fails, for example because of a lack of disk space in /var or because of software bugs. Since it is also clean to simply shut the VMs down when the host shuts down, you can make sure they are shut down normally by setting these parameters in /etc/default/xendomains:

{{{
XENDOMAINS_RESTORE=false
XENDOMAINS_SAVE=""
}}}

=== Enable Serial Console ===

To get output from GRUB, the Xen hypervisor, the kernel and getty (the login prompt) via ''both'' the VGA and serial consoles, here is an example of working settings on Squeeze:

Edit {{{/etc/default/grub}}} and add:

{{{
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=9600 --word=8 --parity=no --stop=1"
GRUB_TERMINAL="console serial"
GRUB_TIMEOUT=5
GRUB_CMDLINE_XEN="com1=9600,8n1 console=com1,vga"
GRUB_CMDLINE_LINUX="console=tty0 console=hvc0"
}}}

Here's what I used to configure the serial console (for a Supermicro X8STi-F motherboard with IPMI and SOL):

{{{
GRUB_CMDLINE_XEN="loglvl=all guest_loglvl=all com1=115200,8n1,0x3e8,5 console=com1,vga"
GRUB_CMDLINE_LINUX="console=hvc0 earlyprintk=xen"
}}}

In {{{/etc/inittab}}} you need at least these lines:

{{{
1:2345:respawn:/sbin/getty 38400 hvc0
2:23:respawn:/sbin/getty 38400 tty1
# NO getty on ttyS0!
}}}

This way, tty1 will show up on the VGA output and hvc0 on the serial console.

To keep both Xen and dom0 kernel output on the same tty, just omit the "vga"-related settings from the above setup.

If you need to debug Xen and see a crash dump of the kernel, you can do it using IPMITool if your server has SOL:

{{{
ipmitool -I lanplus -H server-ip-address -U your-username sol activate | tee my-log-file.txt
}}}

= Installation as a DomU (guest) =

== Using xen-tools ==

DebianPkg:xen-tools is a set of scripts which can easily create fully configured Xen guest domains.

Once you have installed dom0 you can install xen-tools on your host with:
{{{
apt-get install xen-tools
}}}

Then you can create virtual machines with this command:
{{{
xen-create-image --hostname <hostname> --ip <ip> --vcpus 2 --pygrub --dist <lenny|squeeze|maverick|whatever>
}}}

To configure xen-tools, you can edit `/etc/xen-tools/xen-tools.conf` which contains default values that the xen-create-image script will use. The xen-create-image(8) manual page contains information on the available options.
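
As an illustration (the values are examples, not recommendations), a few commonly changed defaults in /etc/xen-tools/xen-tools.conf:
{{{
# Example values only; adjust to your storage and memory situation.
dir    = /home/xen   # where guest images are stored
size   = 4Gb         # root filesystem size
memory = 256Mb       # guest RAM
swap   = 128Mb       # guest swap
dist   = squeeze     # default distribution to install
}}}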

=== Possible problems and bugs ===

 * If your domU kernel happens to miss support for the `xvda*` disk devices (the `xen-blkfront` driver), use the `--scsi` option that makes the VM use normal SCSI HD names like `sda*`. You can also set `scsi=1` in `/etc/xen-tools/xen-tools.conf` to make this the default.

 * When using xen-tools with ''--role'' on Squeeze, be aware of [[DebianBug:588783|#588783]]: 'Should mount /dev/pts when creating image'. This is fixed in Wheezy.

== Using Debian Installer ==

The Xen wiki page [[http://wiki.xen.org/wiki/Debian_Guest_Installation_Using_Debian_Installer|Debian Guest Installation Using Debian Installer]] contains instructions on how to install a Xen DomU from Lenny onwards using the [[DebianInstaller]].

= Upgrading/transition =

See also: [[http://www.debian.org/releases/stable/amd64/release-notes/ch-information.en.html#xen-upgrades|Debian Release Notes]]

Upgrading a server that runs both a Lenny Dom0 and Lenny DomUs to Squeeze is fairly straightforward, but there are a few catches to be aware of: [[http://net153.net/blog/20101217-13-14.html|Reference]]

 * ~+Dom0 Issues+~
  * The Xen packages will not upgrade themselves. They must be manually removed and the latest Xen packages must be installed from the Debian Squeeze repository through apt.
  * pygrub in Xen-4.0 will need to be patched as per [[DebianBug:599243|#599243]]
  * The xen.independent_wallclock sysctl setting is not available in the newer Squeeze kernels supporting pvops. If you relied on it, you will have to run ntpd in the dom0 and the domUs instead. [[http://my.opera.com/marcomarongiu/blog/2011/01/05/independent-wallclock-in-xen-4|source]]

 * ~+DomU Issues+~
  * A Squeeze DomU will not be able to boot on the Xen-3.2 package supplied by Lenny because this older version does not support grub2. A Lenny DomU can be upgraded to Squeeze while running on a Lenny Dom0, but it will not be able to boot until the Dom0 has been upgraded to the Xen-4.0 packages.
  * The entries added to chain-load grub1 to grub2 will not allow pygrub to find the correct partition. Before rebooting a freshly upgraded Squeeze DomU, make sure to rename or remove /boot/grub/menu.lst (a minimal example command follows this list). This will force pygrub to look for the /boot/grub/grub.cfg file, which will be in the correct format.
  * A qcow image created with qcow-create and the BACKING_FILENAME option on Lenny will not be able to boot on Squeeze because the ability to use qcow images as backing files has been removed in Xen versions after 3.2. Also, if you try to boot such an image on Squeeze, Xen will silently convert the qcow image's L1 table to big endian (you'll find "Converting image to big endian L1 table" in the logfiles), effectively rendering the image unusable on both Squeeze and Lenny!
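
As a minimal sketch of the menu.lst step above (run inside the freshly upgraded DomU before rebooting it; the backup name is arbitrary):
{{{
mv /boot/grub/menu.lst /boot/grub/menu.lst.pre-squeeze
}}}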

== Note on kernel version compatibility ==

The new 2.6.32 kernel images have [[http://wiki.xensource.com/xenwiki/XenParavirtOps|paravirt_ops-based]] Xen dom0 and domU support.

When you create an image for a modern Debian or Ubuntu domU, it will include a kernel that has pv_ops domU support; it will therefore not use a Xen-specific kernel but the "stock" one, which is capable of running on Xen's hypervisor.

== Possible problems and bugs ==

= Older Releases =

[[Xen_Installation_on_lenny|Xen Installation on Debian 5.0 (Lenny)]]

[[Xen_Installation_on_etch|Xen Installation on Debian 4.0 (Etch)]]

The page [[DebianInstaller/Xen]] contains instructions on how to install Xen Dom0 and Etch DomU with DebianInstaller.

= Package maintenance =
Debian's Xen packages are maintained by the [[http://alioth.debian.org/projects/pkg-xen/|pkg-xen]] project. ([[http://lists.alioth.debian.org/mailman/listinfo/pkg-xen-devel|developers' mailing list]])

The [[http://qa.debian.org/developer.php?login=pkg-xen-devel@lists.alioth.debian.org|Debian Developer's Package Overview]] page lists source packages that are maintained by the team.

= Common Errors =

== dom0 automatic reboots ==
 . {i} Note: if Xen crashes and the machine reboots automatically, you may want to use the {{{noreboot}}} Xen option to prevent the automatic reboot, so that the error output stays visible.

Edit /etc/default/grub and add the "noreboot" option to GRUB_CMDLINE_XEN, for example:

{{{
GRUB_CMDLINE_XEN="noreboot"
}}}

== Error "Device ... (vif) could not be connected" ==

You need to configure some basic networking between dom0 and domU.

The recommended way to do this is to configure bridging in `/etc/network/interfaces`. See [[BridgeNetworkConnections]] and/or the Xen wiki page [[http://wiki.xen.org/wiki/Host_Configuration/Networking|Host Configuration/Networking]] for details.

 . {i} Note: The use of the 'network-script' option in /etc/xen/xend-config.sxp is no longer recommended.

== 'clocksource/0: Time went backwards' ==

If a domU crashes or freezes while uttering the famous last words 'clocksource/0: Time went backwards', see [[Xen/Clocksource]].

= PV drivers on HVM guest =

It may be possible to build the PV drivers for use on HVM guests. These drivers are called unmodified_drivers and are part of the xen-unstable.hg repository. You can fetch the repository using Mercurial:

{{{
hg clone http://xenbits.xen.org/xen-unstable.hg
}}}
The drivers reside under xen-unstable.hg/unmodified_drivers/linux-2.6. The README in this directory gives compilation instructions.

A somewhat dated, detailed set of instructions for building these drivers can be found here:

http://wp.colliertech.org/cj/?p=653

= Resources =

 * Xen Homepage: [[http://www.xen.org]]
 * Basic (and low-level) upstream documentation is available [[http://xenbits.xen.org/docs/|here]]
 * Xen [[http://wiki.xen.org/|Wiki]]:
  * [[http://wiki.xen.org/wiki/Category:Debian|Category:Debian]] contains documents relating to Debian
  * [[http://wiki.xen.org/wiki/Category:Manual|Category:Manual]]
  * [[http://wiki.xen.org/wiki/Xen_Man_Pages|Xen Man Pages]]
  * [[http://wiki.xen.org/wiki/Host_Configuration/Networking|Host Configuration/Networking]]
  * [[http://wiki.xen.org/wiki/Category:FAQ|XenFaq]]
  * [[http://wiki.xen.org/wiki/XenBestPractices|Best Practices for Xen]]
 * German Wiki on Xen: http://www.xen-info.de/wiki
 * Additional information required:
  * Compiling a custom Xen DomU kernel. (e.g. adding tun device)
 * Two-way migration between Xen and KVM is described here: [[HowToMigrateBackAndForthBetweenXenAndKvm]]
 * [[http://www2.fh-lausitz.de/launic/comp/xen/101212.xen4_update_vm|Script, notes to migrate para-vm to xen-4.0]]
