


Xen Overview

Modern computers are sufficiently powerful to use virtualization to present the illusion of many smaller virtual machines (VMs), each running a separate operating system instance. Successful partitioning of a machine to support the concurrent execution of multiple operating systems poses several challenges. Firstly, virtual machines must be isolated from one another: it is not acceptable for the execution of one to adversely affect the performance of another. This is particularly true when virtual machines are owned by mutually untrusting users. Secondly, it is necessary to support a variety of different operating systems to accommodate the heterogeneity of popular applications. Thirdly, the performance overhead introduced by virtualization should be small.

Xen is a virtual machine monitor for x86 that supports execution of multiple guest operating systems with high levels of performance and resource isolation. Xen is Open Source software, released under the terms of the GNU General Public License. A fully functional port of Linux 2.6 runs over Xen and is regularly used for running demanding applications like MySQL, Apache and PostgreSQL. Any Linux distribution (RedHat, SuSE, Debian, Mandrake) should run unmodified over the ported OS.

In addition to Linux, members of Xen's user community have contributed or are working on ports to other operating systems such as NetBSD (Christian Limpach), FreeBSD (Kip Macy) and Plan 9 (Ron Minnich).

Different types of virtualization offered by Xen

There are two different types of virtualization offered by Xen:

  • Para-virtualization and
  • Hardware-supported virtualization

Para-virtualization

A term used to describe a virtualization technique that allows the operating system to be aware that it is running on a hypervisor instead of base hardware. The operating system must be modified to accommodate the unique situation of running on a hypervisor instead of basic hardware.

Hardware Virtual Machine

A term used to describe an operating system that is running in a virtualized environment unchanged and unaware that it is not running directly on the hardware. Special hardware is required to allow this, thus the term HVM.

(Source: What is Xen Hypervisor, www.xen.org)

Compatibility

  • dom0 works on kernels 2.6.18 from Etch and 2.6.26 from Lenny, but not with kernel 2.6.24 from Etch-n-half;
  • domU should work with all kernels (2.6.18 and 2.6.24 from Etch and 2.6.26 from Lenny);
  • a Lenny dom0 on amd64 can run any domU (Etch or Lenny, i386 or amd64);
  • a Lenny dom0 on i386 can, or should be able to, run any 32-bit domU (Etch or Lenny).
  • an Etch dom0 (2.6.18-*-xen) can only run a 32-bit domU when it is i386 itself; a 64-bit Etch dom0 (using the amd64 kernel) can run a 64-bit domU and also a 32-bit domU, but the latter only when it uses the amd64 kernel with a 32-bit userland!

Installation on lenny

The short of the story is the following:

  1. Install Lenny. Finish the installation as you would normally do
  2. apt-get install xen
  3. apt-get install xen-utils
  4. apt-get install xen-tools
  5. install a xen dom0-capable kernel
  6. reboot into xen
  7. create your domains
  8. manage your domains

Dom0 (host)

After you are done installing the base OS, xen-utils and xen-tools, you will need to install a Xen-capable dom0 kernel. The Lenny kernel is 2.6.26; its -xen variant contains patches from SuSE for dom0 support.

The xen-linux-system packages of interest are (Install the correct one for your architecture):
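The package list itself is missing here. On Lenny the names follow the scheme xen-linux-system-<version>-xen-<arch>; for example, on amd64 (package name assumed from that scheme, verify with apt-cache search xen-linux-system):

```
apt-get install xen-linux-system-2.6.26-2-xen-amd64
```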

Serial console access

To get output from grub, XEN, the kernel and getty (login prompt) via both vga and serial console to work, here's an example of the right settings when using Lenny kernels and Xen 3.2:

In /boot/grub/menu.lst:

serial --unit=0 --speed=9600 --word=8 --parity=no --stop=1
terminal --timeout=5 serial console
[...]
title           Xen 3.2-1-amd64 / Debian GNU/Linux, kernel 2.6.26-2-xen-amd64
root            (hd0,0)
kernel          /boot/xen-3.2-1-amd64.gz com1=9600,8n1 console=com1,vga
module          /boot/vmlinuz-2.6.26-2-xen-amd64 root=/dev/md0 ro console=tty0 console=hvc0
module          /boot/initrd.img-2.6.26-2-xen-amd64

In contrast to the Etch configuration, there's no ttyS0 in the vmlinuz line!

In /etc/inittab you need at least these lines:

1:2345:respawn:/sbin/getty 38400 hvc0
2:23:respawn:/sbin/getty 38400 tty1
# NO getty on ttyS0!

The tty1 will show up at the vga output, and the hvc0 will show up at the serial console.

DomU (guest)

The Lenny Debian Installer fully supports installation of 32 bit guests under Xen using the netboot/xen variant. Images are available on any Debian mirror in the installer directory and contain a kernel, installer ramdisk and an example Xen configuration file. To install, fetch the xm-debian.cfg configuration file, edit to suit your tastes, and start the guest with the install=true option plus an optional (but strongly recommended) install-mirror=ftp://ftp.XX.debian.org/debian.

xm create -c xm-debian.cfg install=true install-mirror=ftp://ftp.XX.debian.org/debian

Newer images are also available from the daily builds. After grabbing the xm-debian.cfg configuration file and editing it to suit your tastes, start the guest with an additional install-installer=http://people.debian.org/~joeyh/d-i/images/daily/ to manually direct it to the daily builds:

WARNING, if you do not change the hard disks option on xm-debian.cfg this WILL overwrite your dom0 instead of installing to your domU. YOUR MACHINE WILL BE DESTROYED.

xm create -c xm-debian.cfg install=true \
  install-mirror=ftp://ftp.XX.debian.org/debian \
  install-installer=http://people.debian.org/~joeyh/d-i/images/daily/

See the comments in the configuration file for additional installation options.

Another way of creating a Lenny domU is the following:

xen-create-image --hostname=vanila --size=8Gb --dist=lenny --memory=512M --ide --dhcp

Please note that the --dir option may be required; it specifies the directory where xen-create-image will store your disk images. If you wish to specify a fixed IP address, use --ip xxx.xxx.xxx.xxx instead of the --dhcp option.
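For instance, a variant of the command above with a fixed IP and an explicit image directory (address and path are illustrative):

```
xen-create-image --hostname=vanila --size=8Gb --dist=lenny \
  --memory=512M --ide --ip 192.0.2.10 --dir /home/xen
```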

Once the guest is installed simply boot it using:

xm create -c xm-debian.cfg

Lenny only includes 32 bit (PAE) kernel support which means there is no installer support for 64 bit guests. You can continue to use the Etch kernels or obtain a newer upstream kernel which supports 64 bit operation (2.6.27+).

In addition to installing via Debian Installer xen-tools can also create a Lenny domU as described in the Etch section above.

The default Lenny kernel is the newer paravirt_ops version (2.6.26), which does not function as a dom0. The -xen variants do have dom0 support, but also some issues running as domU (please clarify?). The default kernel also does not support PCI passthrough in a domU; for PCI passthrough you have to run the 2.6.18 Etch kernel as both dom0 and domU.

In Lenny the distinction between the Xen and non-Xen flavours of the kernel (with respect to domU support) is no longer present. The Debian Installer will install the -686-bigmem flavour.

Additional note for domU on lenny using xen-tools

xen-tools does not use hvc0 as the console device in /etc/inittab and does not install udev (leaving /dev/pts missing in the domU).

This makes logging in via xm console and via ssh impossible: getty has no proper console to attach to, and ssh cannot allocate a pseudo-terminal.

To fix this,

1. add to /etc/xen-tools/xen-tools.conf:

serial_device = hvc0

2. and make domU with:

xen-create-image --hostname HOSTNAME (more options...) --role udev

Installation on etch

Upstream documentation can be found in the xen-docs-3.0 package (in /usr/share/doc/xen-docs-3.0/user.pdf.gz). It's also available online.

Dom0 (host)

  • Choose and install a xen-linux-system-KERNELVERSION package. This installs the kernel, a hypervisor and matching utilities.

  • On i386, install libc6-xen. This means that you don't have to delete /lib/tls or move it out of the way, as suggested by most Xen guides.

  • Use Grub as bootloader (since Lilo and Xen don't play well with one another)
  • You probably want to configure /etc/xen/xend-config.sxp (especially the network-script scheme).

The xen-linux-system packages of interest are (Install the correct one for your architecture):
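As with Lenny above, the list itself is missing here. On Etch the names follow xen-linux-system-2.6.18-<N>-xen-<arch>; for example (package name assumed from that scheme, verify with apt-cache search xen-linux-system):

```
apt-get install xen-linux-system-2.6.18-6-xen-686
```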

If you need to apply some modifications to the kernel with the xen patch, one way to do it is described in DebianKernelCustomCompilation.

Serial console access

To get output from grub, XEN, the kernel and getty (login prompt) via both vga and serial console to work, here's an example of the right settings when using etch kernels and Xen 3.0.3:

In /boot/grub/menu.lst:

serial --unit=0 --speed=9600 --word=8 --parity=no --stop=1
terminal --timeout=5 serial console
[...]
title           Xen 3.0.3-1-i386-pae / Debian GNU/Linux, kernel 2.6.18-6-xen-686
root            (hd0,0)
kernel          /boot/xen-3.0.3-1-i386-pae.gz com1=9600,8n1 console=com1,vga
module          /boot/vmlinuz-2.6.18-6-xen-686 root=/dev/md0 ro console=tty0 console=ttyS0,9600n8
module          /boot/initrd.img-2.6.18-6-xen-686

In /etc/inittab you need at least these lines:

1:2345:respawn:/sbin/getty 38400 tty1
T0:23:respawn:/sbin/getty -L ttyS0 9600 vt100

DomU (guests)

The easiest way to create a domU is to use xen-tools (and, if this doesn't do what you need, Steve Kemp is keen and fast in implementing useful suggestions).

If you do not wish to use xen-tools, you could use this alternative guide, to setup the system using debootstrap.

Xen boots domUs using kernels stored on dom0, so you only need to install the corresponding linux-modules package in the domU. Alternatively, you can use PyGrub to boot kernels on the domU filesystem.

On i386, make sure you install libc6-xen.

If you install a Lenny domU on an Etch dom0 and see messages on the console like 4gb seg fixup, process klogd (pid 2075), cs:ip 73:b7e25870, make sure you read this entry on XenFaq. After applying echo 'hwcap 0 nosegneg' > /etc/ld.so.conf.d/libc6-xen.conf && ldconfig in the dom0 system, reboot. If you don't like rebooting (which requires you to stop the domUs), restart all processes mentioned in the log messages instead (e.g. /etc/init.d/ssh restart, init q, etc.).

Xen on Testing/Squeeze and on Unstable/Sid as Dom0, to create a multitude of DomU's

The Xen and debootstrap software in Squeeze (the testing release) and Sid (unstable) are much newer than that in Lenny, which makes working with Xen a lot easier. If the host is merely going to run Xen (and not services like Apache that require you to be 100% up-to-date with security updates), you should consider using Squeeze (or Sid).

The setup described here is tested for Debian Lenny and Ubuntu Maverick virtual machines, but should work for a lot more.

Installation and configuration

First install the hypervisor, Xen kernel and xen-tools. On a 64-bit system (e.g. the amd64 architecture):

aptitude -P install xen-hypervisor-4.0-amd64 linux-image-xen-amd64

Otherwise, on a 32-bit system:

aptitude -P install xen-hypervisor-4.0-i386 linux-image-xen-686

To get Xen HVM support (see the Xen 4.0 Wiki), install:

apt-get install xen-qemu-dm-4.0

Debian Squeeze and Sid use Grub 2, and the defaults are wrong for Xen. The Xen hypervisor (and not just a Xen-ready kernel!) should be the first entry, so do this:

mv -i /etc/grub.d/10_linux /etc/grub.d/50_linux
update-grub2

Then disable the OS prober, so that you don't get boot entries for each virtual machine you install on a volume group. Note that if the computer multi-boots with, for example, Windows, this will also remove its entries, which might not be what you want.

echo "" >> /etc/default/grub
echo "# Disable OS prober to prevent virtual machines on logical volumes from appearing in the boot menu." >> /etc/default/grub
echo "GRUB_DISABLE_OS_PROBER=true" >> /etc/default/grub
update-grub2
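The three echo lines above append unconditionally, so running them twice duplicates the setting. A minimal idempotent variant (a sketch; the function name is ours):

```shell
# Append GRUB_DISABLE_OS_PROBER=true to a grub defaults file only if the
# setting is not already present, so the edit can be re-run safely.
disable_os_prober() {
    f="$1"  # path to the grub defaults file, e.g. /etc/default/grub
    if ! grep -q '^GRUB_DISABLE_OS_PROBER=' "$f" 2>/dev/null; then
        echo "" >> "$f"
        echo "# Disable OS prober to keep VMs on logical volumes out of the boot menu." >> "$f"
        echo "GRUB_DISABLE_OS_PROBER=true" >> "$f"
    fi
}
```

Run disable_os_prober /etc/default/grub and then update-grub2 as above.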

By default, Xen tries to save the state of the VMs on shutdown, and sometimes there are problems with that. Since it is also cleaner to simply shut the VMs down when the host shuts down, set these parameters in /etc/default/xendomains if you want them to be shut down normally:

XENDOMAINS_RESTORE=false
XENDOMAINS_SAVE=""

In /etc/xen/xend-config.sxp, enable the network bridge by adjusting the network-script line that is already there. (You may check the XenNetworking page in the Xen wiki.)

(network-script 'network-bridge antispoof=yes')

The antispoof=yes option activates Xen's firewall rules to prevent a VM from using an IP address it is not allowed to use (for example, if a domU were to use the gateway's IP, it could seriously break your network; this prevents it). With this enabled, you need to specify the IP of each domU in the vif statement of its configuration.
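For example, a vif statement carrying the domU's allowed IP might look like this (address, MAC and bridge name are illustrative):

```
vif = [ 'ip=192.0.2.10, mac=00:16:3e:xx:xx:xx, bridge=eth0' ]
```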

This config file also has options to set the memory and CPU usage of your dom0, which you might want to change.

If you want, you can also use xen-tools to set up a domU (install it with "aptitude install xen-tools"). Note that the package dtc-xen offers the same kind of functionality as xen-tools (i.e. easy setup of VMs). You can use dtc-xen just for that if you disable its SOAP daemon (update-rc.d -f dtc-xen remove). DTC-Xen also offers installation of CentOS VMs using yum, which might be handy as well.

Then, to configure xen-tools, you can edit /etc/xen-tools/xen-tools.conf, which contains the default values that the xen-create-image script will use. These are some real-life examples of parameters that may need to be changed:

# Virtual machine disks are created as logical volumes in volume group 'universe' (hint: LVM storage is much faster than file)
lvm = universe
 
install-method = debootstrap
 
size   = 50Gb      # Disk image size.
memory = 512Mb    # Memory size
swap   = 2Gb    # Swap size
fs     = ext3     # use the EXT3 filesystem for the disk image.
dist   = `xt-guess-suite-and-mirror --suite` # Default distribution to install.
 
# Default gateway and netmask for new VMs
gateway    = x.x.x.x
netmask    = 255.255.255.0
 
# When creating an image, interactively setup root password
passwd = 1
 
# Prevents new VMs using some generic mirror, but actually uses the one from the Dom0.
mirror = `xt-guess-suite-and-mirror --mirror`
 
mirror_maverick = http://nl.archive.ubuntu.com/ubuntu/
 
# xen-tools sets some unusual ext3 options by default, like noatime. If you want to change that, set this to 'defaults'
ext3_options     = defaults
 
# Let xen-create-image use pygrub, so that the grub from the VM is used, which means you no longer need to store kernels outside the VMs. Keeps things very flexible.
pygrub=1

Now you should reboot. After that, you can create virtual machines with this command:

xen-create-image --hostname <hostname> --ip <ip> --scsi --vcpus 2 --pygrub --dist <lenny|maverick|whatever>

The --scsi option makes sure the VM uses normal SCSI disk names like sda. When creating an Ubuntu Maverick image, for instance, it won't boot without this option, because the default is xvda. xvda makes it clear the disk is virtualized, but a non-Xen kernel, like a stock pv_ops one in Ubuntu, doesn't know what those are (see the notes below about the xen-blkfront driver, though). You can also set scsi = 1 in /etc/xen-tools/xen-tools.conf to make this the default.

Notes

Kernel versions

The new 2.6.32 kernel images have paravirt_ops-based Xen dom0 and domU support. When you create an image for Ubuntu Maverick, which includes a kernel that has pv_ops, it will therefore not use a Xen kernel, but the Ubuntu stock one, as it is capable of running on Xen's hypervisor.

For those who want to test a 2.6.32 kernel domU on an earlier dom0, you have to make sure that the xen-blkfront domU driver is loaded and can find the root and other disk partitions. It cannot do so if you still use the deprecated hda* or sda* device names in domU .cfg files. Switch to xvda* devices, which also work with 2.6.18 and 2.6.26 dom0 kernels.
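For example, in a domU .cfg file (the volume path is illustrative):

```
# deprecated naming -- the pv_ops xen-blkfront driver will not bind it:
#disk = [ 'phy:/dev/vg0/guest-root,sda1,w' ]
# xvda* naming -- works with 2.6.32 domUs and 2.6.18/2.6.26 dom0 kernels:
disk = [ 'phy:/dev/vg0/guest-root,xvda1,w' ]
```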

There are also the backward-looking options:

Bugs You May Encounter

  • Debian Bug #584152: error during xen-create-image: mkfs.ext3: /lib/libblkid.so.1: version `BLKID_2.17' not found (required by mkfs.ext2). Solve this by downgrading the mkfs tool.
  • Xen Bug #1620: error starting xend.

Lenny to Squeeze Upgrading/Transition

Upgrading a server that uses both a Lenny Dom0 and Lenny DomUs to Squeeze is fairly straightforward. There are a few catches one needs to be aware of, however: Reference

  • Dom0 Issues

    • The Xen packages will not upgrade themselves. They must be manually removed and the latest Xen packages must be installed from the Debian Squeeze repository through apt.
    • pygrub in Xen-4.0 will need to be patched as per #599243

  • DomU Issues

    • A Squeeze DomU cannot boot on the Xen-3.2 package supplied by Lenny, because that older version does not support grub2. A Lenny DomU can be upgraded to Squeeze while running on a Lenny Dom0, but it cannot be booted again until the Dom0 has been upgraded to the Xen-4.0 packages.
    • The entries added to chain-load grub2 from grub1 will not allow pygrub to find the correct partition. Before rebooting a freshly upgraded Squeeze DomU, make sure to rename or remove /boot/grub/menu.lst. This forces pygrub to look for the /boot/grub/grub.cfg file, which is in the correct format.

Using Debian-Installer

The page DebianInstaller/Xen contains instructions on how to install Xen Dom0 and Etch DomU with DebianInstaller. See above for details of installing Lenny using Debian Installer.

Package maintenance

Debian's Xen packages are maintained by the pkg-xen project. (developers' mailing list)

The Debian Developer's Package Overview page lists source packages that are maintained by the team.

Common Errors

dom0 automatic reboots

  • {i} Note: if Xen crashes and reboots automatically, you may want to use the noreboot xen option to prevent it from rebooting automatically. Grub example:

    title           Xen 3.1-1-i386 / Debian GNU/Linux, kernel 2.6.18-6-xen-686
    root            (hd0,0)
    kernel          /xen-3.1-1-i386.gz noreboot
    module          /vmlinuz-2.6.18-6-xen-686 root=/dev/foo ro console=tty0
    module          /initrd.img-2.6.18-6-xen-686

Hangs on boot on system with >32G RAM

Systems with >32G RAM can hang on boot after "system has x VCPUS" and before "Scrubbing Free RAM". This is due to a limitation of the paravirt-ops domain 0 kernel in Squeeze, which prevents it from using more than 32G. Add

  • dom0_mem=32G

to your hypervisor command line to work around this issue. The remaining RAM will still be available for guest use.

Error "Device ... (vif) could not be connected"

You need to configure some basic networking between dom0 and domU. Edit /etc/xen/xend-config.sxp

#(network-script network-dummy)
(network-script network-bridge)

for basic bridged networking, and restart xend.

error: CDROM boot failure

You get the error :

  • CDROM boot failure code 0002
    or CDROM boot failure code 0003
    Boot from cd-Rom failed
    Fatal: Could not read the boot disk.

That's because Xen can't boot from a CD-ROM ISO image at the moment, i.e. you can't have tap:aio:/path/to/mycd.iso,hdc:cdrom,r or file:/path/to/mycd.iso,hdc:cdrom,r.

Workaround: use losetup to create a loopback device for the CD-ROM ISO image, then use it in the Xen configuration file. For example:

  • #First, check which loop device is free
    $ losetup -f
    /dev/loop9
    #Then attach the ISO image to the free loop device
    $ losetup -f /path/to/mycd.iso
    #Check which file is attached to the device
    $ losetup /dev/loop9
    /dev/loop9: [fe04]:3096598 (/path/to/mycd.iso)

Now you can use /dev/loop9 in the Xen configuration file (/etc/xen/foobar.cfg):

  • ...
    disk = [ 'phy:/dev/vg1/xpsp3,ioemu:hda,w', 'phy:/dev/loop9,ioemu:hdc:cdrom,r' ]
    ...

then boot/install the guest OS.

Note: you should switch back to the tap:aio:/path/to/mycd.iso,hdc:cdrom,r syntax after installation, since loop devices have to be recreated after you reboot the host system.

4gb seg fixup errors

Solution:

echo 'hwcap 0 nosegneg' > /etc/ld.so.conf.d/libc6-xen.conf && ldconfig

Read this XenFaq entry for more info.

No login prompt when using `xm console`

With a Lenny domU, make sure you have hvc0 listed in inittab, like 1:2345:respawn:/sbin/getty 38400 hvc0. The default console device used by Xen has changed several times (tty1, xvc0, hvc0, etc.), but for a Lenny domU (version > 2.6.26-9) it is hvc0.

'clocksource/0: Time went backwards'

If a domU crashes or freezes while uttering the famous last words 'clocksource/0: Time went backwards', your domU is likely using the xen clocksource instead of its own clock ticks. In practice, this seems to be the cause of infrequent lockups under load (and/or problems with suspending).

see http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1098

workaround #1

A workaround is to decouple the clock in the domU from the dom0:

In your dom0 and domU /etc/sysctl.conf, add the line xen.independent_wallclock=1. On the dom0, edit the configuration file of the domU (e.g. /etc/xen/foobar.cfg) and add (or extend) the extra line: extra="clocksource=jiffies".

These settings can be activated without rebooting the domU. After editing the configuration files, issue sysctl -p and echo "jiffies"> /sys/devices/system/clocksource/clocksource0/current_clocksource on the domU prompt.

Because the clock won't be relying on the dom0 clock anymore, you probably need to use ntp on the domU to synchronize it properly to the world.
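Collected in one place, workaround #1 looks like this (the domU config path /etc/xen/foobar.cfg is the example from above):

```
# dom0 and domU -- /etc/sysctl.conf:
#   xen.independent_wallclock=1
# dom0 -- /etc/xen/foobar.cfg:
#   extra="clocksource=jiffies"
# then activate on the running domU:
sysctl -p
echo "jiffies" > /sys/devices/system/clocksource/clocksource0/current_clocksource
```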

workaround #2

Another possibility is to use the behaviour of the previous Xen kernels: set clocksource=jiffies and independent_wallclock=0.

Setting clocksource=jiffies for the dom0 and each domU as a kernel parameter has eliminated the "Time went backwards" messages for me (14 dom0s and 27 domUs running stable for two weeks). You can check the values with

cat /sys/devices/system/clocksource/clocksource0/current_clocksource

and

cat /proc/sys/xen/independent_wallclock

With these settings, ntp is only needed in the dom0. If you change the time in a domU while ntp is running on the corresponding dom0, the time will be corrected within a few minutes in the domU. Hint: I didn't manage to influence the time of the domU by setting the time in the dom0 with date or hwclock; nevertheless, ntp seems to do this (http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=534978#29).

workaround #3

There are cases where setting the clocksource to jiffies just makes the clock more unstable and leads to continuous resets. A working solution appears to be the following:

  • set independent_wallclock to 0 (all domains; VMs will follow dom0's clock)
  • set clocksource to xen (it's the default in lenny)
  • configure ntpd in dom0 only; set "disable kernel" in ntp.conf

This succeeded in stabilizing a Xen server's clock where all other workarounds failed.

More information can be found at http://tinyurl.com/375jza8. You can browse for the whole process at http://tinyurl.com/2veotke

"Error: Bootloader isn't executable"

This rather cryptic error (when starting a domU using xen-utils/xm create) means that xen-utils cannot find PyGrub. Modify your xm-debian.cfg config file to use the absolute path (i.e. bootloader="/usr/lib/xen-3.2-1/bin/pygrub" instead of bootloader="pygrub") and your domU should boot up fine.

"ERROR (XendCheckpoint:144) Save failed on domain mydomu32 (X)."

xm save/migration of a 32-bit domU on a 64-bit dom0 fails. It seems this is not supported with linux-image-2.6.26-2-xen-amd64 (http://readlist.com/lists/lists.xensource.com/xen-users/4/24225.html). One workaround is to use a 64-bit hypervisor with a 32-bit dom0 (http://lists.xensource.com/archives/html/xen-users/2008-12/msg00404.html). See also bug #526695.

"network routing for hvm guests:"

ERROR in /var/log/xen/qemu-dm-[.*].log:

bridge xenbr0 does not exist!

/etc/xen/scripts/qemu-ifup: could not launch network script

When using routing instead of bridging, there seem to be problems for HVM guests. Here is a rather ugly hack for it. Prerequisites:

in "/etc/xen/xend-config.sxp"

  • (network-script 'network-route netdev=<ethX,internet_you_want_to_use>')
    (vif-script vif-route)

in your domU config file

  • ...
    vif = [ 'type=ioemu, mac=00:16:3e:XX:XX:XX, vifname=vif-<domU-name>, ip=<domU-ip>, bridge=<ethX,nic_you_want_to_use>' ]
    ...

then:

In "/etc/xen/scripts/qemu-ifup", disable the following line with a #:

  • # brctl addif $2 $1 

insert

  • gwip=`ip -4 -o addr show primary dev "$2" | awk '$3 == "inet" {print $4;exit}'| sed 's#/.*##'`
    ip link set "$1" up arp on
    ip addr add $gwip dev "$1"

after starting your domU

  • ip route show
    
    ip route del <domU-ip> dev vif-<domU-name>
    
    ip addr show (should show a tap device with your <dom0-IP of the ethX,nic_you_want_to_use>)
    
    ip route add to <domU-ip> via <dom0-IP of the ethX,nic_you_want_to_use> dev tapX

pretty bad but works...

"network bridging for xen 4.0 with multiple interfaces:"

see Bug http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=591456

Using Xen 4.0 under Squeeze, I had trouble letting my domUs use a specific NIC in dom0. The bugfix above might not be fully sufficient to solve it; here is what you have to do:

1.) in /etc/xen/xend-config.sxp set "(network-script network-bridge-wrapper)"

2.) create /etc/xen/scripts/network-bridge-wrapper like this (don't forget to chmod 755):


#!/bin/bash
# next two lines were good for xen-3.2.1 not for xen-4.0x anymore
#/etc/xen/scripts/network-bridge netdev=eth0 bridge=xenbr0 start
#/etc/xen/scripts/network-bridge netdev=eth1 bridge=xenbr1 start

# this works for xen-4.0x
# xen-utils-common in squeeze doesn't produce this script (yet), which is needed

if [ ! -f /etc/xen/scripts/hotplugpath.sh ]; then
        echo -e "SBINDIR=\"/usr/sbin\"
BINDIR=\"/usr/bin\"
LIBEXEC=\"/usr/lib/xen/bin\"
LIBDIR=\"/usr/lib\"
SHAREDIR=\"/usr/share\"
PRIVATE_BINDIR=\"/usr/lib/xen/bin\"
XENFIRMWAREDIR=\"/usr/lib/xen/boot\"
XEN_CONFIG_DIR=\"/etc/xen\"
XEN_SCRIPT_DIR=\"/etc/xen/scripts\"" > /etc/xen/scripts/hotplugpath.sh
        chown root:root /etc/xen/scripts/hotplugpath.sh
        chmod 755 /etc/xen/scripts/hotplugpath.sh
fi

/etc/xen/scripts/network-bridge netdev=eth0 start

# if you want to bind a NIC in a domU to another interface in dom0 (bridging mode), then:
# 1.) list all dom0 interfaces you want to be able to use (except your eth0!) in "more_bridges" below
# 2.) in the domU config use: vif = [ 'mac=00:16:3e:xx:xx:xx, bridge=ethX' ] with ethX being the original device of dom0 that this domU should use
# 3.) using bridging, all interfaces in dom0 that you want to use have to be properly configured BEFORE you run this script, i.e. before starting xend the first time.
#       (use ping -I ethX <target your gateway> to CHECK THAT BEFORE, and don't blame me if you plugged the cable into the wrong nic port ;-)
# 4.) remember, in the background xen moves the link to another name, creates a new interface etc etc... we don't care about this here, it just works fine for now

# here I want to prepare two other nics that I can choose from in the domU configs
more_bridges="eth1 eth2"

for i in $more_bridges; do
        ip addr show dev $i | egrep inet > /dev/null 2>&1
        if [ $? == 0 ];then
                ip link set $i down
                /etc/xen/scripts/network-bridge netdev=$i start
                ip link set $i up
        else
                echo -e "\nFailed to set up a bridge!\nYour device $i in dom0 seems not to be configured, so I won't try to use it as part of a bridge for any domU\n"
        fi
done

I tested this; it worked and had no side effects at first glance, but there is still no guarantee ;-)

"XENBUS: Device with no driver: device/vbd/..."

This means you do not have xen-blkfront/xen-blkback driver loaded.

If you're upgrading a domU kernel from 2.6.26.x (or any other old version) to 2.6.32.x, update-initramfs (running in the 2.6.26.x environment) fails to recognize the need for the xen-*front modules and will not include them in the initrd image, causing the reboot to fail.

On the other hand, if you do have the xen-*.ko modules in the initrd image, this message can be ignored; the drivers will be loaded automatically at a later stage.
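A hedged way to avoid that trap is to list the frontend modules explicitly before building the new initrd (the kernel version string is illustrative; check /lib/modules for the real one):

```
echo xen-blkfront >> /etc/initramfs-tools/modules
echo xen-netfront >> /etc/initramfs-tools/modules
update-initramfs -u -k 2.6.32-5-xen-amd64
```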

PV drivers on HVM guest

It may be possible to build the PV drivers for use on HVM guests. These drivers are called unmodified_drivers and are part of the xen-unstable.hg repository. You can fetch the repository using mercurial thus:

  •   hg clone http://xenbits.xen.org/xen-unstable.hg

The drivers reside under xen-unstable.hg/unmodified_drivers/linux-2.6. The README in this directory gives compilation instructions.

A somewhat dated, detailed set of instructions for building these drivers can be found here:

http://wp.colliertech.org/cj/?p=653

NUMA with xen 3.4

In order to activate NUMA awareness in the hypervisor on multi-socket AMD and Intel hosts, use the following:

acpi=on numa=on

By default, NUMA is off.
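In the Grub examples used earlier on this page, these options go on the hypervisor's kernel line, e.g. (hypervisor file name illustrative):

```
kernel          /boot/xen-3.4-amd64.gz acpi=on numa=on
```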

Resources


CategoryNetwork