Introduction

KVM is a full virtualization solution for Linux on x86 (64-bit included) hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure and a processor-specific module, kvm-intel.ko or kvm-amd.ko.

In Debian, Xen and VirtualBox are alternatives to KVM.

Installation

Install the qemu-kvm package with apt-get or aptitude, e.g. using this command:

# aptitude install qemu-kvm libvirt-bin

The libvirt-bin daemon will start automatically at boot time and load the appropriate KVM module, kvm-amd or kvm-intel, which is shipped with the Linux kernel Debian package. If you intend to create VMs from the command line, also install virtinst.

In order to manage virtual machines as a regular user, you should add that user to the kvm and libvirt groups:

# adduser <youruser> kvm
# adduser <youruser> libvirt

You should then be able to list your domains:

# virsh list --all

libvirt defaults to the qemu:///session URI for non-root users, so from <youruser> you'll need to run:

$ virsh --connect qemu:///system list --all

You can use LIBVIRT_DEFAULT_URI to change this.
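For example, to make the system connection the default for your current shell session:

$ export LIBVIRT_DEFAULT_URI=qemu:///system
$ virsh list --all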

Setting up bridge networking

In order to provide full network accessibility for the VM clients, you may set up a bridge as described on the QEMU page. For example, you may modify the network configuration file /etc/network/interfaces to attach the ethernet interface eth0 to a bridge interface br0, similar to the example below. After the configuration, you can select the bridge interface br0 as the network connection in the VM client configuration.

auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
        bridge_ports eth0
        address 192.168.1.2
        netmask 255.255.255.0
        network 192.168.1.0
        broadcast 192.168.1.255
        gateway 192.168.1.1
        dns-nameservers 8.8.8.8
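Assuming ifupdown with systemd and the bridge-utils package, the new configuration can be applied and inspected without a reboot (do this from a local console, as network connectivity will be interrupted briefly):

# systemctl restart networking
# brctl show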

Managing VMs from the command-line

You can then use the virsh(1) command to start and stop virtual machines. VMs can be generated using virtinst. For more details see the libvirt page. Virtual machines can also be controlled using the kvm command in a similar fashion to QEMU. Below are some frequently used commands:

Start a configured VM client "VMCLIENT":

# virsh start VMCLIENT

Notify the VM client "VMCLIENT" to graceful shutdown:

# virsh shutdown VMCLIENT

Force the VM client "VMCLIENT" to shut down in case it hangs, i.e. graceful shutdown does not work:

# virsh destroy VMCLIENT

Managing VMs with a GUI

On the other hand, if you want to use a graphical UI to manage the VMs, you can use the Virtual Machine Manager virt-manager.
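If it is not installed yet, the package of the same name can be installed with, for example:

# aptitude install virt-manager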

Migrating guests to a Debian host

Migrating guests from RHEL/CentOS 5.x

There are a few minor things in the guest XML configuration files (/etc/libvirt/qemu/*.xml) that you need to modify:

  • The machine variable in the <os> section should say pc, not rhel5.4.0 or similar

  • The emulator entry should point to /usr/bin/kvm, not /usr/libexec/qemu-kvm

In other words, the relevant sections should look something like this:

  <os>
    <type arch='x86_64' machine='pc'>hvm</type>

  --- snip ---

  <devices>
    <emulator>/usr/bin/kvm</emulator>
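Once the XML has been adjusted, the guest can be registered with libvirt on the Debian host; the file name here is only a placeholder for your actual guest configuration:

# virsh define /etc/libvirt/qemu/GUESTNAME.xml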

If you had configured a bridge network on the CentOS host, please refer to this wiki article on how to make it work on Debian.

Performance Tuning

Below are some options which can improve performance of VM clients.

CPU

  • Assign virtual CPU cores to dedicated physical CPU cores
    • Edit the VM client configuration, assuming the VM client is named "VMCLIENT" and has 4 virtual CPU cores
      # virsh edit VMCLIENT
    • Add the lines below after the line "<vcpu ..."

      <cputune>
        <vcpupin vcpu='0' cpuset='0'/>
        <vcpupin vcpu='1' cpuset='4'/>
        <vcpupin vcpu='2' cpuset='1'/>
        <vcpupin vcpu='3' cpuset='5'/>
      </cputune>
      where vcpu is the virtual CPU core ID and cpuset is the physical CPU core ID allocated to it. Adjust the number of vcpupin lines to match the vcpu count, and adjust cpuset to reflect the actual physical CPU core allocation. In general, the upper half of the physical CPU core IDs are the hyper-threading cores, which cannot provide full core performance but have the benefit of increasing the memory cache hit rate. A general rule of thumb for setting cpuset is:
    • For the first vcpu, assign a lower-half cpuset number. For example, if the system has 4 cores and 8 threads, the valid cpuset values are 0 to 7, so the lower half is 0 to 3.
    • For the second vcpu, and every other vcpu after it, assign the higher-half sibling of the previous vcpu's cpuset. For example, if you assigned the first cpuset to 0, then the second cpuset should be set to 4.

      For the third vcpu and above, you may need to determine which physical CPU cores share the most memory cache with the first vcpu, as described here, and assign those as the cpuset numbers to increase the memory cache hit rate. A way to inspect the CPU topology is sketched after this list.
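For example, which logical CPUs are hyper-threading siblings of each other can be listed with standard tools; the output below is for a hypothetical 4-core/8-thread system and will differ per machine:

# lscpu --extended=CPU,CORE,SOCKET
# cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list
0,4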

Disk I/O

For Windows VM clients, you may wish to switch between the slow but cross-platform Windows built-in IDE driver and the fast but KVM-specific VirtIO driver. As a result, the installation method for Windows VM clients provided below is a little complicated, but it provides a way to install both drivers and use whichever one fits your needs. An example of the resulting VirtIO disk XML is sketched after the list. Under virt-manager:

  • Native driver for Windows VM Clients
    • Create new VM client with below configuration:
      • IDE storage for Windows OS container, assume with filename WINDOWS.qcow2
      • IDE CDROM, attach Windows OS ISO to CDROM
    • Start VM client and install the Windows OS as usual
    • Shutdown VM client
    • Reconfigure VM client with below configuration:
      • Add a dummy VirtIO / VirtIO SCSI storage with 100MB size, e.g. DUMMY.qcow2
      • Attach VirtIO driver CD ISO to the IDE CDROM

    • Restart VM client
    • Install the VirtIO driver from the IDE CDROM when Windows prompts for a new hardware driver
    • Shutdown VM client
    • Reconfigure VM client with below configuration:
      • Remove IDE storage for Windows OS, DO NOT delete WINDOWS.qcow2
      • Remove VirtIO storage for dummy storage, you can delete DUMMY.qcow2
      • Remove IDE storage for CD ROM
      • Add a new VirtIO / VirtIO SCSI storage and attach WINDOWS.qcow2 to it
    • Restart the VM client
  • Native driver for Linux VM Clients
    • Select VirtIO / VirtIO SCSI storage for the storage containers
    • Restart the VM client
  • VirtIO / VirtIO SCSI storage
    • VirtIO SCSI storage provides richer features than VirtIO storage when the VM client has multiple storage devices attached. Performance is the same if the VM client has only a single storage device attached.
  • Disk Cache
    • Select "None" for disk cache mode
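For reference, a minimal sketch of what the disk section of the domain XML (virsh edit VMCLIENT) may look like after the final reconfiguration; the image path below is only an assumed example:

  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2' cache='none'/>
    <source file='/var/lib/libvirt/images/WINDOWS.qcow2'/>
    <target dev='vda' bus='virtio'/>
  </disk>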

Network I/O

Using virt-manager (an example of the resulting network interface XML is sketched after the list):

  • Native driver for Windows VM Clients
    • Select VirtIO for the network adapter
    • Attach VirtIO driver CD ISO to the IDE CDROM

    • Restart the VM client; when Windows detects the new network adapter hardware, install the VirtIO driver from the IDE CDROM
  • Native driver for Linux VM Clients
    • Select VirtIO for the network adapter
    • Restart the VM client
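A minimal sketch of the corresponding interface section in the domain XML, assuming the bridge br0 from the bridge networking section above is used:

  <interface type='bridge'>
    <source bridge='br0'/>
    <model type='virtio'/>
  </interface>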

Memory

  • Huge Page Memory support
    • Calculate the number of huge pages required. Each huge page is 2MB in size, so we can use the formula below.
      Huge Page Counts = Total VM Client Memory In MB / 2
      e.g. for 4 VM clients, each using 1024MB, the huge page count = 4 x 1024 / 2 = 2048. Note that the system may hang if more memory is reserved than the system has available.
    • Configure HugePages memory support with the commands below. Since huge pages might not be allocated if memory is too fragmented, it is better to append these commands to /etc/rc.local (a persistent alternative is sketched after this list).

      echo 2048 > /proc/sys/vm/nr_hugepages
      mkdir -p /mnt/hugetlbfs
      mount -t hugetlbfs hugetlbfs /mnt/hugetlbfs
      mkdir -p /mnt/hugetlbfs/libvirt/bin
      systemctl restart libvirtd
    • Reboot the system to enable huge page memory support. Verify it with the command below.
      # cat /proc/meminfo | grep HugePages_
      HugePages_Total:    2048
      HugePages_Free:     2048
      HugePages_Rsvd:        0
      HugePages_Surp:        0
    • Edit the VM client configuration, assuming the VM client is named "VMCLIENT"
      # virsh edit VMCLIENT
    • Add the lines below after the line "<currentMemory ..."

      <memoryBacking>
        <hugepages/>
      </memoryBacking>
    • Start the VM client "VMCLIENT" and verify that it is using huge page memory with the commands below.
      # virsh start VMCLIENT
      # cat /proc/meminfo | grep HugePages_
      HugePages_Total:    2048
      HugePages_Free:     1536
      HugePages_Rsvd:        0
      HugePages_Surp:        0
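Instead of /etc/rc.local, the same reservation and mount can be made persistent via sysctl and /etc/fstab; a sketch assuming the same page count and mount point as above (the sysctl file name is only an example):

# /etc/sysctl.d/hugepages.conf
vm.nr_hugepages = 2048

# /etc/fstab
hugetlbfs  /mnt/hugetlbfs  hugetlbfs  defaults  0  0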

Troubleshooting

No network bridge available

virt-manager uses a virtual network for its guests; by default this is routed to 192.168.122.0/24 and you should see it by typing ip route as root.
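On a working setup the output of ip route should contain a line similar to the one below (device name and addresses may differ):

192.168.122.0/24 dev virbr0  proto kernel  scope link  src 192.168.122.1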

If this route is not present in the kernel routing table then the guests will fail to connect and you will not be able to complete guest creation.

Fixing this is simple: open virt-manager and go to the "Edit" -> "Host details" -> "Virtual networks" tab. From there you may create a virtual network of your own or attempt to fix the default one. Usually the problem is simply that the default network is not started.
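The default network can also be started, and marked to start automatically, from the command line:

# virsh net-start default
# virsh net-autostart default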

cannot create bridge 'virbr0': File exists:

To solve this problem you may remove virbr0 by running:

brctl delbr virbr0

Then open virt-manager, go to "Edit" -> "Host details" -> "Virtual networks" and start the default network.

You can check the network status with:

virsh net-list --all

Optionally, you can use a bridged network; see BridgeNetworkConnections.

See also

External links

Please add links to external documentation. This is not a place for links to non-free commercial products.


CategorySystemAdministration