Translation(s): English - Español - Norsk - Русский



Introduction

KVM is a full virtualization solution for Linux on x86 (including 64-bit) hardware containing virtualization extensions (Intel VT or AMD-V). KVM itself consists of a loadable kernel module, kvm.ko, which provides the core virtualization infrastructure, and a processor-specific module, kvm-intel.ko or kvm-amd.ko.

In Debian, Xen and VirtualBox are the alternatives to KVM.

Installation

Install the qemu-kvm packages with apt-get or aptitude, for example with the following command:

# aptitude install qemu-kvm libvirt-clients libvirt-daemon-system

The libvirt daemon will start automatically at boot time and load the appropriate kvm modules, kvm-amd or kvm-intel, which are shipped with the Linux kernel Debian package. If you intend to create VMs from the command line, install virtinst as well.

To be able to manage virtual machines (VMs) as a regular user, that user has to be added to the kvm and libvirt groups:

# adduser <youruser> kvm
# adduser <youruser> libvirt

You should then be able to list your domains:

# virsh list --all

libvirt defaults to qemu:///session for non-root. So from <youruser> you'll need to do:

$ virsh --connect qemu:///system list --all

You can use LIBVIRT_DEFAULT_URI to change this.
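For example, to make qemu:///system the default for your shell session (a sketch; add it to your shell profile to make it persistent):

$ export LIBVIRT_DEFAULT_URI='qemu:///system'
$ virsh list --all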

Creating a new guest VM

The easiest way to create and manage a guest VM is to use the GUI application Virtual Machine Manager virt-manager.

Alternatively, you can create a guest VM from the command line. Below is an example of creating a Debian Squeeze guest VM named squeeze-amd64:

virt-install --virt-type kvm --name squeeze-amd64 --memory 512 --cdrom ~/iso/Debian/cdimage.debian.org_mirror_cdimage_archive_6.0.10_live_amd64_iso_hybrid_debian_live_6.0.10_amd64_gnome_desktop.iso --disk size=4 --os-variant debiansqueeze

Since the guest has no network connection yet, you will need to use the GUI virt-viewer to complete the install.
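For example (assuming the guest name squeeze-amd64 used above):

$ virt-viewer --connect qemu:///system squeeze-amd64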

You can avoid pulling the ISO by using the --location option. To get a text console for the installation you can also pass --extra-args "console=ttyS0":

virt-install --virt-type kvm --name squeeze-amd64 \
--location http://httpredir.debian.org/debian/dists/squeeze/main/installer-amd64/ \
--extra-args "console=ttyS0" -v --os-variant debiansqueeze \
--disk size=4 --memory 512

For a fully automated install, see preseed or debootstrap.
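As a rough sketch, a preseed-based unattended install can be combined with virt-install's --location and --initrd-inject options; preseed.cfg here is a hypothetical local preseed file and the exact arguments depend on your setup:

virt-install --virt-type kvm --name squeeze-amd64 \
--location http://httpredir.debian.org/debian/dists/squeeze/main/installer-amd64/ \
--initrd-inject preseed.cfg \
--extra-args "auto=true priority=critical console=ttyS0" \
--os-variant debiansqueeze --disk size=4 --memory 512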

Setting up bridge networking

Between VM guests

By default, QEMU uses macvtap in VEPA mode to provide NAT internet access or bridged access with other guests. Unfortunately, this setup does not allow the host to communicate with the guests.

Between VM host and guests

To allow communication between the VM host and the VM guests, you can set up a macvlan bridge on top of a dummy interface, as shown below. After the configuration, select the interface dummy0 (macvtap) in bridged mode as the network source in the VM guest configuration (a matching guest XML snippet follows the commands).

# load the dummy interface module and create dummy0
modprobe dummy
ip link add dummy0 type dummy
# create a macvlan interface on top of dummy0 in bridge mode
ip link add link dummy0 name macvlan0 type macvlan mode bridge
# bring the interfaces up and give macvlan0 an address on the guest network
ip link set dummy0 up
ip addr add 192.168.1.2/24 brd 192.168.1.255 dev macvlan0
ip link set macvlan0 up
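In the guest XML this corresponds to a direct (macvtap) interface in bridge mode; a minimal sketch, assuming the dummy0 interface created above:

<interface type='direct'>
  <source dev='dummy0' mode='bridge'/>
  <model type='virtio'/>
</interface>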

Between VM host, guests and the world

To allow communication between the host, the guests and the outside world, you can set up a bridge as described on the QEMU page.

For example, you can modify the network configuration file /etc/network/interfaces to turn the Ethernet interface eth0 into a bridge interface br0, similar to the example below. After the configuration, select Bridge Interface br0 as the network connection in the VM guest configuration (a matching guest XML snippet is shown after the example).

auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
        address 192.168.1.2
        netmask 255.255.255.0
        network 192.168.1.0
        broadcast 192.168.1.255
        gateway 192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
        dns-nameservers 8.8.8.8
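In the guest XML, the resulting network interface definition might look like the sketch below; virt-manager generates an equivalent section when you select br0:

<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>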

Managing VMs from the command-line

You can then use the virsh(1) command to start and stop virtual machines. VMs can be generated using virtinst. For more details see the libvirt page. Virtual machines can also be controlled using the kvm command in a similar fashion to QEMU. Below are some frequently used commands:

Start a configured VM guest "VMGUEST":

# virsh start VMGUEST

Ask the VM guest "VMGUEST" to shut down gracefully:

# virsh shutdown VMGUEST

Force the VM guest "VMGUEST" off in case it is hung, i.e. a graceful shutdown did not work:

# virsh destroy VMGUEST

Managing VM guests with a GUI

On the other hand, if you want to use a graphical UI to manage the VMs, you can use the Virtual Machine Manager virt-manager.

Automatic guest management on host shutdown/startup

Guest behavior on host shutdown/startup is configured in /etc/default/libvirt-guests.

This file specifies whether guests should be shut down or suspended, whether they should be restarted on host startup, etc.

The first parameter defines where to find running guests. For instance:

# URIs to check for running guests
# example: URIS='default xen:/// vbox+tcp://host/system lxc:///'
URIS=qemu:///system
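Other parameters in the same file control the actual behaviour; the values below are illustrative, see the comments in the file itself for the full list:

# start guests that were previously running on host boot, or ignore them
ON_BOOT=start
# shut down or suspend guests when the host shuts down
ON_SHUTDOWN=shutdown
# number of seconds to wait for a guest to shut down before giving up
SHUTDOWN_TIMEOUT=300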

Performance Tuning

Below are some options which can improve performance of VM guests.

CPU

  • Assign virtual CPU cores to dedicated physical CPU cores
    • Edit the VM guest configuration, assuming the VM guest is named "VMGUEST" and has 4 virtual CPU cores
      # virsh edit VMGUEST
    • Add the block below after the line "<vcpu ...":

      <cputune>
        <vcpupin vcpu='0' cpuset='0'/>
        <vcpupin vcpu='1' cpuset='4'/>
        <vcpupin vcpu='2' cpuset='1'/>
        <vcpupin vcpu='3' cpuset='5'/>
      </cputune>
      where vcpu is the virtual CPU core id and cpuset is the physical CPU core it is pinned to. Adjust the number of vcpupin lines to match the vcpu count, and the cpuset values to reflect the actual physical CPU core allocation. In general, the upper half of the physical CPU core ids are the hyperthreading siblings, which cannot provide full core performance but do help increase the memory cache hit rate. A general rule of thumb for setting cpuset is:
    • For the first vcpu, assign a cpuset number from the lower half. For example, if the system has 4 cores and 8 threads, the valid cpuset values are 0 to 7 and the lower half is therefore 0 to 3.
    • For the second vcpu, assign the hyperthreading sibling of the first one, i.e. a number from the upper half. For example, if you assigned the first cpuset to 0, the second cpuset should be 4.

      For the third vcpu and above, you may need to determine which physical CPU cores share the most memory cache with the first vcpu, as described here, and assign those cpuset numbers to increase the memory cache hit rate; the commands after this list show how to inspect the CPU topology.
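To see which logical CPUs are hyperthreading siblings of each other (and therefore share a physical core and its cache), you can inspect the topology the kernel exports, for example:

lscpu --extended=CPU,CORE,SOCKET
cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list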

Disk I/O

Disk I/O is usually the performance bottleneck because of its characteristics. Unlike CPU and RAM, a VM host may not allocate dedicated storage hardware to a VM, and the disk is the slowest component of the three. There are two kinds of disk bottleneck: throughput and access time. A modern hard disk can sustain around 100 MB/s of throughput, which is sufficient for most systems, but it typically provides only around 60 transactions per second (tps).

On the VM host, you can benchmark different disk I/O parameters to get the best tps for your disk. Below is an example of disk tuning and benchmarking using fio:

  • # echo deadline > /sys/block/sda/queue/scheduler
    # echo 32 > /sys/block/sda/queue/iosched/quantum
    # echo 0 > /sys/block/sda/queue/iosched/slice_idle
    # echo 1 > /proc/sys/vm/dirty_background_ratio
    # echo 50 > /proc/sys/vm/dirty_ratio
    # echo 500 > /proc/sys/vm/dirty_expire_centisecs
    # /sbin/blockdev --setra 256 /dev/sda
    # fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=/opt/fio.tmp --bs=4k --iodepth=64 --size=8G --readwrite=randrw --rwmixread=75 --runtime=60

For Windows VM guests, you can choose between the slow but cross-platform Windows built-in IDE driver and the fast but KVM-specific VirtIO driver. The installation method for Windows VM guests described below is therefore a little more complicated, but it installs both drivers so you can use whichever suits your needs. Under virt-manager:

  • Native driver for Windows VM guests
    • Create a new VM guest with the configuration below:
      • IDE storage for the Windows OS container, assumed here to be named WINDOWS.qcow2
      • IDE CDROM, with the Windows OS ISO attached to the CDROM
    • Start the VM guest and install the Windows OS as usual
    • Shut down the VM guest
    • Reconfigure the VM guest with the configuration below:
      • Add a dummy VirtIO / VirtIO SCSI storage of 100MB, e.g. DUMMY.qcow2
      • Attach the VirtIO driver CD ISO to the IDE CDROM

    • Restart the VM guest
    • Install the VirtIO driver from the IDE CDROM when Windows prompts for a new hardware driver
    • Shut down the VM guest
    • Reconfigure the VM guest with the configuration below:
      • Remove the IDE storage for the Windows OS; DO NOT delete WINDOWS.qcow2
      • Remove the VirtIO storage for the dummy storage; you can delete DUMMY.qcow2
      • Remove the IDE CDROM storage
      • Add a new VirtIO / VirtIO SCSI storage and attach WINDOWS.qcow2 to it
    • Restart the VM guest
  • Native driver for Linux VM guests
    • Select VirtIO / VirtIO SCSI storage for the storage containers
    • Restart the VM guest
  • VirtIO / VirtIO SCSI storage
    • VirtIO SCSI storage provides richer features than VirtIO storage when the VM guest has multiple storage devices attached. The performance is the same if the VM guest has only a single storage device attached. (A sample guest XML snippet is shown after this list.)
  • Disk Cache
    • Select "None" for disk cache mode
  • Block dataplane
    • Edit the VM guest configuration, assuming the VM guest name is "VMGUEST"
    • # virsh edit VMGUEST
    • At the first line "<domain ...", add the "xmlns:..." option:

      <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
    • Before the last line "</domain>", add a "qemu:commandline" section:

        <qemu:commandline>
          <qemu:arg value='-set'/>
          <qemu:arg value='device.virtio-disk0.scsi=off'/>
          <qemu:arg value='-set'/>
          <qemu:arg value='device.virtio-disk0.x-data-plane=on'/>
        </qemu:commandline>
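For reference, a guest disk attached via VirtIO SCSI with the host cache disabled might look like the sketch below in the guest XML; the file path and target device name are illustrative:

  <controller type='scsi' model='virtio-scsi'/>
  <disk type='file' device='disk'>
    <driver name='qemu' type='qcow2' cache='none'/>
    <source file='/var/lib/libvirt/images/WINDOWS.qcow2'/>
    <target dev='sda' bus='scsi'/>
  </disk>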

Network I/O

Using virt-manager:

  • Native driver for Windows VM guests
    • Select VirtIO for the network adapter
    • Attach VirtIO driver CD ISO to the IDE CDROM

    • Restart the VM guest; when Windows detects the new network adapter hardware, install the VirtIO driver from the IDE CDROM
  • Native driver for Linux VM guests
    • Select VirtIO for the network adapter
    • Restart the VM guest

Memory

  • Huge Page Memory support
  • Huge Page Memory support
    • Calculate the number of huge pages required. Each huge page is 2MB, so we can use the formula below.
      Huge Page Counts = Total VM Guest Memory In MB / 2
      e.g. for 4 VM guests, each using 1024MB, the huge page count = 4 x 1024 / 2 = 2048. Note that the system may hang if the amount of memory reserved exceeds what is available.
    • Configure HugePages memory support using the commands below. Since huge pages might not be allocated if memory is too fragmented, it is better to append these commands to /etc/rc.local

      echo 2048 > /proc/sys/vm/nr_hugepages
      mkdir -p /mnt/hugetlbfs
      mount -t hugetlbfs hugetlbfs /mnt/hugetlbfs
      mkdir -p /mnt/hugetlbfs/libvirt/bin
      systemctl restart libvirtd
    • Reboot the system to enable huge page memory support. Verify huge page memory support with the command below.
      # cat /proc/meminfo | grep HugePages_
      HugePages_Total:    2048
      HugePages_Free:     2048
      HugePages_Rsvd:        0
      HugePages_Surp:        0
    • Edit the VM guest configuration, assuming the VM guest name is "VMGUEST"
      # virsh edit VMGUEST
    • Add the block below after the line "<currentMemory ...":

      <memoryBacking>
        <hugepages/>
      </memoryBacking>
    • Start the VM guest "VMGUEST" and verify that it is using huge page memory with the commands below.
      # virsh start VMGUEST
      # cat /proc/meminfo | grep HugePages_
      HugePages_Total:    2048
      HugePages_Free:     1536
      HugePages_Rsvd:        0
      HugePages_Surp:        0

Migrating guests to a Debian host

Migrating guests from RHEL/CentOS 5.x

There are a few things that need to be modified in the guest XML configuration files (/etc/libvirt/qemu/*.xml):

  • The machine variable in the <os> section must be pc, not rhel5.4.0 or similar.

  • The emulator variable must point to /usr/bin/kvm, not /usr/libexec/qemu-kvm.

In other words, the relevant sections should look something like this:

  <os>
    <type arch='x86_64' machine='pc'>hvm</type>

  --- snip ---

  <devices>
    <emulator>/usr/bin/kvm</emulator>
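A minimal sketch of applying these changes on the Debian host, assuming the guest XML and disk image have already been copied over; guest.xml is a placeholder filename, and the machine string should be adjusted to whatever your file actually contains:

sed -i -e "s/machine='rhel5.4.0'/machine='pc'/" \
       -e "s|/usr/libexec/qemu-kvm|/usr/bin/kvm|" guest.xml
virsh define guest.xml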

If you had a network bridge configured on the CentOS host, refer to this wiki article, which describes how to set one up on Debian.

Troubleshooting

No network bridge available

virt-manager uses a virtual network for the guest machines; by default this is a route to 192.168.122.0/24, and you can inspect it by running ip route as root.

If this route is not present in the kernel routing table, the guests will not be able to connect to the network and you will not be able to complete guest creation.

This is very easy to fix: open virt-manager and go to "Edit" -> "Host details" -> "Virtual networks". From there you can create a virtual network for your machines or modify the default one. Usually the problem appears when the default network is simply not started.
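The same can be done from the command line, assuming the network in question is the default one:

virsh net-start default
virsh net-autostart default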

cannot create bridge 'virbr0': File exists:

To solve this problem you can remove virbr0 by running:

brctl delbr virbr0

Then open virt-manager, go to "Edit" -> "Host details" -> "Virtual networks" and start the default network.

You can check the network status with:

virsh net-list --all

If necessary, you can use a network bridge instead: BridgeNetworkConnections

See also

External links

The articles are rather old, but they still give a more or less good idea of what this is and how to work with it:


CategorySystemAdministration