NOTE: Work in progress. LAVA is not yet ready for Debian.

See also: https://wiki.linaro.org/Platform/LAVA/LAVA_packaging

Introduction

This document describes the steps required to deploy a LAVA production instance in order to duplicate the Automated Testing Service.

Assumptions

The following knowledge is assumed for this document and is thus out of scope:

The following hardware and network setup is assumed to be available:

Installation of LAVA main server

The LAVA main server is where the automated testing service is hosted. It is responsible for downloading the image to be tested, controlling the test run, and providing a web user interface for viewing test results and administering the service.

Deployment

 $ sudo apt-get install postgresql postgresql-9.1

 $ sudo apt-get install lava-dashboard lava-dispatcher lava-scheduler lava-scheduler-tool lava-server lava-tool linaro-image-tools

Setting up Apache server

In order to get the LAVA web view working, it's necessary to install and configure the HTTP server.

 $ sudo apt-get install apache2

Create the Apache site configuration for LAVA (typically /etc/apache2/sites-available/lava, matching the site name enabled below). The result should be a file similar to this:

<VirtualHost *:80>
    ServerAdmin webmaster@public.domain.net
    ServerName lava.public.domain.net

    Redirect permanent / https://lava.public.domain.net

    # This is a small directory with just the index.html file that tells
    # users about this instance and has a link to application pages
    DocumentRoot /var/lib/lava/instances/lava/var/www/lava-server
</VirtualHost>

<VirtualHost *:443>
    ServerAdmin webmaster@public.domain.net
    ServerName lava.public.domain.net

    # A self-signed (snakeoil) certificate can be created by installing
    # the ssl-cert package. See
    # /usr/share/doc/apache2.2-common/README.Debian.gz for more info.
    # If both key and certificate are stored in the same file, only the
    # SSLCertificateFile directive is needed.
    SSLEngine On
    SSLCertificateFile /etc/ssl/certs/vm0.public.domain.net-http.pem
    SSLCertificateKeyFile /etc/ssl/private/vm0.public.domain.net-http.pem

    # Allow serving media, static and other custom files
    <Directory /var/lib/lava/instances/lava/var/www>
        Options FollowSymLinks
        AllowOverride None
        Order allow,deny
        allow from all
    </Directory>

    # This is a small directory with just the index.html file that tells users
    # about this instance and has a link to application pages
    DocumentRoot /var/lib/lava/instances/lava/var/www/lava-server

    # These two aliases avoid processing images and static content via FastCGI.
    Alias /static /var/lib/lava/instances/lava/var/www/lava-server/static
    Alias /images /var/lib/lava/instances/lava/var/www/lava-server/images

    # uWSGI mount point. For this to work the uWSGI module needs to be loaded.
    # XXX: Perhaps we should just load it ourselves here, dunno.
    #<Location />
    #    SetHandler  uwsgi-handler
    #    uWSGISocket /srv/lava/instances/lava/run/uwsgi.sock
    #</Location>

    # FastCGI mount point. For this to work the FastCGI module needs to be loaded.
    FastCGIExternalServer fcgi -socket /var/run/lava-server-fcgi.sock -pass-header Authorization
    # Redirect all requests to the FastCGI socket.
    Alias / fcgi/

    # Make exceptions for static and media.
    # This allows Apache to serve those and offload the application server
    <Location /static>
        SetHandler  none
    </Location>
    # We don't need media files as those are private in our implementation

    # images folder for lava-dispatcher tarballs
    <Location /images>
        SetHandler  none
    </Location>
</VirtualHost>

* Enable the LAVA production site:

 $ sudo a2ensite lava
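* Depending on how Apache was installed, the SSL and FastCGI modules may also need to be enabled before the site will work. The module, package and service names below are the usual Debian ones and may differ on other setups:

 $ sudo apt-get install libapache2-mod-fastcgi  # provides mod_fastcgi (may require the non-free archive)
 $ sudo a2enmod ssl
 $ sudo a2enmod fastcgi
 $ sudo service apache2 restart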

If everything went well, it should now be possible to access the LAVA main web view at https://lava.public.domain.net or http://lava.domain (plain HTTP requests are redirected to HTTPS).

Setting up the LAVA Dispatcher

The LAVA Dispatcher component is responsible for executing the test run by controlling the target devices or slaves.

 $ sudo cp -f /usr/share/pyshared/lava_dispatcher/default-config/lava-dispatcher/device-defaults.conf /etc/xdg/lava-dispatcher/
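If the packaged defaults also ship a lava-dispatcher.conf template next to device-defaults.conf (path assumed by analogy with the command above), it can be copied into place the same way before editing:

 $ sudo cp -f /usr/share/pyshared/lava_dispatcher/default-config/lava-dispatcher/lava-dispatcher.conf /etc/xdg/lava-dispatcher/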

The following is an example of what the lava-dispatcher.conf file (in /etc/xdg/lava-dispatcher/) should look like:

LAVA_SERVER_IP = 192.168.101.12

# Location for rootfs/boot tarballs extracted from images
LAVA_IMAGE_TMPDIR=/var/lib/lava/instances/lava/var/www/lava-server/images

# URL where LAVA_IMAGE_TMPDIR can be accessed remotely
LAVA_IMAGE_URL= https://lava.public.domain.net/images

# Location on the device for storing test results.
LAVA_RESULT_DIR=/var/lib/lava/instances/lava/tmp

# Location for caching downloaded artifacts such as hwpacks and images
LAVA_CACHEDIR=/var/lib/lava/instances/lava/var/cache/lava-dispatcher

# This is the address and port of cache proxy service; format is like:
# LAVA_PROXY = http://192.168.1.10:3128/

# This url points to the version of lava-test to be installed with pip
#LAVA_TEST_URL = bzr+http://bazaar.launchpad.net/~le-chi-thu/lava-test/enabled-file-cache/#egg=lava-test

LAVA_TEST_DEB="lava-test"

# Python logging level to use
# 10 = DEBUG
# 20 = INFO
# 30 = WARNING
# 40 = ERROR
# Messages with a lower number than LOGGING_LEVEL will be suppressed
# LOGGING_LEVEL = 10

To allow the LAVA Dispatcher component to execute tests, it's necessary to add target (slave) devices to it. These devices are the machines that will actually run the tests.

In order to add a target device, it's necessary to specify two things: the device type and the device settings.

interrupt_boot_prompt = The highlighted entry will be executed automatically

interrupt_boot_command = c

image_boot_msg = Initializing cgroup subsys cpuset

boot_cmds = search --set=root --label testboot,
    linux /vmlinuz root=LABEL=testrootfs ro "console=ttyS0,115200n8" "elevator=cfq",
    initrd /initrd.img,
    boot

bootloader_prompt = grub>

This tells the dispatcher how to boot the images that will run on this type of target device. The following is a short description of each parameter used in this file:

device_type = i386

connection_command = slogin -t lavaconsole@dom0.public.domain.net -i /root/lava_identity console prato
hard_reset_command = slogin  lavaconsole@dom0.public.domain.net -i /root/lava_identity hard-reset prato

# Test image recognition string
TESTER_STR = root@vm1
tester_hostname = vm1

The following is a short description of each parameter used in this file:

It's important to note that the commands used above for the connection_command and hard_reset_command parameters are closely tied to Collabora's internal server infrastructure, which is not covered in this document.
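For reference, in the standard lava-dispatcher layout the two configuration fragments above are assumed to live in separate files under the dispatcher configuration directory, with the per-device file named after the device (paths shown for illustration only):

/etc/xdg/lava-dispatcher/device-types/i386.conf   # device type settings (boot_cmds, prompts, ...)
/etc/xdg/lava-dispatcher/devices/vm1.conf         # per-device settings (connection_command, ...)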

Creating LAVA Target devices or Slaves

The LAVA target device is a real machine, board or VM that boots the image to be tested and runs the necessary tests. Target devices can be considered workers or slaves.

The following are instructions to set up an i386 virtual machine called vm1 as a LAVA target device. XXX Placeholder: need to document how to create images.

Next, boot the new virtual machine and access its console shell. Now it's necessary to create some extra disk partitions on the device:

 $ sudo fdisk -S 63 -H 255 -c /dev/vda  # Use your VM hard drive name in place of /dev/vda

Command (m for help): p

Disk /dev/vda: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *          63      270334      135136    c  W95 FAT32 (LBA)
/dev/vda2          270336     2097151      913408   83  Linux

Command (m for help): n
Partition type:
   p   primary (2 primary, 0 extended, 2 free)
   e   extended
Select (default p): e
Partition number (1-4, default 3): 3
First sector (270335-16777215, default 270335): 2097152
Last sector, +sectors or +size{K,M,G} (2097152-16777215, default 16777215): 
Using default value 16777215

Command (m for help): n
Partition type:
   p   primary (2 primary, 1 extended, 1 free)
   l   logical (numbered from 5)
Select (default p): l
Adding logical partition 5
First sector (2099200-16777215, default 2099200): 
Using default value 2099200
Last sector, +sectors or +size{K,M,G} (2099200-16777215, default 16777215): +128M

Command (m for help): n
Partition type:
   p   primary (2 primary, 1 extended, 1 free)
   l   logical (numbered from 5)
Select (default p): l
Adding logical partition 6
First sector (2363392-16777215, default 2363392): 
Using default value 2363392
Last sector, +sectors or +size{K,M,G} (2363392-16777215, default 16777215): 
Using default value 16777215

Command (m for help): w
The partition table has been altered!

In order for the kernel to pick up the new partition table, reboot the target device:

 $ sudo reboot

Now it's necessary to format the newly-created partitions:

 $ sudo mkfs.vfat /dev/vda5 -n testboot
 $ sudo mkfs.ext3 -q /dev/vda6 -L testrootfs

Change the target device hostname to 'master':

 $ echo 'master' | sudo tee /etc/hostname
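If name-resolution warnings appear after the hostname change, it may also be necessary to add a matching entry to /etc/hosts (the 127.0.1.1 convention shown here is the usual Debian one):

 $ echo '127.0.1.1 master' | sudo tee -a /etc/hosts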

Increase Grub boot timeout to 15 seconds:

 $ sudo sed -i 's/timeout=[0-9\-]*/timeout=15/' /boot/grub/grub.cfg
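Note that grub.cfg is regenerated whenever update-grub runs (for example on kernel upgrades), so to make the change persistent the timeout can instead be set in /etc/default/grub (assuming GRUB 2):

 $ sudo sed -i 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=15/' /etc/default/grub
 $ sudo update-grub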

Reboot again and the target device should be ready to be used:

 $ sudo reboot

Configuring LAVA service

The LAVA service web view provides a set of options that can be configured via an administration panel. Through this panel it's possible to configure user access, where test results should be stored, reports, the devices to be used, and so on. Keep in mind that this administration panel can change some very important parts of the LAVA service, so only administrators should have permission to access it.

At this point it's necessary to create an admin user for LAVA:

 $ sudo lava-server manage createsuperuser
 Username (Leave blank to use 'root'): lava-admin
 E-mail address: lava-admin@public.domain.net
 Password: 
 Password (again): 
 Superuser created successfully.

With the user created, go to the top right corner of the LAVA web view and log in to the system. Once logged in, access the administration panel by clicking on the top right corner again.

Creating Users and Groups

[[File:QA-LAVA-Create-group.png|300px|thumb|right|LAVA - Create group screen]] [[File:QA-LAVA-Create-lava-auto.png|200px|thumb|right|LAVA - Create lava-auto user screen]]

Although guest users may be granted partial access to the LAVA web view, it's important to create accounts for the users who are going to use the service and grant them the necessary permissions according to their roles.

Create a normal user group that gives permission to view and submit jobs, and change their own user profile:

Create a user that will be in charge of automatically submitting periodic jobs:

It's also recommended that anyone who will submit jobs to the LAVA service have a LAVA user set up in the same way as the lava-auto user above.
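For completeness, groups and users can also be created from the command line through the Django management shell rather than the admin panel. The sketch below is only illustrative, and the permission codename used is an assumption that should be checked against the installed LAVA applications:

 $ sudo lava-server manage shell
 >>> from django.contrib.auth.models import Group, Permission, User
 >>> group, created = Group.objects.get_or_create(name='lava-users')
 >>> # 'add_testjob' is an assumed codename; list the real ones with
 >>> # Permission.objects.values_list('codename', flat=True)
 >>> group.permissions.add(Permission.objects.get(codename='add_testjob'))
 >>> user = User.objects.create_user('lava-auto', 'lava-auto@public.domain.net', 'changeme')
 >>> user.groups.add(group)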

Creating Bundle Streams

Bundle streams are places to store specific test results for further analysis; they can be thought of as a sort of directory. Thus, it's important to create well-defined bundle streams for each type of test result, depending on how they are best filtered and stored.

In Collabora's setup there are two main types of bundle streams:

# For each image release and flavor a bundle stream is created. Depending on how frequently the tests are executed, additional bundle streams are also created. Examples:
#:* /public/personal/lava-auto/debian-sid-i386-daily-build/
#:* /public/personal/lava-auto/debian-testing-i386-daily-build/
#:* /public/personal/lava-auto/debian-stable-i386-daily-build/
# Each normal LAVA user who will submit jobs should have their own personal bundle stream. There, the user can submit their test jobs and experiment without pushing potentially faulty jobs to the official bundle streams, which evaluate the images periodically. Examples:
#:* /public/personal/new_user_foo/
#:* /public/personal/new_user_bar/

[[File:QA-LAVA-Create-bundle-stream.png|300px|thumb|right|Create bundle stream screen]]

To create one of the official bundle streams described above:

The procedure above has to be done for each needed official bundle stream. After that, the lava-auto user will be able to store test results on these bundle streams.

To create a personal user bundle stream as described above:

At this point, New_User_Foo will be able to store test results on their personal bundle stream.

Adding target devices

[[File:QA-LAVA-Create-device-type.png|300px|thumb|right|Create device type screen]] [[File:QA-LAVA-Create-device.png|240px|thumb|right|Create device screen]]

In order to control the target devices configured in the Setting up the LAVA Dispatcher section, it's necessary to add them in the administration panel.

Configuring the device type:

Note that the device type name must be exactly the same name used when configuring the LAVA Dispatcher component.

Configuring the target device:

The device hostname must also be exactly the same as the file name (without the file extension) used when configuring the LAVA Dispatcher component.

Setting up daily testing

Using some LAVA tools and Collabora scripts, it's possible to configure a machine to periodically submit jobs (tests) to the LAVA production service, so that a set of tests can be executed daily, weekly, monthly, and so on.

It is assumed that the machine used to submit periodic tests is the same machine that hosts the production LAVA service.

Authenticating user on LAVA web service

In order to submit jobs to a LAVA service, it's necessary to be authenticated against the service's web view. For periodic testing, the lava-auto user will be used.

At this point the lava-auto token is already created. Now it's necessary to create a lava-auto system user on the server machine and add the generated token to its keyring.

 $ sudo adduser lava-auto

 $ sudo su - lava-auto

 $ echo -e "[backend]\ndefault-keyring=keyring.backend.UncryptedFileKeyring" > ~/keyringrc.cfg

* Add the generated token to the keyring (lava-tool will prompt for the token to be pasted):

 $ lava-tool auth-add https://lava-auto@lava.domain/RPC2/

* Exit the lava-auto user:

 $ exit

If everything went well, the lava-auto user is now able to submit jobs to the production LAVA service.
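To verify that the token works, a job file can also be submitted manually with lava-tool (the submit-job subcommand comes from lava-scheduler-tool; job.json here is a hypothetical job file):

 $ lava-tool submit-job https://lava-auto@lava.domain/RPC2/ job.json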

Creating Test profiles

In order to submit jobs to the LAVA production service, we will use the Lava Job Create tool. Lava Job Create (a.k.a. lava-job-create and l-j-c) is a tool written in Python that uses templates to generate LAVA Job Files.

 $ sudo apt-get install lava-job-create lava-job-templates

The lava-job-templates package contains all job templates that Collabora is currently using.
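To see exactly which templates the package ships, its contents can be listed:

 $ dpkg -L lava-job-templates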

 $ sudo mkdir -p /etc/lava-profiles.d/

For each type of image and test frequency, a new profile file is created. The following ones are currently used for SID images:

These profiles set all the parameters and tests necessary to run tests on each type of image at each test frequency. Below is the definition of one of the official profiles described above:

[settings]
testcases =
  job-boot,
  job-halt

[variables]
device_type = i386
lava_rpc_server = https://lava.domain/RPC2/
lava_rpc_user = lava-auto
bundle_stream = /public/personal/lava-auto/debian-sid-i386-daily-build/
test_definitions_deb = lava-test-definitions

[deploy_parameters]
baseurl = http://images.domain/unstable/latest/
hwpack = %LATEST%
rootfs = %LATEST%
hwpack_regex = hwpack_(?P<type>debian-sid-i386-qa)_(?P<date>[0-9]+)-(?P<time>[0-9]+)_(?P<arch>[a-z-0-9]+)_supported.tar.gz
rootfs_regex = ospack_(?P<type>debian-sid-i386)_(?P<date>[0-9]+)-(?P<time>[0-9]+).tar.gz

The following is a short description of each parameter used in this file:

[settings]

[variables]

[deploy_parameters]

With a profile properly set up, it's possible to generate and submit the set of jobs like this:

 $ lava-job-create debian-sid-i386-daily --submit
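For reference, the job files generated and submitted by the command above are assumed to follow the JSON job format used by LAVA at the time; a rough, purely illustrative sketch (all values hypothetical) might look like:

{
  "job_name": "debian-sid-i386-daily",
  "device_type": "i386",
  "timeout": 18000,
  "actions": [
    {
      "command": "deploy_linaro_image",
      "parameters": {
        "hwpack": "http://images.domain/unstable/latest/hwpack_debian-sid-i386-qa_..._supported.tar.gz",
        "rootfs": "http://images.domain/unstable/latest/ospack_debian-sid-i386_....tar.gz"
      }
    },
    {
      "command": "lava_test_run",
      "parameters": { "test_name": "job-boot" }
    },
    {
      "command": "submit_results",
      "parameters": {
        "server": "https://lava.domain/RPC2/",
        "stream": "/public/personal/lava-auto/debian-sid-i386-daily-build/"
      }
    }
  ]
}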

Configuring periodic tests runs

To run tests periodically, it's necessary to rely on a system task scheduler. For that, Collabora is currently using Cron.

To set up cron to run the jobs for the official images:

#
# cron-jobs for Lava Auto 
#

MAILTO=lava-auto
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/X11R6/bin"
LAVA_JOB_CREATE=/usr/bin/lava-job-create
LOG_FILE=/var/log/lava-auto.log

# Execute daily LAVA jobs.
30 6   * * *    lava-auto $LAVA_JOB_CREATE debian-sid-i386-daily --submit --log-level INFO --log-file $LOG_FILE
30 6   * * *    lava-auto $LAVA_JOB_CREATE debian-testing-i386-daily --submit --log-level INFO --log-file $LOG_FILE
30 6   * * *    lava-auto $LAVA_JOB_CREATE debian-stable-i386-daily --submit --log-level INFO --log-file $LOG_FILE

Adjust the execution frequency accordingly.
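Since the entries above include a user field (lava-auto), they are assumed to be installed as a system crontab fragment rather than a per-user crontab, for example (file name illustrative):

 $ sudo editor /etc/cron.d/lava-auto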