See also: https://wiki.linaro.org/Platform/LAVA/LAVA_packaging

NOTE: Work in progress. LAVA is not yet ready for Debian.

Introduction

This document describes the steps required to deploy a LAVA production instance for the purpose of duplicating the Automated Testing Service.

Assumptions

The following knowledge is assumed for this document and is thus out of scope:

  • General network administration skills
  • General Linux system administration capabilities and in particular:
    • Ability to install and maintain a Debian server, both on real machines and on virtualized hardware
    • Ability to configure Apache2 virtual hosts

The following hardware and network is assumed to be available:

  • Dedicated network for the build infrastructure which provides the following features:
    • DHCP server to provide static IP addresses for named machines
    • DNS server that handles the .domain domain and resolves <service>.domain to the correct IP addresses

    • DNS server that handles the .public.domain.net domain and resolves <service>.public.domain.net to the correct IP addresses

  • Hardware suitable for the task
    • One (virtual) machine running Debian server for the LAVA main server (DNS name: vm0.public.domain.net, lava.public.domain.net, lava.domain)

    • One (virtual) machine running LAVA Master Image for test running (DNS names: vm1.vrns)

Installation of LAVA main server

The LAVA main server is where the automated testing service is hosted. It is responsible for downloading the image to be tested, controlling the test run and providing a web user interface for viewing test results and administering the service.

Deployment

  • Ensure that Postgresql is installed and running:

 $ sudo apt-get install postgresql postgresql-9.1
  • Install LAVA service packages

 $ sudo apt-get install lava-dashboard lava-dispatcher lava-scheduler lava-scheduler-tool lava-server lava-tool linaro-image-tools
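
The two steps above can be sanity-checked before continuing. The helper below is a small sketch (the package list is taken from the commands above) that reports the install state of each package via dpkg-query:

```shell
# Report whether each required package is installed; dpkg-query prints
# "install ok installed" for packages in the installed state.
check_installed() {
    for pkg in "$@"; do
        if dpkg-query -W -f='${Status}\n' "$pkg" 2>/dev/null \
                | grep -q 'install ok installed'; then
            echo "$pkg: installed"
        else
            echo "$pkg: MISSING"
        fi
    done
}
check_installed postgresql lava-server lava-dispatcher lava-scheduler
```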

Setting up Apache server

In order to get the LAVA web view working, it's necessary to configure and enable the server's HTTP service.

  • Although the LAVA packages should already have installed Apache2 as a dependency, check that it's installed and running:

 $ sudo apt-get install apache2
  • Edit /etc/apache2/sites-available/lava:
    • Replace all LAVADOMAIN references with the hostname going to be used. In this case: lava.public.domain.net

    • Point both SSL parameters, SSLCertificateFile and SSLCertificateKeyFile, to the correct certificate path.

The result should be a file similar to this:

<VirtualHost *:80>
    ServerAdmin webmaster@public.domain.net
    ServerName lava.public.domain.net

    Redirect permanent / https://lava.public.domain.net

    # This is a small directory with just the index.html file that tells
    # users about this instance and has a link to application pages
    DocumentRoot /var/lib/lava/instances/lava/var/www/lava-server
</VirtualHost>

<VirtualHost *:443>
    ServerAdmin webmaster@public.domain.net
    ServerName lava.public.domain.net

    # A self-signed (snakeoil) certificate can be created by installing
    # the ssl-cert package. See
    # /usr/share/doc/apache2.2-common/README.Debian.gz for more info.
    # If both key and certificate are stored in the same file, only the
    # SSLCertificateFile directive is needed.
    SSLEngine On
    SSLCertificateFile /etc/ssl/certs/vm0.public.domain.net-http.pem
    SSLCertificateKeyFile /etc/ssl/private/vm0.public.domain.net-http.pem

    # Allow serving media, static and other custom files
    <Directory /var/lib/lava/instances/lava/var/www>
        Options FollowSymLinks
        AllowOverride None
        Order allow,deny
        allow from all
    </Directory>

    # This is a small directory with just the index.html file that tells users
    # about this instance and has a link to application pages
    DocumentRoot /var/lib/lava/instances/lava/var/www/lava-server

    # These two aliases avoid processing images and static content via FastCGI.
    Alias /static /var/lib/lava/instances/lava/var/www/lava-server/static
    Alias /images /var/lib/lava/instances/lava/var/www/lava-server/images

    # uWSGI mount point. For this to work the uWSGI module needs to be loaded.
    # XXX: Perhaps we should just load it ourselves here, dunno.
    #<Location />
    #    SetHandler  uwsgi-handler
    #    uWSGISocket /srv/lava/instances/lava/run/uwsgi.sock
    #</Location>

    # FastCGI mount point. For this to work the FastCGI module needs to be loaded.
    FastCGIExternalServer fcgi -socket /var/run/lava-server-fcgi.sock -pass-header Authorization
    # Redirect all requests to the FastCGI socket.
    Alias / fcgi/

    # Make exceptions for static and media.
    # This allows Apache to serve those and offload the application server
    <Location /static>
        SetHandler  none
    </Location>
    # We don't need media files as those are private in our implementation

    # images folder for lava-dispatcher tarballs
    <Location /images>
        SetHandler  none
    </Location>
</VirtualHost>
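
The vhost above depends on Apache modules that are not enabled by default: SSLEngine needs mod_ssl, and FastCGIExternalServer needs mod_fastcgi (from the libapache2-mod-fastcgi package). A quick sketch to enable them and verify the configuration parses:

```shell
# Enable the modules the vhost relies on, then check the syntax.
sudo a2enmod ssl fastcgi
sudo apache2ctl configtest    # should report "Syntax OK"
```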

  • Enable the LAVA production site and reload Apache:

 $ sudo a2ensite lava
 $ sudo service apache2 reload

If everything went well, it's now possible to access the LAVA main web view at: https://lava.public.domain.net or http://lava.domain
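
A quick way to verify both vhosts from the command line (a sketch; the hostnames are the placeholders used throughout this document): the HTTP vhost should answer with a permanent redirect, and the HTTPS vhost should serve the index page:

```shell
# -I fetches headers only; -k accepts the self-signed certificate.
curl -sI  http://lava.public.domain.net/  | head -n 1   # expect a 301 redirect
curl -skI https://lava.public.domain.net/ | head -n 1   # expect a 200 response
```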

Setting up the LAVA Dispatcher

The LAVA Dispatcher component is responsible for executing the test run by controlling the target devices or slaves.

  • Copy the default settings to LAVA Dispatcher configuration directory:

 $ sudo cp -f /usr/share/pyshared/lava_dispatcher/default-config/lava-dispatcher/device-defaults.conf /etc/xdg/lava-dispatcher/
  • Edit /etc/xdg/lava-dispatcher/lava-dispatcher.conf:

    • Add a LAVA_SERVER_IP with the current machine IP address
    • Change LAVA_IMAGE_URL to use the machine domain. In this case, https://lava.public.domain.net/images

    • If proxy connection is needed set LAVA_PROXY, otherwise leave it commented out.
    • Set LAVA_TEST_DEB="lava-test" if it's not already

The following is an example of what a lava-dispatcher.conf file should look like:

LAVA_SERVER_IP = 192.168.101.12

# Location for rootfs/boot tarballs extracted from images
LAVA_IMAGE_TMPDIR=/var/lib/lava/instances/lava/var/www/lava-server/images

# URL where LAVA_IMAGE_TMPDIR can be accessed remotely
LAVA_IMAGE_URL= https://lava.public.domain.net/images

# Location on the device for storing test results.
LAVA_RESULT_DIR=/var/lib/lava/instances/lava/tmp

# Location for caching downloaded artifacts such as hwpacks and images
LAVA_CACHEDIR=/var/lib/lava/instances/lava/var/cache/lava-dispatcher

# This is the address and port of cache proxy service; format is like:
# LAVA_PROXY = http://192.168.1.10:3128/

# This url points to the version of lava-test to be installed with pip
#LAVA_TEST_URL = bzr+http://bazaar.launchpad.net/~le-chi-thu/lava-test/enabled-file-cache/#egg=lava-test

LAVA_TEST_DEB="lava-test"

# Python logging level to use
# # 10 = DEBUG
# # 20 = INFO
# # 30 = WARNING
# # 40 = ERROR
# # Messages with a lower number than LOGGING_LEVEL will be suppressed
# LOGGING_LEVEL = 10

To allow the LAVA Dispatcher component to execute the tests, it's necessary to add target or slave devices to it. These devices are the machines that will actually run the tests.

In order to add a target device, it's necessary to specify two things: the device type and the device settings.

  • Create a new device type in /etc/xdg/lava-dispatcher/device-types/i386.conf:

interrupt_boot_prompt = The highlighted entry will be executed automatically

interrupt_boot_command = c

image_boot_msg = Initializing cgroup subsys cpuset

boot_cmds = search --set=root --label testboot,
    linux /vmlinuz root=LABEL=testrootfs ro "console=ttyS0,115200n8" "elevator=cfq",
    initrd /initrd.img,
    boot

bootloader_prompt = grub>

This tells the dispatcher how to properly boot images which will run on this type of target device. The following is a short description of each parameter used in this file:

  • interrupt_boot_prompt: Identify which string to watch in order to stop the image boot process

  • interrupt_boot_command: Specify the string or character used to stop the image boot process

  • image_boot_msg: Specify which string defines that the image is currently booting

  • boot_cmds: Sequence of command strings used to boot the image going to be tested

  • bootloader_prompt: String to watch in order to know that the bootloader is ready to accept the boot commands

  • Create a new device in /etc/xdg/lava-dispatcher/devices/i386-vm-2.conf:

device_type = i386

connection_command = slogin -t lavaconsole@dom0.public.domain.net -i /root/lava_identity console prato
hard_reset_command = slogin  lavaconsole@dom0.public.domain.net -i /root/lava_identity hard-reset prato

# Test image recognition string
TESTER_STR = root@vm1
tester_hostname = vm1

The following is a short description of each parameter used in this file:

  • device_type: The type of this device. It must be one of the ones already specified inside the device-types/ directory

  • connection_command: Command to use in order to connect to the target device via serial connection

  • hard_reset_command: Command to use when the target device needs to be hard reset. It is normally used when the given device assumes an unexpected state and cannot be accessed anymore

  • TESTER_STR: String used to identify that the target device is completely booted with the image going to run the tests

  • tester_hostname: Specify the target device hostname. Ideally it should be the same as the machine DNS name

It's important to note that the commands used above for the connection_command and hard_reset_command parameters are tightly tied to Collabora's internal server infrastructure, which is not covered in this document.

Creating LAVA Target devices or Slaves

The LAVA target device is a real machine, board or VM that boots the image to be tested and runs the necessary tests. Target devices can be considered workers or slaves.

Following are instructions to set up an i386 virtual machine called vm1 as a LAVA target device. XXX Place holder. Need to document how to create images.

Next, boot the new virtual machine and access its console shell. Now it's necessary to create some extra disk partitions on the device:

  • Use fdisk to create new partitions:

 $ sudo fdisk -S 63 -H 255 -c /dev/vda  # Use your VM hard drive name in place of /dev/vda
  • Printing the current partition table:

Command (m for help): p

Disk /dev/vda: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *          63      270334      135136    c  W95 FAT32 (LBA)
/dev/vda2          270336     2097151      913408   83  Linux
  • Create a new extended partition:

Command (m for help): n
Partition type:
   p   primary (2 primary, 0 extended, 2 free)
   e   extended
Select (default p): e
Partition number (1-4, default 3): 3
First sector (270335-16777215, default 270335): 2097152
Last sector, +sectors or +size{K,M,G} (2097152-16777215, default 16777215): 
Using default value 16777215
  • Create a new logical partition with 128M:

Command (m for help): n
Partition type:
   p   primary (2 primary, 1 extended, 1 free)
   l   logical (numbered from 5)
Select (default p): l
Adding logical partition 5
First sector (2099200-16777215, default 2099200): 
Using default value 2099200
Last sector, +sectors or +size{K,M,G} (2099200-16777215, default 16777215): +128M
  • Create a new logical partition using the rest of the available space:

Command (m for help): n
Partition type:
   p   primary (2 primary, 1 extended, 1 free)
   l   logical (numbered from 5)
Select (default p): l
Adding logical partition 6
First sector (2363392-16777215, default 2363392): 
Using default value 2363392
Last sector, +sectors or +size{K,M,G} (2363392-16777215, default 16777215): 
Using default value 16777215
  • Write the partition table

Command (m for help): w
The partition table has been altered!

In order to see the new partition table, reboot the target device:

 $ sudo reboot

Now it's necessary to format the newly-created partitions:

 $ sudo mkfs.vfat /dev/vda5 -n testboot
 $ sudo mkfs.ext3 -q /dev/vda6 -L testrootfs

Change the target device hostname to 'master':

 $ echo 'master' | sudo tee /etc/hostname

Increase Grub boot timeout to 15 seconds:

 $ sudo sed -i 's/timeout=[0-9\-]*/timeout=15/' /boot/grub/grub.cfg
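
Note that /boot/grub/grub.cfg is regenerated whenever update-grub runs (for example on a kernel update), so the edit above can be silently reverted. A more durable sketch is to change the default in /etc/default/grub and regenerate:

```shell
# Persist the timeout in the Grub defaults, then rebuild grub.cfg.
sudo sed -i 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=15/' /etc/default/grub
sudo update-grub
```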

Reboot again and the target device should be ready to be used:

 $ sudo reboot

Configuring LAVA service

The LAVA service web view provides a set of options that can be configured via an administration panel. Through this panel, it's possible to configure user access, where test results should be stored, reports, the devices to be used, etc. Keep in mind that this panel can change some very important parts of the LAVA service, so only administrators should have permission to access it.

At this point it's necessary to create an admin user for LAVA:

 $ sudo lava-server manage createsuperuser
 Username (Leave blank to use 'root'): lava-admin
 E-mail address: lava-admin@public.domain.net
 Password: 
 Password (again): 
 Superuser created successfully.

With the user created, go to the top right corner of the LAVA web view and log in to the system. Once logged in, access the Administration panel by clicking on the top right corner again.

Creating Users and Groups

[[File:QA-LAVA-Create-group.png|300px|thumb|right|LAVA - Create group screen]] [[File:QA-LAVA-Create-lava-auto.png|200px|thumb|right|LAVA - Create lava-auto user screen]]

Although guest users may be granted partial access to the LAVA web view, it's important to create accounts for the users of the service and to grant them the necessary permissions according to their roles.

Create a normal user group that gives permission to view and submit jobs, and change their own user profile:

  • On Admin panel click on: Auth and then Groups Add

    • Name: Collabora

    • Permissions:

      • auth | user | Can change user
      • lava_scheduler_app | test job | Can add test job
    • Click on Save

Create a user that will be in charge of automatically submitting periodic jobs:

  • On Admin panel click on: Auth and then Users Add

    • Username: lava-auto

    • Set the password
    • Click on Save and continue editing

  • Extended user settings:
    • First name: Lava Bot

    • Email: lavabot@public.domain.net

    • Active: Checked

    • Staff status: Unchecked

    • Superuser status: Unchecked

    • Groups: Collabora

    • Click on Save

It's also recommended that anyone capable of submitting jobs to the LAVA service have a LAVA user set up in the same way as the lava-auto user above.

Creating Bundle Streams

Bundle Streams are places to store specific test results for further analysis. They can be thought of as a sort of directory. Thus, it's important to create well defined bundle streams that will store each type of test result, depending on the best way to filter and store them.

On Collabora's setup there are two main types of bundle streams:

  1. For each image release and flavor a bundle stream is created. Depending on the frequency with which the tests are executed, further bundle streams may also be created. Examples:
     • /public/personal/lava-auto/debian-sid-i386-daily-build/
     • /public/personal/lava-auto/debian-testing-i386-daily-build/
     • /public/personal/lava-auto/debian-stable-i386-daily-build/
  2. Each normal LAVA user who will submit jobs should have their own personal bundle stream. There, the user can submit test jobs and experiment without pushing potentially-faulty jobs to the official bundle streams which evaluate the images periodically. Examples:
     • /public/personal/new_user_foo/
     • /public/personal/new_user_bar/

[[File:QA-LAVA-Create-bundle-stream.png|300px|thumb|right|Create bundle stream screen]]

To create one of the official bundle streams described above:

  • On Admin panel click on: Dashboard_App and then Bundle streams Add

    • Name: Debian SID i386 - Daily Build

    • Slug: Leave it to be auto generated

    • User: Lava-auto

    • Group: none

    • Is public: Checked

    • Is anonymous: Unchecked

    • Click on Save

The procedure above has to be done for each needed official bundle stream. After that, the lava-auto user will be able to store test results on these bundle streams.

To create a personal user bundle stream as described above:

  • On Admin panel click on: Dashboard_App and then Bundle streams Add

    • Name: New_User_Foo's personal stream

    • Slug: Leave it to be auto generated and then delete the content, leaving it empty

    • User: New_User_Foo

    • Group: none

    • Is public: Checked

    • Is anonymous: Unchecked

    • Click on Save

At this point, New_User_Foo will be able to store test results on their personal bundle stream.

Adding target devices

[[File:QA-LAVA-Create-device-type.png|300px|thumb|right|Create device type screen]] [[File:QA-LAVA-Create-device.png|240px|thumb|right|Create device screen]]

In order to control the target devices configured in the Setting up the LAVA Dispatcher section, it's necessary to add them in the Administration panel.

Configuring the device type:

  • On Admin panel click on: Lava_Scheduler_App and then Device types Add

    • Name: i386

    • Health check job: Empty

    • Use celery: Unchecked

    • Click on Save

Note: the device type name must be exactly the same name used when configuring the LAVA Dispatcher component.

Configuring the device target:

  • On Admin panel click on: Lava_Scheduler_App and then Device Add

    • Hostname: i386-vm-2

    • Device type: i386

    • Leave all other fields as they are
  • Click on Save

The device hostname must also be exactly the same as the file name (without the file extension) used when configuring the LAVA Dispatcher component.

Setting up daily testing

Using some LAVA tools and Collabora scripts, it's possible to configure a machine to periodically submit jobs (or tests) to the LAVA production service. With that, it is possible to configure a set of tests to be executed daily, weekly, monthly, etc.

It is assumed the machine used to submit periodic tests is the same machine that hosts the production LAVA service.

Authenticating user on LAVA web service

In order to submit jobs to a LAVA service, it's necessary to log in to the service web view. For periodic testing, the lava-auto user is going to be used.

  • Authenticate on the service web view as lava-auto user
  • Click on top menu API and then Authentication Tokens

    • Click on Create new token

      • Description: Token used by lava-auto user to submit tests periodically.
      • Click on Save

    • Click on Display token / secret

      • Copy token / secret to the clipboard

At this point the lava-auto token is already created. Now it's necessary to create a new lava-auto user on the server machine and add the given token to it.

  • Create a regular Linux user for lava-auto:

 $ sudo adduser lava-auto
  • Switch to the lava-auto user

 $ sudo su - lava-auto
  • Before storing the token inside the keyring, create a configuration file specifying the use of the simpler keyring storage backend:

 $ echo -e "[backend]\ndefault-keyring=keyring.backend.UncryptedFileKeyring" > ~/keyringrc.cfg

  • Add the generated token to the keyring:

 $ lava-tool auth-add https://lava-auto@lava.domain/RPC2/

  • Exit the lava-auto user:

 $ exit

If everything went fine, the lava-auto user is now able to submit jobs to the production LAVA service.

Creating Test profiles

In order to submit jobs to the LAVA production service, we will use the Lava Job Create tool. Lava Job Create (a.k.a. lava-job-create and l-j-c) is a tool written in Python that uses templates to generate LAVA Job Files.

  • Ensure that the lava-job-create tool is installed:

 $ sudo apt-get install lava-job-create lava-job-templates

The lava-job-templates package contains all job templates that Collabora is currently using.

  • Make sure the /etc/lava-profiles.d directory exists:

 $ sudo mkdir -p /etc/lava-profiles.d/

For each type of image and testing a new profile file will be created. The following ones are currently used for SID images:

  • debian-sid-i386-daily.conf

  • debian-testing-i386-daily.conf

  • debian-stable-i386-daily.conf

These profiles set all the parameters and tests necessary to run tests on each type of image and test frequency. Below is a definition of one of the official profiles described above:

  • Edit /etc/lava-profiles.d/debian-sid-i386-daily.conf

[settings]
testcases =
  job-boot,
  job-halt

[variables]
device_type = i386
lava_rpc_server = https://lava.domain/RPC2/
lava_rpc_user = lava-auto
bundle_stream = /public/personal/lava-auto/debian-sid-i386-daily-build/
test_definitions_deb = lava-test-definitions

[deploy_parameters]
baseurl = http://images.domain/unstable/latest/
hwpack = %LATEST%
rootfs = %LATEST%
hwpack_regex = hwpack_(?P<type>debian-sid-i386-qa)_(?P<date>[0-9]+)-(?P<time>[0-9]+)_(?P<arch>[a-z-0-9]+)_supported.tar.gz
rootfs_regex = ospack_(?P<type>debian-sid-i386)_(?P<date>[0-9]+)-(?P<time>[0-9]+).tar.gz

The following is a short description of each parameter used in this file:

[settings]

  • testcases: Specify which jobs/test cases will run on this type of image. Note, the list above is just a snapshot of what is currently being used, but it can change at any time

[variables]

  • device_type: The type of device that should run the given image to be tested

  • lava_rpc_server: The LAVA service RPC2 url

  • lava_rpc_user: The LAVA user used to run the specified tests. It should be a user already authenticated (with a token) on the web view

  • bundle_stream: Where the test results should go

  • test_definitions_deb: The name(s) of the deb package(s) that contain the test definitions and should be installed on the image to be tested

[deploy_parameters]

  • baseurl: URL where the script should look for the latest images

  • hwpack: The hwpack path/url that is going to be used. In case %LATEST% is set, the latest version will be picked

  • rootfs: The rootfs path/url that is going to be used. In case %LATEST% is set, the latest version will be picked

  • hwpack_regex: Expression that defines how to pick the latest version of the hwpack file

  • rootfs_regex: Expression that defines how to pick the latest version of the rootfs file
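
As a purely hypothetical illustration (this is not lava-job-create's implementation), %LATEST% selection can work because the date and time groups in the file names sort lexically, so the newest build is simply the last match:

```shell
# Two fake hwpack files; the 2013-05-22 build should be selected.
cd "$(mktemp -d)"
touch hwpack_debian-sid-i386-qa_20130301-0630_i386_supported.tar.gz
touch hwpack_debian-sid-i386-qa_20130522-0630_i386_supported.tar.gz
latest=$(ls -1 | grep -E '^hwpack_debian-sid-i386-qa_[0-9]+-[0-9]+_[a-z0-9-]+_supported\.tar\.gz$' \
         | sort | tail -n 1)
echo "$latest"   # hwpack_debian-sid-i386-qa_20130522-0630_i386_supported.tar.gz
```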

With a profile properly set, it's possible to run the set of jobs like this:

 $ lava-job-create debian-sid-i386-daily --submit

Configuring periodic tests runs

To run tests periodically, it's necessary to rely on a system task scheduler. For that, Collabora is currently using Cron.

To set cron to run jobs for the official images:

  • Edit /etc/cron.d/lava-auto:

#
# cron-jobs for Lava Auto 
#

MAILTO=lava-auto
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/X11R6/bin"
LAVA_JOB_CREATE=/usr/bin/lava-job-create
LOG_FILE=/var/log/lava-auto.log

# Execute daily LAVA jobs.
30 6   * * *    lava-auto $LAVA_JOB_CREATE debian-sid-i386-daily --submit --log-level INFO --log-file $LOG_FILE
30 6   * * *    lava-auto $LAVA_JOB_CREATE debian-testing-i386-daily --submit --log-level INFO --log-file $LOG_FILE
30 6   * * *    lava-auto $LAVA_JOB_CREATE debian-stable-i386-daily --submit --log-level INFO --log-file $LOG_FILE

Adjust the execution frequency accordingly.
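
For example, to run one of the profiles weekly instead (every Monday at 06:30; the debian-sid-i386-weekly profile name is hypothetical), the entry would look like:

```shell
30 6   * * 1    lava-auto $LAVA_JOB_CREATE debian-sid-i386-weekly --submit --log-level INFO --log-file $LOG_FILE
```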