Upstream Kernel CI Project

This project deals with various aspects of upstream kernel validation: build testing nearly all ARM, ARM64, and X86 kernel configurations; boot testing those configurations on real and emulated platforms; and continuously reporting the results in a concise, easy-to-consume fashion for each merge on an upstream tree.

What's New

This section will describe recent updates to the project.

Upstream Trees

The table below describes the architectures and tree/branch combinations this project intends to validate. A ✓ indicates that the architecture is validated for that tree/branch.

Tree         | Branch                              | ARM | ARM64 | X86
-------------+-------------------------------------+-----+-------+----
mainline     | master                              | ✓   | ✓     | ✓
next         | master                              | ✓   | ✓     | ✓
arm-soc      | for-next                            | ✓   | ✓     | ✓
arm-soc      | to-build                            | ✓   | ✓     | ✓
stable       | linux-4.0.y                         | ✓   | ✓     | ✓
stable       | linux-3.19.y                        | ✓   | ✓     | ✓
stable       | linux-3.18.y                        | ✓   | ✓     | ✓
stable       | linux-3.17.y                        | ✓   | ✓     | ✓
stable       | linux-3.16.y                        | ✓   | ✓     | ✓
stable       | linux-3.15.y                        | ✓   | ✓     | ✓
stable       | linux-3.13.y                        | ✓   | ✓     | ✓
stable       | linux-3.12.y                        | ✓   | ✓     | ✓
stable       | linux-3.11.y                        | ✓   | ✓     | ✓
stable       | linux-3.10.y                        | ✓   | ✓     | ✓
stable-queue | queue-4.0                           | ✓   | ✓     | ✓
stable-queue | queue-3.14                          | ✓   | ✓     | ✓
stable-queue | queue-3.10                          | ✓   | ✓     | ✓
omap         | for-next                            | ✓   | ✓     | ✓
rmk          | master                              | ✓   | ✓     | ✓
lsk          | linux-linaro-lsk-v3.10              | ✓   | ✓     | ✓
lsk          | linux-linaro-lsk-v3.10-test         | ✓   | ✓     | ✓
lsk          | linux-linaro-lsk-v3.10-rt           | ✓   | ✓     | ✓
lsk          | linux-linaro-lsk-v3.10-rt-test      | ✓   | ✓     | ✓
lsk          | linux-linaro-lsk-v3.14              | ✓   | ✓     | ✓
lsk          | linux-linaro-lsk-v3.14-test         | ✓   | ✓     | ✓
lsk          | linux-linaro-lsk-v3.14-rt           | ✓   | ✓     | ✓
lsk          | linux-linaro-lsk-v3.14-rt-test      | ✓   | ✓     | ✓
lsk          | linux-linaro-lsk-v3.18              | ✓   | ✓     | ✓
lsk          | linux-linaro-lsk-v3.18-test         | ✓   | ✓     | ✓
lsk          | linux-linaro-lsk-v3.18-rt           | ✓   | ✓     | ✓
lsk          | linux-linaro-lsk-v3.18-rt-test      | ✓   | ✓     | ✓
lsk          | linux-linaro-lsk-v3.18-android      | ✓   | ✓     | ✓
lsk          | linux-linaro-lsk-v3.18-android-test | ✓   | ✓     | ✓
qcom-lt      | integration-linux-qcomlt            | ✓   | ✓     | ✓
samsung      | for-next                            | ✓   | ✓     | ✓
mturquette   | eas-next                            | ✓   | ✓     | ✓
khilman      | to-build                            | ✓   | ✓     | ✓
dlezcano     | kevin-bot                           | ✓   | ✓     | ✓
tbaker       | to-build                            | ✓   | ✓     | ✓
collabora    | for-master                          | ✓   | ✓     | ✓
collabora    | for-next                            | ✓   | ✓     | ✓
collabora    | for-kernelci                        | ✓   | ✓     | ✓
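Any of these tree/branch combinations can be fetched locally for testing. A sketch using the next tree listed above (the kernel.org URL is the standard linux-next location, not something this document specifies; substitute the tree of interest):

      # Shallow clone of the master branch of linux-next
      git clone --depth 1 -b master \
          git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git
      cd linux-next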

Upstream Kernel Configurations

The table below describes the kernel configurations this project intends to validate.

Architecture | Configurations
-------------+-------------------------------------------
arm          | all
arm          | multi_v7_defconfig+CONFIG_ARM_LPAE=y
arm          | multi_v7_defconfig+CONFIG_CPU_BIG_ENDIAN=y
arm          | multi_v7_defconfig+CONFIG_PROVE_LOCKING=y
arm          | multi_v7_defconfig+CONFIG_OF_UNITTEST=y
arm64        | all
arm64        | defconfig+CONFIG_OF_UNITTEST=y
x86          | all
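The defconfig+CONFIG_FOO=y notation above means the named defconfig with the extra option forced on. A minimal sketch of reproducing one of these configurations by hand, assuming an ARM cross-compiler is installed (the scripts/config helper ships with the kernel source):

      export ARCH=arm
      export CROSS_COMPILE=arm-linux-gnueabihf-
      make multi_v7_defconfig
      # Force the extra option on, then let Kconfig resolve any dependencies
      ./scripts/config --enable CONFIG_ARM_LPAE
      make olddefconfig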

Upstream Platforms

The table below describes the platforms this project intends to validate. Note that the upstream platform names use the kernel device tree naming conventions where appropriate. Don't see your platform here? Please e-mail the project admins.

Board                           | Architecture | SoC Family
--------------------------------+--------------+-----------
alpine-db                       | arm          | alpine
at91-sama5d3_xplained           | arm          | at91
sama5d35ek                      | arm          | at91
bcm28155-ap                     | arm          | bcm
bcm2835-rpi                     | arm          | bcm
da850-evm                       | arm          | davinci
dm365evm                        | arm          | davinci
exynos5800-peach-pi             | arm          | exynos
exynos5422-odroidxu3            | arm          | exynos
exynos5420-arndale-octa         | arm          | exynos
exynos5410-odroid-xu            | arm          | exynos
exynos5250-snow                 | arm          | exynos
exynos5250-arndale              | arm          | exynos
exynos4412-odroidx2             | arm          | exynos
exynos4412-odroidu3             | arm          | exynos
hisi-x5hd2-dkb                  | arm          | hisi
hip04-d01                       | arm          | hisi
imx6q-wandboard                 | arm          | imx
imx6q-sabrelite                 | arm          | imx
imx6q-cm-fx6                    | arm          | imx
imx6dl-wandboard,wand-dual      | arm          | imx
imx6dl-wandboard,wand-solo      | arm          | imx
armada-370-mirabox              | arm          | mvebu
armada-xp-openblocks-ax3-4      | arm          | mvebu
omap5-uevm                      | arm          | omap
omap4-panda-es                  | arm          | omap
omap4-panda                     | arm          | omap
omap3-beagle-xm                 | arm          | omap
omap3-beagle                    | arm          | omap
omap3-n900                      | arm          | omap
omap3-overo-tobi                | arm          | omap
omap3-overo-storm-tobi          | arm          | omap
am437x-gp-evm                   | arm          | omap
am335x-boneblack                | arm          | omap
am335x-bone                     | arm          | omap
qcom-msm8974-sony-xperia-honami | arm          | qcom
qcom-apq8074-dragonboard        | arm          | qcom
qcom-apq8084-ifc6540            | arm          | qcom
qcom-apq8064-ifc6410            | arm          | qcom
qcom-apq8064-cm-qs600           | arm          | qcom
rk3288-evb-rk808                | arm          | rockchip
emev2-kzm9d                     | arm          | shmobile
sun9i-a80-optimus               | arm          | sunxi
sun9i-a80-cubieboard4           | arm          | sunxi
sun7i-a20-cubieboard2           | arm          | sunxi
sun7i-a20-bananapi              | arm          | sunxi
sun7i-a20-cubietruck            | arm          | sunxi
sun4i-a10-cubieboard            | arm          | sunxi
stih410-b2120                   | arm          | sti
tegra124-jetson-tk1             | arm          | tegra
tegra30-beaver                  | arm          | tegra
ste-snowball                    | arm          | u8500
versatilepb                     | arm          | versatile
vexpress-v2p-ca15-tc1           | arm          | vexpress
vexpress-v2p-ca15_a7            | arm          | vexpress
vexpress-v2p-ca9                | arm          | vexpress
zynq-zc702                      | arm          | zynq
zynq-parallella                 | arm          | zynq
apm-mustang                     | arm64        | none
juno                            | arm64        | none
qemu-aarch64,legacy             | arm64        | none
minnowboard-max                 | x86          | none
x86                             | x86          | none
x86-kvm                         | x86          | none
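Several of the platforms above are emulated rather than physical boards. As an illustration, a multi_v7_defconfig build can be boot tested locally on an emulated Versatile Express; this is a sketch only, and the zImage/dtb paths are assumptions based on the build artifacts described later in this document:

      # Boot the kernel under QEMU; it will run up to the point of
      # mounting a root filesystem (add -initrd or a disk for a full boot)
      qemu-system-arm -M vexpress-a9 -m 512 -nographic \
          -kernel zImage -dtb dtbs/vexpress-v2p-ca9.dtb \
          -append "console=ttyAMA0"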

Architecture Overview

The information below documents the components which are used in this project.

[Image: kernel-ci-overview.png]

Build System

The current build system consists of five bare-metal quad-core Xeon machines. Each machine is responsible for building a single kernel configuration and then publishing the build. SaltStack is used to configure the build machines, and our configuration can be found here. Typically, building and publishing a tree/branch for all of the architectures/configurations listed above takes approximately thirty minutes with this implementation.

Build Scripts

These are miscellaneous scripts for multi-arch, multi-defconfig kernel builds. The scripts are driven by the Jenkins jobs described below.

Examples:

  • To build an ARM kernel using multi_v7_defconfig, execute the following from the root of a kernel source tree.
    • export LANG=C

      export ARCH=arm

      /path/to/build-scripts/build.py -i -c multi_v7_defconfig

  • To build an ARM64 kernel using defconfig with a configuration fragment enabled, execute the following from the root of a kernel source tree.
    • export LANG=C

      export ARCH=arm64

      /path/to/build-scripts/build.py -i -c defconfig -c CONFIG_OF_UNITTEST=y

  • Once the build completes, the artifacts are stored in the _install_ directory.

Jenkins

The Jenkins jobs used in the build system are described below.

  • kernel build trigger job

    • This Jenkins multi-configuration job monitors the upstream trees listed above for changes. If changes are found, it invokes the kernel build job below, passing it the parameters listed under that job.

  • kernel build job

    • This Jenkins multi-configuration job does the actual work of building and publishing the kernel. The parameters required to invoke the build are described below; a sketch of triggering this job remotely appears after this list.

      • ARCH_LIST - A list of architectures to be built.
      • DEFCONFIG_LIST - A list of defconfigs to be built.
      • TREE - The url for the tree to clone.
      • BRANCH - The branch for the given tree to checkout.
      • COMMIT_ID - The commit id to checkout.
      • PUBLISH - A boolean flag to instruct the system to publish the build. If this is set to TRUE then the build(s) will be published to our storage server.

  • kernel build complete job

    • This Jenkins job runs after the kernel build job completes, and its purpose is to relay messages about the build to the dashboard.
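For reference, a parameterized Jenkins job such as the kernel build job above can be triggered remotely through Jenkins' standard buildWithParameters endpoint. A sketch, assuming a hypothetical server URL and job name; the parameter names are the ones listed above:

      # Depending on the server's CSRF settings, a crumb may also be required
      curl --user "$JENKINS_USER:$JENKINS_API_TOKEN" \
          "https://jenkins.example.org/job/kernel-build/buildWithParameters" \
          --data-urlencode "ARCH_LIST=arm arm64" \
          --data-urlencode "DEFCONFIG_LIST=multi_v7_defconfig defconfig" \
          --data-urlencode "TREE=git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git" \
          --data-urlencode "BRANCH=master" \
          --data-urlencode "COMMIT_ID=HEAD" \
          --data-urlencode "PUBLISH=true"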

Publishing System

The publishing system is simply a web server serving files and directories. To publish a file, the upload API is used for builds, boots, and tests alike. A nested directory structure is used to organize the builds; it is described below for reference.

Directory Structure:

  • Domain: http://storage.kernelci.org/

    • Kernel Tree: e.g. mainline, next, arm-soc, lsk, etc.

      • Git Describe: e.g. v3.19-rc4-23-g971780b70194, v3.19-rc4, etc.

        • Architecture-Defconfig+Fragment=y: e.g. arm-multi_v7_defconfig, arm64-defconfig+CONFIG_OF_UNITTEST=y

          • Kernel: e.g. zImage, Image, bzImage, vmlinux

          • Modules: A compressed tarball of modules from the specific build configuration. When extracted, the result should be a directory tree of the form /lib/modules/<kernel-version>/.

          • System.map: Symbol table from the specific build configuration.

          • Kernel Configuration: The configuration used for the specific build. Typically, this is named kernel.config.

          • Build Log: The log produced by the specific build configuration; it contains only errors and warnings. Typically, this is named build.log.

          • Build JSON: A JSON-encoded file describing the kernel build to the dashboard. A detailed schema is available here. Typically, this file is named build.json.

          • Device Trees: Typically, this is a folder called 'dtbs' which contains all of the device tree blobs for a specific build configuration.

          • Lab Name: A directory that corresponds to a lab-id as described here. It contains the platform logs in both .txt and .html format.
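Putting the layout together, published artifacts can be fetched directly over HTTP. A sketch, assuming a mainline build with git describe v3.19-rc4 exists and that the modules tarball is named modules.tar.xz (the exact filename may differ):

      BASE=http://storage.kernelci.org/mainline/v3.19-rc4/arm-multi_v7_defconfig
      wget "$BASE/zImage" "$BASE/kernel.config" "$BASE/modules.tar.xz"
      # Unpacking the modules tarball yields lib/modules/<kernel-version>/
      mkdir -p rootfs
      tar -C rootfs -xf modules.tar.xz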

Automation Frameworks

Currently, two automation systems are supported in this project.

  • LAVA

    • Web-driven test automation framework with a built-in scheduler.
  • pyboot

    • Simple and dumb command line tool for automated booting of boards via serial console.

Dashboard

The dashboard itself consists of two components: a front end and a back end.

  • front end

    • Stateless web application that users typically interact with.

      [Image: frontend.png]

  • back end

    • Provides the REST API and token-based authentication.

      [Image: backend.png]
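As an illustration of token-based access, a read-only query against the back end might look like the following; this is a sketch only, and the endpoint name, query parameters, and header format are assumptions rather than documented API:

      # Hypothetical query for recent mainline boot results
      curl -H "Authorization: $DASHBOARD_TOKEN" \
          "http://api.kernelci.org/boot?job=mainline&limit=5"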

How To's

The information below is provided to guide you through various use cases for the different components. A word to the wise: this project is still in development, so things will change. Be prepared.

LAVA Instances

Boot Testing with LAVA

The lava-ci project was created to assist in the creation, submission, and reporting of LAVA jobs. Each tool in the project is described below, with examples for reference.

Platform Mapping Table:

LAVA device types and the upstream platform naming conventions differ, so a mapping table is used to resolve the proper platform name. The LAVA device-type names are used below with the lava-ci tools. Use the table below for reference.

LAVA Device Type       | Upstream Platform Name
-----------------------+------------------------
armada-370-mirabox     | armada-370-mirabox
arndale                | exynos5250-arndale
arndale-octa           | exynos5420-arndale-octa
snow                   | exynos5250-snow
peach-pi               | exynos5800-peach-pi
odroid-xu3             | exynos5422-odroidxu3
odroid-u2              | exynos4412-odroidu3
odroid-u3              | exynos4412-odroidu3
odroid-x2              | exynos4412-odroidx2
beaglebone-black       | am335x-boneblack
beagle-xm              | omap3-beagle-xm
panda-es               | omap4-panda-es
panda                  | omap4-panda
cubieboard3            | sun7i-a20-cubietruck
optimus-a80            | sun9i-a80-optimus
cubieboard4            | sun9i-a80-cubieboard4
hi3716cv200            | hisi-x5hd2-dkb
d01                    | hip04-d01
imx6q-wandboard        | imx6q-wandboard
utilite-pro            | imx6q-cm-fx6
snowball               | ste-snowball
ifc6540                | qcom-apq8084-ifc6540
ifc6410                | qcom-apq8064-ifc6410
sama53d                | at91-sama5d3_xplained
jetson-tk1             | tegra124-jetson-tk1
parallella             | zynq-parallella
qemu-arm-cortex-a15    | vexpress-v2p-ca15-tc1
qemu-arm-cortex-a15-a7 | vexpress-v2p-ca15_a7
qemu-arm-cortex-a9     | vexpress-v2p-ca9
qemu-arm               | versatilepb
qemu-aarch64           | qemu-aarch64
mustang                | apm-mustang
juno                   | juno
minnowboard-max-E3825  | minnowboard-max
x86                    | x86
kvm                    | x86-kvm

lava-kernel-ci-job-creator.py:

This command line tool will create LAVA boot test jobs for various architectures and platforms.

lava-kernel-ci-job-creator.py [-h] --plans PLANS [PLANS ...] [--arch ARCH] [--targets TARGETS [TARGETS ...]] url

  • Examples:

    • Create all LAVA boot test jobs for a specific build.

      • python lava-kernel-ci-job-creator.py http://storage.kernelci.org/next/next-20150114/ --plans boot

    • Create only LAVA boot test jobs for a specific build and architecture.

      • python lava-kernel-ci-job-creator.py http://storage.kernelci.org/next/next-20150114/ --plans boot --arch arm

    • Create only LAVA boot test jobs for a specific build and targets.

      • python lava-kernel-ci-job-creator.py http://storage.kernelci.org/next/next-20150114/ --plans boot --targets mustang odroid-xu3

    The generated jobs can be found in the jobs directory.

lava-job-runner.py:

This command line tool will submit all LAVA jobs in the current working directory.

lava-job-runner.py [-h] [--stream STREAM] [--repo REPO] [--poll POLL] username token server

  • Examples:

    • Submit all LAVA jobs in the current working directory to a specific server and bundle stream.

      • python lava-job-runner.py <username> <lava token> http://my.lavaserver.com/RPC2/ --stream /anonymous/mybundle/

    • Submit and poll all LAVA jobs in the current working directory to a specific server and bundle stream, storing the results in a JSON-encoded file for later use with the dashboard reporting tool.

      • python lava-job-runner.py <username> <lava token> http://my.lavaserver.com/RPC2/ --stream /anonymous/mybundle/ --boot results/kernel-ci.json

    • Submit and poll all LAVA jobs in the current working directory to a specific server and bundle stream, then report the results to the dashboard once they have been obtained.

      • python lava-job-runner.py <username> <lava token> http://my.lavaserver.com/RPC2/ --stream /anonymous/mybundle/ --boot results/kernel-ci.json --lab <lab-id> --api http://api.kernelci.org --token <dashboard token>

lava-report.py:

This command line tool will report the results of LAVA jobs given a JSON results file.

lava-report.py [-h] [--boot BOOT] [--lab LAB] [--api API] [--token TOKEN] [--email EMAIL]

  • Examples:

    • Report all results from a given JSON result file.
      • python lava-report.py --boot results/kernel-ci.json --lab <lab-id> --api http://api.kernelci.org --token <dashboard token>

    The generated results can be found in the results directory.

Boot Test with pyboot

  • TODO

Using the Dashboard

This section documents various use cases for the dashboard, providing examples for reference.

WIP

This section deals with development items that are still a work in progress.

Dashboard

  • E-mail reporting
    • Boot reports
    • Build reports
  • File upload API
    • Currently scp/rsync are used to transfer log files; a better interface for this is desired.

LAVA

  • ARM KVM boot testing
    • Working concept, will need dashboard support.
  • In kernel test results
    • CONFIG_OF_UNITTEST
    • Locking Validation
  • Big endian boot testing
  • Keystone II support
  • Define basic test plan

pyboot

  • TODO

Future Plans

This section details future work items for the project.

Dashboard

  • Receiving, displaying, and reporting test results

LAVA

  • Automated bisection tool
  • Ramdisk testing
    • cyclictest-basic
    • lmbench
    • ltp
    • ltp-realtime
    • kselftest

pyboot

  • TODO

What could we ask the backend?

  • When did these warning messages start appearing?
