
kernelci-core's Introduction

KernelCI project logo

Welcome to KernelCI

The KernelCI project is dedicated to testing the upstream Linux kernel. Its mission statement is defined as follows:

To ensure the quality, stability and long-term maintenance of the Linux kernel by maintaining an open ecosystem around test automation practices and principles.

The main instance of KernelCI is available on kernelci.org.

There is also a separate instance used for KernelCI development, available on staging.kernelci.org; see Development workflow for all the details about it.

This repository provides core functions to monitor upstream Linux kernel branches, build many kernel variants, run tests, run bisections and schedule email reports.

It is also possible to set up an independent instance to build any arbitrary kernel and run any arbitrary tests.

You can find some more general information about the KernelCI project on the website.

User guide

KernelCI users will typically want to add their kernel branch to be monitored, connect their lab or send results from their own existing CI system. The pages below are a work-in-progress to cover all these topics:

Command line tools

All the steps of the KernelCI pipeline are implemented with portable command line tools. They are used in Jenkins pipeline jobs for kernelci.org, but can also be run by hand in a shell or integrated with any CI environment. The kernelci/build-base Docker image comes with all the dependencies needed.

The available command line tools are:

  • kci_build to get the kernel source code, create a config file, build kernels and push them to a storage server.

  • kci_test to generate and submit test definitions in an automated test lab.

  • kci_rootfs to build CPU-specific rootfs images for a given OS variant and push them to a storage server.

Other command line tools are being worked on to replace the current legacy implementation which is still tied to Jenkins or hard-coded in shell scripts:

  • kci_data (WIP) to submit KernelCI data to a database and retrieve it.

  • kci_bisect (WIP) to run KernelCI automated bisections.

  • kci_email (WIP) to generate an email report with test results.

The command line tools can make use of an optional settings file with user-specific options. These settings provide default values for any of the command line arguments, both as a convenience and to avoid passing secrets such as API tokens in the clear. The file uses a section for each command line tool and also for each component (i.e. each lab, backend...).

See the kernelci.conf.sample sample config file and the user settings file section for more details about how this works.
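To illustrate the idea, here is a minimal sketch of reading such an INI-style settings file with Python's configparser; the section and option names below are made up for the example, the real ones are in kernelci.conf.sample:

```python
import configparser

# Illustrative settings file content; the real section and option names
# are documented in kernelci.conf.sample.
SAMPLE = """\
[kci_build]
kdir: linux

[lab-example]
api_token: secret-token
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE)

# A command line tool would fall back to these values for any argument
# not given explicitly, keeping tokens out of the command line.
print(config.get('kci_build', 'kdir'))
print(config.get('lab-example', 'api_token'))
```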

YAML Configuration files

All the builds are configured in build-configs.yaml, with the list of branches to monitor and which kernel variants to build for each of them.

Then all the tests are configured in test-configs.yaml with the list of devices, test suites and which tests to run on which devices.

Details for the format of these files can be found on the documentation pages for build configurations and test configurations.
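As a rough illustration of the build-configs.yaml shape (the tree and branch values here are just examples, following the structure visible in the diffs further down this page):

```yaml
trees:
  next:
    url: "https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git"

build_configs:
  next_master:
    tree: next
    branch: 'master'
```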

Python package on PyPI

The kernelci package on PyPI contains all the modules from the kernelci directory as well as the kci_* command line tools. This provides the core functions of KernelCI, to parse YAML configuration and perform each step of the pipeline such as building kernels, running tests and sending results to a database.

Dockerfiles

Each step of the KernelCI Pipeline can be run in a Docker container. On kernelci.org, this is done in Jenkins jobs. The Docker images used by these containers are built from jenkins/dockerfiles and pushed to the kernelci Docker repositories.

Test templates

The majority of kernelci.org tests get run in LAVA, although this is not a requirement. Each LAVA test is generated using template files which can be found in the templates directory.

kernelci-core's People

Contributors

10ne1, a-wai, aistcv, alexandrasp, aliceinwire, broonie, danrue, dependabot[bot], embeddedandroid, evdenis, gctucker, hardboprobot, jenysadadia, khilman, laura-nao, mattface, mgrzeschik, montjoie, musamaanjum, nfraprado, nkbelin, nuclearcat, ojeda, patersonc, pawiecz, qschulz, roxell, sjoerdsimons, staging-kernelci-org, tomeuv


kernelci-core's Issues

kci_build: pass tokens other than via cmdline

When using kci_build from CI/automation environments (e.g. Jenkins/GitLab/k8s), it's useful to see the kci_build commands in the logs for debugging purposes. However, the need to pass the token on the command line makes it hard to ensure that it doesn't show up in the logs.

For example, Jenkins lets you configure secrets and hide them in the log. But if you're building containers with kci_build commands to run in Kubernetes, it's more difficult to hide the command line from the output logs (or container environment descriptions).

Fix IPv6 Docker support to access lab-pengutronix

Now that all Jenkins jobs are running within Docker, there appears to be an issue with routing IPv6 traffic to external servers. This is an issue in particular with lab-pengutronix which supports IPv6 only:
https://hekla.openlab.pengutronix.de/

The issue appears to be in the configuration of the builder itself rather than in the Docker container. Routing IPv6 from a Docker container also seems to be a non-trivial thing to configure, it probably needs someone to spend a couple of hours to get this set up correctly and add it to the Ansible builder config.

Please remove non-working builds from android-3.18

None of the builds below are supported. Please prevent them from being initiated.

allnoconfig ‐ arc
bigsur_defconfig ‐ mips
fpga_defconfig ‐ arc
fpga_noramfs_defconfig ‐ arc
nlm_xlr_defconfig ‐ mips
sb1250_swarm_defconfig ‐ mips
sead3micro_defconfig ‐ mips
tinyconfig ‐ arc

clang builds don't enable additional config options

It appears that clang builds for configurations which enable additional configuration options don't actually enable those options. For example:

https://storage.kernelci.org/next/master/next-20190807/arm64/defconfig+CONFIG_CPU_BIG_ENDIAN=y/clang-8/build.log

at no point enables CONFIG_CPU_BIG_ENDIAN resulting in a config:

https://storage.kernelci.org/next/master/next-20190807/arm64/defconfig+CONFIG_CPU_BIG_ENDIAN=y/clang-8/kernel.config

where CPU_BIG_ENDIAN is not set. In contrast the equivalent GCC build does have it set in the generated config. I'm not seeing any visible difference in the start of the build logs.

@mattface

Migrate kernelci-core to Python 3

The kernelci-core project is currently implemented mostly in Python 2.x. To migrate it to Python 3, the plan is to:

  • write command line tools and modules in the kernelci package in Python 2.7
  • make these new files easy to migrate to Python 3
  • update Jenkins jobs to use the new scripts
  • delete old scripts
  • migrate any remaining Python 2.x code that may still be found anywhere in this repo

Add branch testing from linux-pm.git at kernel.org

Each git kernel branch is monitored every hour by kernelci.org. Whenever a new
revision is detected, it will be built for a number of combinations of
architectures, defconfigs and compilers. Then a build report will be sent,
some tests will be run and test reports will also be sent.

Please provide the information described below in order to add a new branch to
kernelci.org:

  • Which Git branch do you want to add?

⇨ Git repo URL: git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git

⇨ Git branch name: testing

  • How much build coverage do you need for your branch?

Generally speaking, a good rule is to build fewer variants for branches that
are "further" away from mainline and closer to individual developers. This can
be fine-tuned with arbitrary filters, but essentially there are 3 main options:

  1. Build everything, including allmodconfig, which amounts to about 220 builds.
    This is what we do with linux-next.

  2. Skip a few things, such as allmodconfig as it takes very long to build and
    doesn't really boot, and also less useful architectures such as MIPS, which
    doesn't have many test platforms in KernelCI; this saves 80 builds. This is
    what we do with some subsystems such as linux-media.

  3. Build only the main defconfig for each architecture to save a lot of build
    power, get the fastest results and the highest boots/builds ratio. This is
    what we do with some maintainer branches such as linusw's GPIO branch.

⇨ Build coverage choice: 3

  • How often do you expect this branch to be updated?

If you push once a week or less, it's easier to allow for many build variants
as this reduces the build load on average. Conversely, if you push several
times every day then a small set of builds should be used.

It's also possible to increase the build capacity if needed but this comes with
a cost. Avoiding unnecessary builds is always a good way to reduce turnaround
time and not waste resources.

⇨ Estimated frequency: Once a day

  • Who should the email reports be sent to?

Standard email reports include builds and basic tests that are run on all
platforms. Please provide a list of email recipients for these. Typical
recipients are the regular KernelCI reports list, the kernel mailing lists
associated with the changes going into the branch, and the related maintainers.

⇨ Recipients: [email protected], [email protected]

New kernelci Python package on PyPI

The KernelCI core tools are currently mainly located in the kernelci/kernelci-core repository. However, other files such as YAML configuration, Docker images, LAVA job templates and Jenkins job files are also in that same repository. There is also the kernelci/kcidb repository with tools to communicate with the BigQuery database.

In order to make it possible for users to install the KernelCI tools from a single Python package and provide it as a pip3 package, we need to provide a setup.py script which will include only the main Python packages and command line tools. This will also explicitly list the dependencies.

The proposed solution is as follows:

Generic lab support

Automated test labs are a key part of KernelCI. The focus has been put mostly on LAVA until now, and it is still the preferred option when setting up new labs. However, there are many other types of labs with a lot of devices and test definitions written in a different format which could contribute to KernelCI.

One approach is to implement the kernelci.lab.LabAPI class to add support for a new type of lab. This works particularly well when the lab has a remote API with a job definition format.

For more ad-hoc labs, or if there is no publicly available API, a generic way of being notified of new tests to run and of sending test results would make it possible to integrate them as much as possible.

  • Generic subscription mechanism to get notified of new tests to run as a client
  • Generic way to send test results using kci_data
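The LabAPI approach described above could take roughly the following shape. This is a hypothetical sketch only: the class and method names below are illustrative and do not reflect the actual kernelci.lab interface.

```python
# Hypothetical sketch of adding a new lab type; method names are
# illustrative, not the real kernelci.lab.LabAPI interface.
class ExampleLabAPI:
    """Minimal stand-in showing the shape such a class could take."""

    def __init__(self, url, token):
        self.url = url
        self.token = token

    def generate(self, params):
        # Translate generic KernelCI test parameters into the lab's own
        # job definition format.
        return {'job': params['test'], 'device': params['device']}

    def submit(self, job):
        # Send the job definition to the lab's remote API (stubbed here).
        return 'submitted:{}'.format(job['job'])

lab = ExampleLabAPI('https://lab.example.com', 'token')
job = lab.generate({'test': 'baseline', 'device': 'qemu'})
print(lab.submit(job))
```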

Add queue branches from stable

Each git kernel branch is monitored every hour by kernelci.org. Whenever a new
revision is detected, it will be built for a number of combinations of
architectures, defconfigs and compilers. Then a build report will be sent,
some tests will be run and test reports will also be sent.

Please provide the information described below in order to add a new branch to
kernelci.org:

  • Which Git branch do you want to add?

⇨ Git repo URL: git://git.kernel.org/pub/scm/linux/kernel/git/stable/stable-queue.git

⇨ Git branch name: queue/*

  • How much build coverage do you need for your branch?

Generally speaking, a good rule is to build fewer variants for branches that
are "further" away from mainline and closer to individual developers. This can
be fine-tuned with arbitrary filters, but essentially there are 3 main options:

  1. Build everything, including allmodconfig, which amounts to about 220 builds.
    This is what we do with linux-next.

  2. Skip a few things, such as allmodconfig as it takes very long to build and
    doesn't really boot, and also less useful architectures such as MIPS, which
    doesn't have many test platforms in KernelCI; this saves 80 builds. This is
    what we do with some subsystems such as linux-media.

  3. Build only the main defconfig for each architecture to save a lot of build
    power, get the fastest results and the highest boots/builds ratio. This is
    what we do with some maintainer branches such as linusw's GPIO branch.

⇨ Build coverage choice: Same as what is currently done with stable-rc (option #1?)

  • How often do you expect this branch to be updated?

If you push once a week or less, it's easier to allow for many build variants
as this reduces the build load on average. Conversely, if you push several
times every day then a small set of builds should be used.

It's also possible to increase the build capacity if needed but this comes with
a cost. Avoiding unnecessary builds is always a good way to reduce turnaround
time and not waste resources.

⇨ Estimated frequency: Possibly a few times each day.

  • Who should the email reports be sent to?

Standard email reports include builds and basic tests that are run on all
platforms. Please provide a list of email recipients for these. Typical
recipients are the regular KernelCI reports list, the kernel mailing lists
associated with the changes going into the branch, and the related maintainers.

⇨ Recipients: Same as the recipient list of the stable-rc mails.

Add video4linux2 memory-to-memory basic test

From a maintainer point of view, it would be really interesting to be able to test some memory-to-memory video4linux2 drivers. Currently, simply making sure gstreamer is able to find the drivers and instantiate the proper filters from them would be a good start. From that, it would be really easy to add functional tests that could exercise the kernel in a decent way.

Motivation

The v4l2-compliance test currently does not fully cover memory-to-memory devices; more importantly, gstreamer is a major use case for memory-to-memory devices, and the plumbing between the kernel and gstreamer is far from trivial.

What's needed

Debian gstreamer packages are needed. The v4l2 plugins are in gst-plugins-good, and version 1.15.x is required. Currently, this means either unstable or experimental. E.g.:

apt-get -y -t experimental install libgstreamer1.0-0 gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-tools

Tests

A first test to check that gstreamer detects the drivers would go like this:

rm .cache/gstreamer-1.0/registry.x86_64.bin && gst-inspect-1.0 video4linux2

and then some grep magic on top.
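The "grep magic" could be sketched in Python along these lines. The sample output below is made up for illustration; in a real test it would be captured from the gst-inspect-1.0 command shown above (e.g. with subprocess.check_output):

```python
# Illustrative gst-inspect-1.0 output; a real test would capture this
# from the command itself rather than use a hard-coded sample.
SAMPLE_OUTPUT = """\
Plugin Details:
  Name                     video4linux2
  video4linux2:  v4l2src: Video (video4linux2) Source
  video4linux2:  v4l2sink: Video (video4linux2) Sink
  video4linux2:  v4l2videodec: V4L2 Video Decoder
"""

def has_elements(output, elements):
    """Return True if every expected element name appears in the output."""
    return all(any(name in line for line in output.splitlines())
               for name in elements)

print(has_elements(SAMPLE_OUTPUT, ['v4l2src', 'v4l2videodec']))
```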

With this in place, we can then add support to test decoders having some reference videos, decode them and compare the result.

Where

It would be nice if we could restrict this to arm/arm64 and only the specific set of boards that have these types of devices. I think we'd be running this during the -rc phase, mostly to catch regressions introduced during the merge window.

It would also be interesting to be able to track a few selected contributors/maintainers, so we can run this when patches are posted, as a condition for merging them.

LAVA job links

For debugging issues in labs, it's sometimes helpful to be able to trace things through to the specific LAVA job in the relevant lab (to confirm exactly which board something got run on, for example). Currently we don't record much about the jobs or make anything (e.g. a link) available in the UI to support this tracing. It would be good to add this: we get the LAVA job description in the callback, so we could record it easily enough and then let the UI pull things up.

@mattface @gctucker

unique ID for Reported-by tag

When reporting bisection bugs/regressions, we should add a unique ID to the Reported-by tag so that issues raised can be tracked.

e.g.

Reported-by: kernelCI bot [email protected] #

This would allow us to check/verify when reported fixes have made it upstream.
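One way to get a stable unique ID is to derive it from what was tested. This is only a sketch: the tag layout and the ID scheme below are illustrative, not an agreed KernelCI format, and the bot's email address is elided here as above.

```python
import hashlib

def reported_by_tag(tree, branch, commit, test_name):
    # Derive a short, stable unique ID from what was tested; the tag
    # layout and ID scheme here are illustrative, not a KernelCI format.
    uid = hashlib.sha1(
        '/'.join([tree, branch, commit, test_name]).encode()
    ).hexdigest()[:12]
    return 'Reported-by: "kernelci.org bot" <email elided> # {}'.format(uid)

tag = reported_by_tag('next', 'master', 'abc123def456', 'baseline.login')
print(tag)
```

Being deterministic, the same regression always yields the same ID, so a fix carrying the tag can be matched back to the original report.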

kernelci.org metrics

A lot of data is being produced by kernelci.org and many reports are being sent to the community. However, it's hard to tell at the moment how well the project is performing or the impact it has on the Linux kernel code quality. As such, some metrics should be put in place to be able to measure various aspects and produce some regular reports.

Mailing list discussion: https://groups.io/g/kernelci/topic/kernelci_org_metrics_call_for/71211464

Objectives:

  • Define the main goal of kernelci.org and derive from it what could be used to measure its success
    • Reduce number of bugs in mainline and stable kernel releases
  • List all the existing data that can be used as metrics
    • Number of Reported-by: kernelci.org bot <[email protected]>
    • Number of individual fixes in each kernel release cycle
    • Qualitative test coverage: variety of platforms and test suites run
    • Qualitative result: describe some particular cases where kernelci.org helped avoid a problem
  • Consider what else can be done to generate more useful metrics
    • Measure test coverage of the kernel code base

Use kernelci Python package in Docker images on staging

As a follow-up task from #400, to start using this on staging.kernelci.org:

  • #1320
  • update the kernelci/build-base Docker image to install the kernelci package from the git repo
  • update kernelci-deploy tools to rebuild the Docker images used on staging with a layer for the kernelci Python package

Build doesn't verify that setting config options worked

Nothing in the build code verifies that a .config that is generated from a config with additional options set actually sets those options, resulting in silent failures like those we see with clang and defconfig+CPU_BIG_ENDIAN. It would cost very little to grep the resulting config for these options and save a bunch of pain when things go wrong.

Request API tokens for @jluebbe

KernelCI uses kernelci-backend to manage its database which contains all the data about builds and tests. There are 2 main instances, a production one for kernelci.org and a test one for staging.kernelci.org. Separate tokens can be provided for either or both, with several permissions to choose from. KernelCI labs will also typically need a token to be able to push their test results.

Please answer the questions below to request some API tokens:

Contact details

⇨ User name: jluebbe

⇨ Email address: [email protected]

If this is for a lab token:

⇨ Lab owner first and last names: none (development only)

⇨ Lab name: none (development only)

Production

The production instance is the one behind https://kernelci.org. Production tokens are only provided for labs that are able to send useful data, or with read-only permissions to create dashboards or consume the results data in any way (stats, reports...). Uses of the kernelci.org production data should ideally be made public.

The URL of the production API server is https://api.kernelci.org.

Do you need a token to access the production API? If so, is this to be able to read the data or also send some test results from a lab?

⇨ Read-only or also to push results: read-only

Staging

The URL of the staging API server is https://staging-api.kernelci.org.

The staging instance is used for experimental features without impacting the production instance. This is useful for anything new that needs to be tested in a full KernelCI environment with results publicly available on https://staging.kernelci.org but not sent to regular mailing lists.

Do you need a token to access the staging API? If so, is this to be able to read the data or send some test results from a lab?

⇨ Read-only or also to push results: read and send test results

Common database PoC

As the Linux kernel testing landscape is very diverse, it is important for KernelCI to be modular and allow various test systems to send results in a collaborative way. The first goal is to have a common database where various systems can send changes to, with a "least common denominator" schema. This should be accompanied with a proof-of-concept report to be able to visualise and share the results.

Objectives:

  • Development database instance created with tool to send results - see kcidb
  • Results being sent from more than one source (staging.kernelci.org and Red Hat's CKI)
  • Initial schema established for kernel builds and test results
  • Basic email report sent on-demand to a limited audience (will be done as a follow-up)
  • Proof-of-concept web dashboard

kselftest

kselftest is one of the main test suites that comes with the Linux kernel itself. As such, it is an obvious one to cover on kernelci.org. Doing so poses a few challenges, as it needs changes to the build system to build the tests from the kernel tree for each revision being tested and then install them on the target platform to run them.

  • #339 basic plumbing to get kselftest builds
  • #487 initial runtime coverage in LAVA labs
  • #499 first pass at getting as many tests to build
  • #636 fine-tuning of the build environment to separate kselftest errors from main kernel build errors
  • #655 first pass at fixing runtime dependencies

Produce DT validation in a structured format

In order to be able to send the device tree validation results to the backend API, they need to be stored in a structured format and follow the expected schema. Essentially, this means a JSON file with a list of test case names and a status associated with each of them (pass, fail, skip).

It also needs to contain some meta-data such as the kernel revision (tree, branch, git commit) and the version of the validation tool.

As there currently aren't any tests of this kind being run in KernelCI, some changes to the schema may be needed. These changes are outside of the scope of this issue.
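A results file along the lines described above could look like this. The field names are illustrative of the schema described here, not the exact backend API schema:

```python
import json

# Sketch of the structured results file described above; the exact field
# names must follow the backend API schema, so these are illustrative.
results = {
    'revision': {
        'tree': 'mainline',
        'branch': 'master',
        'commit': 'abc123',
    },
    'tool_version': '2021.03',  # version of the DT validation tool
    'test_cases': [
        {'name': 'dt-validate.board-a', 'status': 'pass'},
        {'name': 'dt-validate.board-b', 'status': 'fail'},
        {'name': 'dt-validate.board-c', 'status': 'skip'},
    ],
}
print(json.dumps(results, indent=2))
```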

Checklist:

  • Can manually run the DT validation tool with the KernelCI Docker image for this purpose in a mainline kernel tree
  • Output log and commands to run the validation shared in a comment on this issue
  • Results are stored in a structured format with test cases and status values, either directly by the tool or converted from the log using shell or Python additional tools
  • If such tools are needed, add them to the Docker image
  • Results file shared in a comment on this issue
  • Parser script reviewed and merged

build.py: switch to dtbs_install

Since v4.19(ish), the kernel has supported make dtbs_install; we should use that (when available) instead of the current manual copy.

Add stable-next branch builds

Hello,

Could we please add a new branch for -stable testing? It's a subset of -next containing only stable-tagged commits.

Here's the diff:

diff --git a/build-configs.yaml b/build-configs.yaml
index b6f347b..b0359f9 100644
--- a/build-configs.yaml
+++ b/build-configs.yaml
@@ -100,6 +100,9 @@ trees:
   samsung:
     url: "https://git.kernel.org/pub/scm/linux/kernel/git/kgene/linux-samsung.git"
 
+  sasha:
+    url: "https://git.kernel.org/pub/scm/linux/kernel/git/sashal/linux-stable.git"
+
   soc:
     url: "https://git.kernel.org/pub/scm/linux/kernel/git/soc/soc.git"
 
@@ -794,6 +797,11 @@ build_configs:
     branch: 'linux-5.2.y'
     variants: *stable_variants
 
+  stable_next:
+    tree: sasha
+    branch: 'stable-next'
+    variants: *stable_variants
+
   tegra:
     tree: tegra
     branch: 'for-next'

Upgrade v4l2 rootfs to Buster

The current v4l2 rootfs is still based on Debian Stretch. It should now be aligned with the other Debian Buster rootfs.

Device tree validation

The introduction of the new device tree format has made it possible to have a validation tool. This is about creating some automated tests to run this tool as a kind of "static analysis". The results should be sent to the KernelCI backend API just like any other result, except it doesn't require any kernel build or particular hardware platform.

Here's a proposed list of steps to achieve this:

  • #352 Docker image with device tree validation tools installed
  • #397 Validation results produced in a structured format that can be submitted (if the tool doesn't already do that)
  • Manual run of the tool in Docker image and results submitted to staging.kernelci.org
  • Experimental integration within current Jenkins pipeline on staging.kernelci.org
  • Plan to enable this in production (might require minor changes in backend, frontend, Jenkins, YAML config...)

more specific compiler warning string grep

for kernelCI, I see reports like:

Warnings summary:

    111  1 warning generated.
    14   2 warnings generated.
    7    drivers/net/ethernet/intel/iavf/iavf_osdep.h:49:18: warning: 'format' attribute argument not supported: gnu_printf [-Wignored-attributes]
...
    4    3 warnings generated.
    3    4 warnings generated.
...
    2    drivers/char/hw_random/optee-rng.c:177:31: warning: suggest braces around initialization of subobject [-Wmissing-braces]
    2    5 warnings generated.
    2    41 warnings generated.

When grepping logs for compiler warnings, I find that grepping for warning:, specifically with the trailing colon, helps filter to just the warnings. This would remove the per-file warning counts from the logs, which I think would be a little clearer. Also, I'm not sure whether the "warnings generated" lines are polluting the total warning count at the top of the report? cc @khilman @broonie
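The proposed filter can be sketched in a few lines of Python; the log excerpt below is illustrative:

```python
from collections import Counter

# Sketch of the proposed filter: only count lines containing "warning:"
# (with the colon), which drops the "N warnings generated." totals.
LOG = """\
1 warning generated.
drivers/foo.c:10:2: warning: unused variable 'x' [-Wunused-variable]
2 warnings generated.
drivers/foo.c:10:2: warning: unused variable 'x' [-Wunused-variable]
drivers/bar.c:5:1: warning: no previous prototype for 'f' [-Wmissing-prototypes]
"""

counts = Counter(line for line in LOG.splitlines() if 'warning:' in line)
for line, n in counts.most_common():
    print('{:5d}  {}'.format(n, line))
```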

Add stable queue branches

These branches are automatically generated whenever we modify the stable queue. Getting these tested will allow us to stop abusing the -rc branches just to get bots to test them.

diff --git a/build-configs.yaml b/build-configs.yaml
index 63e8184..1517eac 100644
--- a/build-configs.yaml
+++ b/build-configs.yaml
@@ -733,6 +733,46 @@ build_configs:
     branch: 'linux-5.3.y'
     variants: *stable_variants
 
+  stable-queue_4.4:
+    tree: stable-rc
+    branch: 'queue/4.4'
+    variants: *stable_variants
+    reference:
+      tree: stable
+      branch: 'linux-4.4.y'
+
+  stable-queue_4.9:
+    tree: stable-rc
+    branch: 'queue/4.9'
+    variants: *stable_variants
+    reference:
+      tree: stable
+      branch: 'linux-4.9.y'
+
+  stable-queue_4.14:
+    tree: stable-rc
+    branch: 'queue/4.14'
+    variants: *stable_variants
+    reference:
+      tree: stable
+      branch: 'linux-4.14.y'
+
+  stable-queue_4.19:
+    tree: stable-rc
+    branch: 'queue/4.19'
+    variants: *stable_variants
+    reference:
+      tree: stable
+      branch: 'linux-4.19.y'
+
+  stable-queue_5.4:
+    tree: stable-rc
+    branch: 'queue/5.4'
+    variants: *stable_variants
+    reference:
+      tree: stable
+      branch: 'linux-5.4.y'
+
   stable-rc_3.18:
     tree: stable-rc
     branch: 'linux-3.18.y'

KUnit

KUnit provides unit tests from the Linux kernel source tree. As such, it is an obvious test suite to run on kernelci.org.

The challenge is that unit tests aren't typically run on a target platform different from the one where the kernel was built, since they are essentially platform-independent. They should however be tested with at least one 32-bit and one 64-bit architecture to provide complete coverage.

  • KUnit coverage with standard UML builds
  • KUnit coverage with x86_64
  • KUnit coverage with a 32-bit architecture
  • Rust for Linux coverage included in KUnit coverage

[clang] additional trees/branches

per our discussion on the mailing list, I think it would be worthwhile to expand testing coverage of defconfigs from -next to mainline, and stable for 4.19, 4.14, 4.9, and maybe 4.4 (we do support arm64 and x86_64 defconfigs back to 4.4).

fstests

Summary

(x)fstests is a test suite for major file systems supported by the Linux kernel such as ext4, xfs, btrfs, cifs, nfs, f2fs...

Currently, file system kernel fixes aren't being back-ported to LTS due to a lack of automated testing to verify that they don't introduce regressions. Kernel bugs need to be well identified in the first place before fixes can be made and then back-ported. Running fstests in KernelCI would help solve these issues.

Requirements

Some tests will require a minimum amount of physical disk storage (4 disks of 20G?)

Ideally, a variety of hardware storage media should be used to ensure all the file system features are exercised. Some storage media such as eMMC are not suitable for write-heavy operations, but read-only file system tests may be run on them. Running in virtual environments, such as in Kubernetes or with QEMU, should also be considered.

Steps

  • Create Debian rootfs image bullseye-fstests including the fstests built from source (rewrite of PR #230)
  • Add any extra subsystem or maintainer's branches to KernelCI related to file systems
  • Add a kernel config fragment to KernelCI if required to run the tests
  • Make a list of all the test cases that can be run and pick some initial ones
  • Get an initial set of tests running on platforms already available in KernelCI
  • Investigate how to run in Kubernetes with Cloud storage

Related topics

Notes

We may start by running quick tests with the following command:

./check -g quick

The tests will be executed on the block device configured as TEST_DEV.

For QEMU, initial tests may be run on x86 only.
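For reference, a minimal fstests configuration could look like the sketch below; the device paths are placeholders and this assumes xfstests' usual local.config variables (TEST_DEV, TEST_DIR, SCRATCH_DEV, SCRATCH_MNT):

```shell
# Illustrative xfstests local.config; device paths are placeholders.
export TEST_DEV=/dev/sdb1       # block device used by the quick tests
export TEST_DIR=/mnt/test       # mount point for TEST_DEV
export SCRATCH_DEV=/dev/sdc1    # scratch device for destructive tests
export SCRATCH_MNT=/mnt/scratch # mount point for SCRATCH_DEV
```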

Build a "bootable" allmodconfig

Can we add a new allmodconfig build where we build allmodconfig plus the architecture-specific defconfig?
The idea is that this should be a bootable configuration.

for x86
$ make allmodconfig KCONFIG_ALLCONFIG=$(pwd)/arch/x86/configs/x86_64_defconfig

for arm64
$ make ARCH=arm64 CROSS_COMPILE=... allmodconfig \
KCONFIG_ALLCONFIG=$(pwd)/arch/arm64/configs/defconfig

Mechanism to specify bootable/testable configs

When specifying which configurations are built for a given tree, it would be really good if there were a shorthand for saying that a tree should be built with configurations which are useful for runtime testing. For a lot of trees the boot/test coverage is more important than the build coverage, but right now you have to explicitly enumerate the configs on a per-tree basis for this scenario. This would cut down on the number of builds we need to do.

Functional test support

KernelCI has been focusing on boot testing, but it now needs to fully embrace functional testing. This means updating the kernelci.org frontend to show test results, adapting the MongoDB document schema if needed and improving email reports with test results.

The outcome of this work is to remove all the current limiting factors that prevent KernelCI from expanding its functional testing coverage.

Web frontend work

Frontend Test Results milestone

  • Show test counts rather than boot counts everywhere
  • Fix or add test views to be able to navigate from a job / branch / kernel to a list of test cases
  • Add detailed view of each test case regression
  • Remove all remaining references to boot tests

Database backend work

Backend Test Results milestone

  • Update the MongoDB document schema to enable all the necessary views on the frontend
  • Add MongoDB indexes as appropriate to get good performance for the frontend use-case
  • Rework test email reports to be able to include only test regressions, with links to full results

Python 2.7 to be deprecated next year

Today, after updating pip2, I got this message:

DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7

Just FYI

Occasional issues with git checkouts in kernel-build-trigger

The kernel-build-trigger job occasionally hits an issue where it fails to update a git checkout using the mirror, with errors as shown below:

Started by upstream project "kernel-tree-monitor" build number 4373
originally caused by:
 Started by user Guillaume Tucker
Obtained jenkins/build-trigger.jpl from git https://github.com/kernelci/kernelci-core.git
Resume disabled by user, switching to high-performance, low-durability mode.
Loading library [email protected]
Attempting to resolve kernelci.org from remote references...
 > git --version # timeout=10
 > git ls-remote -h https://github.com/kernelci/kernelci-core.git # timeout=10
Found match: refs/heads/kernelci.org revision 92edb956ae76469d2bc8dc83f5916b513331b7bd
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/kernelci/kernelci-core.git # timeout=10
Fetching without tags
Fetching upstream changes from https://github.com/kernelci/kernelci-core.git
 > git --version # timeout=10
 > git fetch --no-tags --progress https://github.com/kernelci/kernelci-core.git +refs/heads/*:refs/remotes/origin/*
Checking out Revision 92edb956ae76469d2bc8dc83f5916b513331b7bd (kernelci.org)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 92edb956ae76469d2bc8dc83f5916b513331b7bd
Commit message: "test-configs.yaml: upgrade Debian rootfs URLs"
 > git rev-list --no-walk 92edb956ae76469d2bc8dc83f5916b513331b7bd # timeout=10
[Pipeline] node
Running on builder01.kernelci.org - build-trigger in /home/buildslave/build-trigger/workspace/kernel-build-trigger@5
[Pipeline] {
[Pipeline] echo
    Config:    amlogic_integ
    Container: kernelci/build-base
[Pipeline] sh
+ docker pull kernelci/build-base
Using default tag: latest
latest: Pulling from kernelci/build-base
Digest: sha256:bbb368da3906ab32e2fe13cd04ec18373cd3378345db4229af65bebbb3215d7d
Status: Image is up to date for kernelci/build-base:latest
[Pipeline] sh
+ docker inspect -f . kernelci/build-base
.
[Pipeline] withDockerContainer
builder01.kernelci.org - build-trigger does not seem to be running inside a container
[Pipeline] {
[Pipeline] stage
[Pipeline] { (Init)
[Pipeline] timeout
Timeout set to expire in 15 min
$ docker run -t -d -u 999:1000 -w /home/buildslave/build-trigger/workspace/kernel-build-trigger@5 -v /home/buildslave/build-trigger/workspace/kernel-build-trigger@5:/home/buildslave/build-trigger/workspace/kernel-build-trigger@5:rw,z -v /home/buildslave/build-trigger/workspace/kernel-build-trigger@5@tmp:/home/buildslave/build-trigger/workspace/kernel-build-trigger@5@tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** kernelci/build-base cat
[Pipeline] {
[Pipeline] sh
+ rm -rf /home/buildslave/build-trigger/workspace/kernel-build-trigger@5/kernelci-core
[Pipeline] dir
Running in /home/buildslave/build-trigger/workspace/kernel-build-trigger@5/kernelci-core
[Pipeline] {
[Pipeline] git
Cloning the remote Git repository
$ docker top 09c175799ccbc0a61e68c0a9c33ae0e8825edfb2802a74d654bd89a98909a6f6 -eo pid,comm
Cloning repository https://github.com/kernelci/kernelci-core.git
 > git init /home/buildslave/build-trigger/workspace/kernel-build-trigger@5/kernelci-core # timeout=10
Fetching upstream changes from https://github.com/kernelci/kernelci-core.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/kernelci/kernelci-core.git +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/kernelci/kernelci-core.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/kernelci/kernelci-core.git # timeout=10
Fetching upstream changes from https://github.com/kernelci/kernelci-core.git
 > git fetch --tags --progress https://github.com/kernelci/kernelci-core.git +refs/heads/*:refs/remotes/origin/*
Checking out Revision 92edb956ae76469d2bc8dc83f5916b513331b7bd (refs/remotes/origin/kernelci.org)
Commit message: "test-configs.yaml: upgrade Debian rootfs URLs"
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] dir
Running in /home/buildslave/build-trigger/workspace/kernel-build-trigger@5/kernelci-core
[Pipeline] {
[Pipeline] sh
+ ./kci_build check_new_commit --config=amlogic_integ --storage=http://storage.kernelci.org
 > git rev-parse refs/remotes/origin/kernelci.org^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/kernelci.org^{commit} # timeout=10
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 92edb956ae76469d2bc8dc83f5916b513331b7bd
 > git branch -a -v --no-abbrev # timeout=10
 > git checkout -b kernelci.org 92edb956ae76469d2bc8dc83f5916b513331b7bd
[Pipeline] }
[Pipeline] // dir
[Pipeline] stage
[Pipeline] { (Tarball)
[Pipeline] dir
Running in /home/buildslave/build-trigger/workspace/kernel-build-trigger@5/kernelci-core
[Pipeline] {
[Pipeline] sh
+ ./kci_build update_mirror --config=amlogic_integ --mirror=/home/buildslave/build-trigger/workspace/kernel-build-trigger@5/linux.git
From https://git.kernel.org/pub/scm/linux/kernel/git/khilman/linux-amlogic
 + 2e4ff31d2748...f5bb70553290 for-next     -> amlogic/for-next  (forced update)
 + f24465f23522...395df5af4c78 integ        -> amlogic/integ  (forced update)
   49ed86f503be..74fc01f86f5c  v5.4/drivers -> amlogic/v5.4/drivers
   e9a12e14322d..4ccc7208c2ed  v5.4/dt64    -> amlogic/v5.4/dt64
 + f24465f23522...395df5af4c78 v5.4/integ   -> amlogic/v5.4/integ  (forced update)
[Pipeline] sh
+ ./kci_build update_repo --config=amlogic_integ --kdir=/home/buildslave/build-trigger/workspace/kernel-build-trigger@5/configs/amlogic_integ --mirror=/home/buildslave/build-trigger/workspace/kernel-build-trigger@5/linux.git
error: refs/remotes/amlogic/for-next does not point to a valid object!
error: refs/remotes/amlogic/integ does not point to a valid object!
error: refs/remotes/amlogic/v5.4/dt does not point to a valid object!
error: refs/remotes/amlogic/v5.4/dt64 does not point to a valid object!
error: refs/remotes/amlogic/v5.4/integ does not point to a valid object!
error: refs/remotes/amlogic/for-next does not point to a valid object!
error: refs/remotes/amlogic/integ does not point to a valid object!
error: refs/remotes/amlogic/v5.4/dt does not point to a valid object!
error: refs/remotes/amlogic/v5.4/dt64 does not point to a valid object!
error: refs/remotes/amlogic/v5.4/integ does not point to a valid object!
error: refs/remotes/amlogic/for-next does not point to a valid object!
error: refs/remotes/amlogic/integ does not point to a valid object!
error: refs/remotes/amlogic/v5.4/dt does not point to a valid object!
error: refs/remotes/amlogic/v5.4/dt64 does not point to a valid object!
error: refs/remotes/amlogic/v5.4/integ does not point to a valid object!
error: refs/remotes/amlogic/for-next does not point to a valid object!
error: refs/remotes/amlogic/integ does not point to a valid object!
error: refs/remotes/amlogic/v5.4/dt does not point to a valid object!
error: refs/remotes/amlogic/v5.4/dt64 does not point to a valid object!
error: refs/remotes/amlogic/v5.4/integ does not point to a valid object!
fatal: bad object HEAD
error: https://git.kernel.org/pub/scm/linux/kernel/git/khilman/linux-amlogic.git did not send all necessary objects

error: Could not fetch amlogic
error: refs/remotes/amlogic/for-next does not point to a valid object!
error: refs/remotes/amlogic/integ does not point to a valid object!
error: refs/remotes/amlogic/v5.4/dt does not point to a valid object!
error: refs/remotes/amlogic/v5.4/dt64 does not point to a valid object!
error: refs/remotes/amlogic/v5.4/integ does not point to a valid object!
error: refs/remotes/amlogic/for-next does not point to a valid object!
error: refs/remotes/amlogic/integ does not point to a valid object!
error: refs/remotes/amlogic/v5.4/dt does not point to a valid object!
error: refs/remotes/amlogic/v5.4/dt64 does not point to a valid object!
error: refs/remotes/amlogic/v5.4/integ does not point to a valid object!
error: refs/remotes/amlogic/for-next does not point to a valid object!
error: refs/remotes/amlogic/integ does not point to a valid object!
error: refs/remotes/amlogic/v5.4/dt does not point to a valid object!
error: refs/remotes/amlogic/v5.4/dt64 does not point to a valid object!
error: refs/remotes/amlogic/v5.4/integ does not point to a valid object!
error: refs/remotes/amlogic/for-next does not point to a valid object!
error: refs/remotes/amlogic/integ does not point to a valid object!
error: refs/remotes/amlogic/v5.4/dt does not point to a valid object!
error: refs/remotes/amlogic/v5.4/dt64 does not point to a valid object!
error: refs/remotes/amlogic/v5.4/integ does not point to a valid object!
error: refs/remotes/amlogic/for-next does not point to a valid object!
error: refs/remotes/amlogic/integ does not point to a valid object!
error: refs/remotes/amlogic/v5.4/dt does not point to a valid object!
error: refs/remotes/amlogic/v5.4/dt64 does not point to a valid object!
error: refs/remotes/amlogic/v5.4/integ does not point to a valid object!
fatal: bad object HEAD
error: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git did not send all necessary objects

Traceback (most recent call last):
  File "./kci_build", line 474, in <module>
    status = args.func(configs, args)
  File "./kci_build", line 234, in __call__
    kernelci.build.update_repo(conf, args.kdir, args.mirror)
  File "/home/buildslave/build-trigger/workspace/kernel-build-trigger@5/kernelci-core/kernelci/build.py", line 196, in update_repo
    _fetch_tags(path)
  File "/home/buildslave/build-trigger/workspace/kernel-build-trigger@5/kernelci-core/kernelci/build.py", line 162, in _fetch_tags
    """.format(path=path, url=url))
  File "/home/buildslave/build-trigger/workspace/kernel-build-trigger@5/kernelci-core/kernelci/__init__.py", line 25, in shell_cmd
    return subprocess.check_output(cmd, shell=True)
  File "/usr/lib/python2.7/subprocess.py", line 223, in check_output
    raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command '
cd /home/buildslave/build-trigger/workspace/kernel-build-trigger@5/configs/amlogic_integ
git fetch --tags git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
' returned non-zero exit status 1
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
$ docker stop --time=1 09c175799ccbc0a61e68c0a9c33ae0e8825edfb2802a74d654bd89a98909a6f6
[Pipeline] // withDockerContainer
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE

kci_build doesn't log commands to merge config fragments

A variant on #137, but one surprising thing in the output of kci_build is that it doesn't show the commands where frag.config is merged into the base config. For example, the log for https://storage.kernelci.org/next/master/next-20190808/arm64/defconfig+CONFIG_CPU_BIG_ENDIAN=y/clang-8/build.log just has the make command to generate defconfig and then the make command to start the actual build. This isn't as clear as it could be for someone trying to track down an issue.

#
# make KBUILD_BUILD_USER=KernelCI -C/home/buildslave/workspace/workspace/kernel-build@8/linux ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- HOSTCC=clang CC="ccache clang" O=build defconfig
#
make: Entering directory '/home/buildslave/workspace/workspace/kernel-build@8/linux'

...

*** Default configuration is based on 'defconfig'
#
# configuration written to .config
#
make[1]: Leaving directory '/home/buildslave/workspace/workspace/kernel-build@8/linux/build'
make: Leaving directory '/home/buildslave/workspace/workspace/kernel-build@8/linux'
#
# make KBUILD_BUILD_USER=KernelCI -C/home/buildslave/workspace/workspace/kernel-build@8/linux -j4 ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- HOSTCC=clang CC="ccache clang" O=build Image
#
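A minimal sketch of how the merge step could be echoed into the log. Note that shell_cmd_logged is a hypothetical helper; the real shell_cmd() in kernelci/__init__.py runs commands without echoing them:

```python
import subprocess

def shell_cmd_logged(cmd):
    """Run a shell command, echoing it first so it shows up in build.log.

    Hypothetical variant of kernelci's shell_cmd() helper, which runs
    the command silently.
    """
    print("#\n# {}\n#".format(cmd.strip()))
    return subprocess.check_output(cmd, shell=True)

# The fragment merge step would then appear in the log, e.g.:
# shell_cmd_logged(
#     "scripts/kconfig/merge_config.sh -O build build/.config frag.config")
```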

[clang] enable lld

Per a discussion today with Arnd, it would be good to make the clang builds use LD=ld.lld as their linker. The kernel really stresses LLD's linker script support, so it would be good to get it under coverage soon. This should work with arm64.

Mail reports don't show clang boot problems

Currently (at least for -next) we are experiencing a number of boot problems with clang, both clang-specific (arm64 CONFIG_CPU_BIG_ENDIAN) and generic ones that affect both GCC and clang. However, none of the clang failures seem to be showing up in the reports emailed to the list.

@mattface

build.py: use CCACHE_DIR from environment if present

Currently build.py sets CCACHE_DIR inside the Linux source dir (aka kdir). When building in a cloud VM environment, where persistent volumes may live elsewhere, it should be possible to set CCACHE_DIR via the environment instead of relying on the build.py default.
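A sketch of the proposed behaviour (ccache_dir is a hypothetical helper, not a function that exists in build.py): honour CCACHE_DIR from the environment when set, otherwise fall back to a .ccache directory inside kdir.

```python
import os

def ccache_dir(kdir):
    """Pick the ccache directory for a kernel build.

    Use CCACHE_DIR from the environment when set, otherwise fall back
    to a .ccache directory inside the kernel source tree (kdir), which
    mirrors the current build.py default.
    """
    return os.environ.get('CCACHE_DIR') or os.path.join(kdir, '.ccache')
```

With this, a cloud VM can simply export CCACHE_DIR pointing at its persistent volume before invoking the build.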

Clang support

The mainline Linux kernel can now be built with both GCC and LLVM/Clang, at least for some architectures such as arm64. The initial aim for KernelCI is to support multiple compilers in general, and to ensure that Clang kernel builds work as expected.

Objectives:

  • build-configs.yaml and kci_build support for multiple compilers to produce kernel builds
  • backend database schema updated to support multiple compilers
  • email reports updated to show the compiler used in builds and test results
  • frontend web dashboard updated to show the compiler used in builds and test results
  • enable lld #104
  • fix extra config options support (fragments...) #136
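For illustration, a build-configs.yaml entry with per-compiler variants might look roughly like this. The tree name, compiler versions and architecture lists here are made up; the actual schema is defined by build-configs.yaml itself:

```yaml
# Illustrative only -- check build-configs.yaml for the real schema.
build_configs:
  example-tree:
    tree: example-tree
    branch: 'master'
    variants:
      gcc-8:
        build_environment: gcc-8
        architectures: [x86_64, arm, arm64]
      clang-8:
        build_environment: clang-8
        architectures: [arm64]
```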

Add branch android-3.18-preview from Lee's Korg repo

Each git kernel branch is monitored every hour by kernelci.org. Whenever a new
revision is detected, it will be built for a number of combinations of
architectures, defconfigs and compilers. Then a build report will be sent,
some tests will be run and test reports will also be sent.

Please provide the information described below in order to add a new branch to
kernelci.org:

  • How much build coverage do you need for your branch?

Generally speaking, a good rule is to build fewer variants for branches that
are "further" away from mainline and closer to individual developers. This can
be fine-tuned with arbitrary filters, but essentially there are 3 main options:

  1. Build everything, including allmodconfig, which amounts to about 220 builds.
    This is what we do with linux-next.

  2. Skip a few things, such as allmodconfig as it takes very long to build and
    doesn't really boot, and architectures that are less useful, such as MIPS
    which doesn't have many test platforms in KernelCI; this saves about 80
    builds. This is what we do with some subsystems such as linux-media.

  3. Build only the main defconfig for each architecture to save a lot of build
    power, get the fastest results and the highest boots/builds ratio. This is
    what we do with some maintainer branches such as linusw's GPIO branch.

⇨ Choice: 1 (allmodconfig, but only x86_64, arm and arm64 are needed)

  • How often do you expect this branch to be updated?

If you push once a week or less, it's easier to allow for many build variants
as this reduces the build load on average. Conversely, if you push several
times every day then a small set of builds should be used.

It's also possible to increase the build capacity if needed but this comes with
a cost. Avoiding unnecessary builds is always a good way to reduce turnaround
time and not waste resources.

⇨ Estimated frequency: ~4 times a week

  • Who should the email reports be sent to?

Standard email reports include builds and basic tests that are run on all
platforms. Please provide a list of email recipients for these. Typical
recipients are the regular KernelCI reports list, the kernel mailing lists
associated with the changes going into the branch, and the related maintainers.

⇨ Recipients: lee jones linaro org
