Autotest - Fully automated tests on Linux

Home Page: http://autotest.github.io

License: Other


autotest's Introduction

Autotest: Fully automated tests under the Linux platform

Autotest is a framework for fully automated testing. It is designed primarily to test the Linux kernel, though it is useful for many other functions such as qualifying new hardware. It's an open-source project under the GPL and is used and developed by a number of organizations, including Google, IBM, Red Hat, and many others.

Autotest is composed of a number of modules that help you run standalone tests or set up a fully automated test grid, depending on what you are up to. A non-exhaustive list of modules:

  • Autotest client: The engine that executes the tests (dir client). Each autotest test is a directory inside client/tests and is represented by a Python class that implements a minimum number of methods. The client is what you need if you are a single developer trying out autotest and executing some tests. The autotest client executes ''client side control files'', which are regular Python programs that leverage the client API (see the minimal example after this list).
  • Autotest server: A program that copies the client to remote machines and controls their execution. The autotest server executes ''server side control files'', which are also regular Python programs, but leverage a higher level API, since the autotest server can control test execution on multiple machines. If you want to perform slightly more complex tests involving more than one machine, you might want the autotest server.
  • Autotest database: For test grids, we need a way to store test results, and that is the purpose of the database component. This DB is used by the autotest scheduler and the frontends to store and visualize test results.
  • Autotest scheduler: For test grids, we need a utility that can schedule and trigger job execution on test machines; the autotest scheduler is that utility.
  • Autotest web frontend: For test grids, a web app, whose backend is written in Django (http://www.djangoproject.com/) and UI written in GWT (http://code.google.com/webtoolkit/), lets users trigger jobs and visualize test results.
  • Autotest command line interface: Alternatively, users can also use the autotest CLI, written in Python.
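
A client side control file can be as small as a single call into the client API. The sketch below is only an illustration: sleeptest is one of the sample tests shipped in client/tests, and the seconds parameter is assumed from its usual signature.

    # Minimal client side control file: a plain Python program executed by
    # the autotest client through the implicit 'job' object it provides.
    job.run_test('sleeptest', seconds=1)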

Getting started with autotest client

For the impatient:

http://autotest.readthedocs.org/en/latest/main/local/ClientQuickStart.html

Installing the autotest server

For the impatient using Red Hat:

http://autotest.readthedocs.org/en/latest/main/sysadmin/AutotestServerInstallRedHat.html

For the impatient using Ubuntu/Debian:

http://autotest.readthedocs.org/en/latest/main/sysadmin/AutotestServerInstall.html

You are advised to read the documentation carefully, especially the details regarding the Django versions autotest is compatible with.

Main project page

http://autotest.github.com/

Documentation

Autotest comes with in-tree documentation that can be built with Sphinx. A publicly available build of the latest master branch documentation and releases can be seen on Read the Docs:

http://autotest.readthedocs.org/en/latest/index.html

It is possible to consult the docs of released versions, such as:

http://autotest.readthedocs.org/en/0.16.0/

If you want to build the documentation, here are the instructions:

  1. Make sure you have the package python-sphinx installed. For Fedora:

    $ sudo yum install python-sphinx

     For Ubuntu/Debian:

    $ sudo apt-get install python-sphinx
  2. Optionally, you can install the Read the Docs theme, which will make your in-tree documentation look just like the online version:

    $ sudo pip install sphinx_rtd_theme
  3. Build the docs:

    $ make -C documentation html
  4. Once done, point your browser to:

    $ [your-browser] docs/build/html/index.html

Mailing list and IRC info

http://autotest.readthedocs.org/en/latest/main/general/ContactInfo.html

Grabbing the latest source

https://github.com/autotest/autotest

Hacking and submitting patches

http://autotest.readthedocs.org/en/latest/main/developer/SubmissionChecklist.html

Downloading stable versions

https://github.com/autotest/autotest/releases

Next Generation Testing Framework

Please check Avocado, a next generation test automation framework being developed by several of the original Autotest team members:

http://avocado-framework.github.io/

autotest's People

Contributors

amoskong, antonblanchard, awhitcroft, cevich, chuanchangjia, clebergnu, crangeratgoogle, dalecurtis, dzickusrh, ehabkost, gpshead, jadmanski, jasowang, jmeurin, kraxel, krcmarik, lanewolf, ldoktor, lmr, manul7, pevogam, phrdina, pradeepkumars, qiumike, ruda, rvkubiak, tang-chen, vi-patel, ypu, zhouqt


autotest's Issues

monotonic_time compile problem

When running make in client/tests/monotonic_time/src on a Fedora 16 box, I get a problem with some inline assembly:

cc -g -O -std=gnu99 -Wall -c -o time_test.o time_test.c
spinlock.h: Assembler messages:
spinlock.h:17: Error: incorrect register `%rax' used with `l' suffix
spinlock.h:25: Error: incorrect register `%rax' used with `l' suffix
...repeat...

I tried sticking some #ifdefs around it to remove the l suffix when _LP64 is defined. That lets it compile, but running the resulting binary produces Illegal Instruction. I also tried redefining spinlock_t as uint32_t with the same results.

I downloaded the original time-warp-test.c by Ingo Molnar and it complains about the same thing.

I'm told this test compiles/runs and is used extensively on RHEL 5, 6, and Fedora 15. The F16 KVM guest has:

[root@localhost src]# uname -a
Linux localhost.localdomain 3.1.0-7.fc16.x86_64 #1 SMP Tue Nov 1 21:10:48 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost src]# gcc --version
gcc (GCC) 4.6.2 20111027 (Red Hat 4.6.2-1)
...

I'm not sure what other info would be helpful.

KVM autotest: Make it possible to use several different KVM userspace in a single test job/test

There are certain test use cases that require us to use more than one qemu binary, for example Windows activation tests. Hence, we need to allow usage of more than one qemu binary during a single test or job.

My initial implementation idea:

  1. Turn qemu into a configurable option, very much like images, cdroms, nics, etc., such as:
qemu_binary_upstream = '/path/1'
qemu_binary_rhel6 = '/path/2'
...
qemu_binary_n = '/path/n'

So people who need to test multiple qemus can set them and have them all tracked by autotest.

  2. Add support in the build test for building multiple userspaces.

  3. Make qemu_binary a VM param. This way, if we need to change the userspace a given VM uses, we just update the params and start the VM again. Noticing different params, the VM will be restarted using the alternate userspace (as sketched below).
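
As a rough sketch of item 3 (an assumption about how the lookup could work, not the actual implementation), each VM could resolve its binary from the params, preferring a per-VM key over the global default:

# Hypothetical helper: pick the qemu binary for a given VM from the test
# params, preferring a per-VM key (e.g. qemu_binary_vm2) over the global one.
def get_qemu_binary(params, vm_name, default='/usr/bin/qemu-kvm'):
    return params.get('qemu_binary_%s' % vm_name,
                      params.get('qemu_binary', default))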

Allow multiple site extensions to be enabled in autotest

It'd be an interesting feature to have site extensions behave like plugins. Say on the autotest CLI we want to add one extension that enables virt testing features and another that enables performance testing features.

This task is a placeholder for the work we might be doing on that.

Consider moving from Paramiko to its fork, ssh

Currently we have an alternate implementation of SSH-based hosts that is not selected by default. That implementation is called ParamikoHost, and it uses the Paramiko Python library.

People at the fabric project (https://github.com/fabric/fabric) have forked the library, under the perception that the upstream maintainers are not responsive enough, see:

fabric/fabric#275

So a fork was created, and called ssh. Here's the latest package:

http://pypi.python.org/pypi/ssh/1.7.9

I still could not find the actual git repo for it, but in any case, this all made me wonder whether we should jump off Paramiko and get on the ssh bandwagon.

Capture guest syslogs in test results

As part of virtualization testing, it would be really handy to have an easy way to capture the guest logs. Previously something similar was done here:
http://patchwork.test.kernel.org/patch/3504/
http://patchwork.test.kernel.org/patch/3505/
http://patchwork.test.kernel.org/patch/3506/

However, this could be handled much more easily (IMHO) by using the remote syslog protocol, which is widely supported across distributions. For example, during a Fedora kickstart-based install, you can specify the kickstart option:

logging --host x.y.z --port 12345 --level DEBUG

Ideally there would be a per-guest listener set up on unique ports of the test server, to capture and/or monitor the incoming logs just for that guest. The protocol is UDP, so if the listener dies or doesn't exist, it won't affect operation of the guest. It would also provide a convenient channel for logging system-level test info (e.g. via the logger command, talking to syslogd or rsyslog on the guest).

Ref: syslog protocol - http://tools.ietf.org/html/rfc5424
example listener in python - http://rxlogd.cvs.sourceforge.net/viewvc/rxlogd/rxlogd/rxlogd?revision=1.8&view=markup
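
For illustration only (not existing autotest code), a minimal per-guest listener could look roughly like this: one UDP socket per guest port, appending every datagram (a raw RFC 5424 message) to a file in the results directory.

import socket

def listen_guest_syslog(port, logfile):
    # One listener per guest: bind a UDP socket on the test server and
    # append each incoming syslog datagram to that guest's log file.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(('0.0.0.0', port))
    with open(logfile, 'ab') as out:
        while True:
            data, _ = sock.recvfrom(8192)
            out.write(data + b'\n')
            out.flush()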

Test new patch: update mac pool after receiving ACK packet

Commit d9bab5b caused two regression bugs, so Lucas reverted it.
I've re-written this patch; it works with old and the latest tcpdump (installation/boot/shutdown/migration/...).
The new patch already exists in my tree; more tests are needed because it affects all the tests.

Amos

pruning autotest results

On 11/30/2011 08:52 AM, Kamil Paral wrote:

> Hello Lucas,
>
> I have a question for you about autotest. We are using autotest for scheduling AutoQA tests. At our production server the current test ID is about 240,000. We don't store all the test results on disk, but all the metadata is available in the MySQL database. And it is starting to slow down noticeably. The web UI is not as responsive as it used to be. Sometimes httpd processes take a lot of memory. "atest job list" became completely unusable; it ends with an error:

The amount of jobs you guys are running is pushing the limits of the framework, that's good :)

> Operation get_jobs_summary failed:
> OperationalError: (1153, "Got a packet bigger than 'max_allowed_packet' bytes")
>
> Is it possible to prune the database somehow and get rid of the old test metadata (let's say everything older than two months)? Are there docs somewhere regarding this task?

We never had to deal with such a massive amount of jobs. For comparison, for KVM testing we are at 2,200. So we will have to create a bug on autotest and work on an admin script to purge data from specific jobs. Sorry I don't have anything ready to help; would you open a ticket on the github database?

Cheers,

Lucas

Display timezones in timestamps

Autotest displays timestamps in lots of places (especially in the web UI). The timestamps are missing timezone information. If you access a server running somewhere else in the world and need to look at some issue, it becomes very hard to find out when the job was started/finished/etc.

Please display all timestamps with timezone information. Either force all timestamps to be in UTC (like 2011-12-12 01:20:37 UTC), or display them in local time but with the proper UTC offset (like 2011-12-12 01:20:37 UTC+5). It will help greatly when debugging issues.
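
For reference, the UTC form is a one-liner; the snippet below is illustrative only, not autotest code:

import datetime

# Render the current time as an explicit UTC string, e.g.
# "2011-12-12 01:20:37 UTC".
stamp = datetime.datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S') + ' UTC'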

Thanks.

Virt: enable auto setup of PXE server

Currently the PXE test depends on a previously configured PXE server. This is suboptimal, as it limits where the test can be run. For example, when using libvirt's virbr0 bridge, there's no way to test PXE out of the box.

Evaluate and implement a simple way to set up and start a TFTP server during the test setup.

API: tar_package() exclude_string is confusing

File client/common_lib/base_packages.py:

def tar_package(self, pkg_name, src_dir, dest_dir, exclude_string=None):
    '''
    Create a tar.bz2 file with the name 'pkg_name' say test-blah.tar.bz2.
    Excludes the directories specified in exclude_string while tarring
    the source. Returns the tarball path.
    '''

    ...

    cmd = "tar -cvjf %s -C %s %s " % (temp_path, src_dir, exclude_string)

It seems very probable that "exclude_string" is not really an exclude string, but rather an include string. The list of arguments to tar specifies the files/directories to include in the archive, not to exclude from it.

Either the 'cmd' variable is wrong, or "exclude_string" should be renamed to something like "included_files".
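
If the intent really is to exclude paths, a hypothetical fix (exclude_paths being an assumed list form of exclude_string, not current code) could build tar --exclude options and archive the whole source directory explicitly:

# Hypothetical sketch: express the exclusions as tar --exclude options and
# include the whole source directory ('.') in the archive.
exclude_opts = ' '.join('--exclude=%s' % p for p in exclude_paths)
cmd = "tar -cvjf %s -C %s %s ." % (temp_path, src_dir, exclude_opts)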

Unhandled MonitorSocketError: Could not receive data from monitor /SLES11SP1

After step 1 of the install, the socket was broken and autoyast can't perform the configuration part.

Unhandled MonitorSocketError: Could not receive data from monitor ([Errno 104] Connection reset by peer) [context: waiting for installation to finish]

source : autotest-autotest-0.13.1-194-g2b0da9d.tar.gz
HOST : SUSE Linux Enterprise Server 11 SP1
guest iso :SLES-11-SP1-DVD-x86_64-GM-DVD1.iso

Virt: Turn timeout values into variable definitions

Currently we have lots of timeouts set, for example in the kvm_monitor code, with plain values:

def _acquire_lock(self, timeout=20):
...

def _data_available(self, timeout=0):
...

Turn those into global (module scope) variable definitions for better documentation and easier tracking, for example:

MONITOR_LOCK_TIMEOUT = 20
MONITOR_DATA_TIMEOUT = 0

def _acquire_lock(self, timeout=MONITOR_LOCK_TIMEOUT):
...

def _data_available(self, timeout=MONITOR_DATA_TIMEOUT):
...

Problem when parsing multi host tests

Hi,

When running the netperf server test, the result directory is not fully scanned by the parser. The client test results, which contain the most interesting part - the benchmark results - are not scanned.
It seems that this is because the host directories (in my case rsws19 and rsws20) are not at the same directory level as the '.machines' file, but at the client test level (netperf_mlx).

Is it a problem in the test - or is the parser broken?

Here is a dump of my result directory structure:

|-- control.srv
|-- debug
|   |-- autoserv.DEBUG
|   |-- autoserv.ERROR
|   |-- autoserv.INFO
|   `-- autoserv.WARNING
|-- keyval
|-- .machines
|-- netperf_mlx
|   |-- debug
|   |   |-- netperf_mlx.DEBUG
|   |   |-- netperf_mlx.ERROR
|   |   |-- netperf_mlx.INFO
|   |   `-- netperf_mlx.WARNING
|   |-- keyval
|   |-- profiling
|   |-- results
|   |-- rsws19
|   |   |-- control
|   |   |-- debug
|   |   |   |-- client.0.DEBUG
|   |   |   |-- client.0.ERROR
|   |   |   |-- client.0.INFO
|   |   |   `-- client.0.WARNING
|   |   |-- job_report.html
|   |   |-- netperf_mlx.TCP_STREAM
|   |   |   |-- debug
|   |   |   |   |-- netperf_mlx.TCP_STREAM.DEBUG
|   |   |   |   |-- netperf_mlx.TCP_STREAM.ERROR
|   |   |   |   |-- netperf_mlx.TCP_STREAM.INFO
|   |   |   |   `-- netperf_mlx.TCP_STREAM.WARNING
|   |   |   |-- keyval
|   |   |   |-- profiling
|   |   |   |-- results
|   |   |   |   `-- keyval
|   |   |   |-- status
|   |   |   `-- sysinfo
|   |   |       |-- added_packages
|   |   |       |-- df
|   |   |       |-- dmesg.gz
|   |   |       |-- iteration.1
|   |   |       |   |-- interrupts.after
|   |   |       |   |-- interrupts.before
|   |   |       |   |-- meminfo.after
|   |   |       |   |-- meminfo.before
|   |   |       |   |-- schedstat.after
|   |   |       |   |-- schedstat.before
|   |   |       |   |-- slabinfo.after
|   |   |       |   `-- slabinfo.before
|   |   |       |-- iteration.2
|   |   |       |   |-- interrupts.after
|   |   |       |   |-- interrupts.before
|   |   |       |   |-- meminfo.after
|   |   |       |   |-- meminfo.before
|   |   |       |   |-- schedstat.after
|   |   |       |   |-- schedstat.before
|   |   |       |   |-- slabinfo.after
|   |   |       |   `-- slabinfo.before
|   |   |       |-- iteration.3
|   |   |       |   |-- interrupts.after
|   |   |       |   |-- interrupts.before
|   |   |       |   |-- meminfo.after
|   |   |       |   |-- meminfo.before
|   |   |       |   |-- schedstat.after
|   |   |       |   |-- schedstat.before
|   |   |       |   |-- slabinfo.after
|   |   |       |   `-- slabinfo.before
|   |   |       |-- iteration.4
|   |   |       |   |-- interrupts.after
|   |   |       |   |-- interrupts.before
|   |   |       |   |-- meminfo.after
|   |   |       |   |-- meminfo.before
|   |   |       |   |-- schedstat.after
|   |   |       |   |-- schedstat.before
|   |   |       |   |-- slabinfo.after
|   |   |       |   `-- slabinfo.before
|   |   |       |-- iteration.5
|   |   |       |   |-- interrupts.after
|   |   |       |   |-- interrupts.before
|   |   |       |   |-- meminfo.after
|   |   |       |   |-- meminfo.before
|   |   |       |   |-- schedstat.after
|   |   |       |   |-- schedstat.before
|   |   |       |   |-- slabinfo.after
|   |   |       |   `-- slabinfo.before
|   |   |       |-- messages.gz
|   |   |       |-- reboot_current -> ../../sysinfo
|   |   |       `-- removed_packages
|   |   |-- status
|   |   |-- status.log
|   |   `-- sysinfo
|   |       |-- cmdline
|   |       |-- cpuinfo
|   |       |-- df
|   |       |-- dmesg.gz
|   |       |-- gcc_--version
|   |       |-- hostname
|   |       |-- installed_packages
|   |       |-- interrupts
|   |       |-- ld_--version
|   |       |-- lspci_-vvn
|   |       |-- meminfo
|   |       |-- modules
|   |       |-- mount
|   |       |-- partitions
|   |       |-- proc_mounts
|   |       |-- slabinfo
|   |       |-- uname
|   |       |-- uptime
|   |       `-- version
|   |-- rsws20
|   |   |-- control
|   |   |-- debug
|   |   |   |-- client.0.DEBUG
|   |   |   |-- client.0.ERROR
|   |   |   |-- client.0.INFO
|   |   |   `-- client.0.WARNING
|   |   |-- job_report.html
|   |   |-- netperf_mlx.TCP_STREAM
|   |   |   |-- debug
|   |   |   |   |-- netperf_mlx.TCP_STREAM.DEBUG
|   |   |   |   |-- netperf_mlx.TCP_STREAM.ERROR
|   |   |   |   |-- netperf_mlx.TCP_STREAM.INFO
|   |   |   |   `-- netperf_mlx.TCP_STREAM.WARNING
|   |   |   |-- keyval
|   |   |   |-- profiling
|   |   |   |-- results
|   |   |   |   `-- keyval
|   |   |   |-- status
|   |   |   `-- sysinfo
|   |   |       |-- added_packages
|   |   |       |-- df
|   |   |       |-- dmesg.gz
|   |   |       |-- iteration.1
|   |   |       |   |-- interrupts.after
|   |   |       |   |-- interrupts.before
|   |   |       |   |-- meminfo.after
|   |   |       |   |-- meminfo.before
|   |   |       |   |-- schedstat.after
|   |   |       |   |-- schedstat.before
|   |   |       |   |-- slabinfo.after
|   |   |       |   `-- slabinfo.before
|   |   |       |-- iteration.2
|   |   |       |   |-- interrupts.after
|   |   |       |   |-- interrupts.before
|   |   |       |   |-- meminfo.after
|   |   |       |   |-- meminfo.before
|   |   |       |   |-- schedstat.after
|   |   |       |   |-- schedstat.before
|   |   |       |   |-- slabinfo.after
|   |   |       |   `-- slabinfo.before
|   |   |       |-- iteration.3
|   |   |       |   |-- interrupts.after
|   |   |       |   |-- interrupts.before
|   |   |       |   |-- meminfo.after
|   |   |       |   |-- meminfo.before
|   |   |       |   |-- schedstat.after
|   |   |       |   |-- schedstat.before
|   |   |       |   |-- slabinfo.after
|   |   |       |   `-- slabinfo.before
|   |   |       |-- iteration.4
|   |   |       |   |-- interrupts.after
|   |   |       |   |-- interrupts.before
|   |   |       |   |-- meminfo.after
|   |   |       |   |-- meminfo.before
|   |   |       |   |-- schedstat.after
|   |   |       |   |-- schedstat.before
|   |   |       |   |-- slabinfo.after
|   |   |       |   `-- slabinfo.before
|   |   |       |-- iteration.5
|   |   |       |   |-- interrupts.after
|   |   |       |   |-- interrupts.before
|   |   |       |   |-- meminfo.after
|   |   |       |   |-- meminfo.before
|   |   |       |   |-- schedstat.after
|   |   |       |   |-- schedstat.before
|   |   |       |   |-- slabinfo.after
|   |   |       |   `-- slabinfo.before
|   |   |       |-- messages.gz
|   |   |       |-- reboot_current -> ../../sysinfo
|   |   |       `-- removed_packages
|   |   |-- .parse.lock
|   |   |-- status
|   |   |-- status.log
|   |   `-- sysinfo
|   |       |-- cmdline
|   |       |-- cpuinfo
|   |       |-- df
|   |       |-- dmesg.gz
|   |       |-- gcc_--version
|   |       |-- hostname
|   |       |-- installed_packages
|   |       |-- interrupts
|   |       |-- ld_--version
|   |       |-- lspci_-vvn
|   |       |-- meminfo
|   |       |-- modules
|   |       |-- mount
|   |       |-- partitions
|   |       |-- proc_mounts
|   |       |-- slabinfo
|   |       |-- uname
|   |       |-- uptime
|   |       `-- version
|   `-- status.log
|-- .parse.lock
|-- status.log
`-- sysinfo

Thanks,
Amir Vadai

Improvements to make to virt.virt_video_maker

We need to:

  • Make it quieter - remove the debugging messages, as they are usually not needed
  • Use the add() API rather than add_many(); see the warning printed below and the sketch after this list:

11/16 13:41:30 ERROR| warnings:0029| /usr/local/autotest/virt/virt_video_maker.py:189: DeprecationWarning: gst.Bin.add_many() is deprecated, use gst.Bin.add()
11/16 13:41:30 ERROR| warnings:0029| pipeline.add_many(source, decoder, encoder, container, output)

  • Make it output VP8 with a WebM container if such an option is available. If not, fall back to Theora and an Ogg container.
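
For the add()/add_many() item, the change could look roughly like the sketch below (illustrative only; the element names are taken from the warning above):

# Replace the deprecated gst.Bin.add_many() with individual add() calls.
for element in (source, decoder, encoder, container, output):
    pipeline.add(element)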

boottool: write replacement for perl-based version

client/tools/boottool has been used successfully to manage boot entries (querying, adding, removing and updating them), but it's showing its age. New Linux distributions, such as Fedora 16, ship with grub2, which is not supported by boottool. On the other hand, grubby has support for all the bootloaders that boottool supports and more, including grub2.

So, let's write a replacement for boottool. The plan is:

  • Write a new boottool, in Python, that leverages grubby (see the sketch after this list)
  • Keep the current boottool around, and use it by default on all bootloaders but grub2
  • Test one bootloader at a time, starting with grub (v1), and then make the new boottool responsible for it
  • When all bootloaders have been tested on the new boottool, drop the current perl-based boottool

We have to be careful and support:

  • The command-line arguments syntax and semantics
  • The client side API (which is also used in the server side API)
  • Grubby local installation and version check on client side
  • Pushing grubby to hosts on server side
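
As a sketch of the grubby-based approach (an illustration with an assumed wrapper name, not the eventual boottool code), adding a boot entry could be a thin wrapper over the grubby command line:

import subprocess

def add_kernel(kernel_path, initrd_path, title, args='', make_default=False):
    # Thin wrapper over grubby; --add-kernel, --initrd, --title, --args and
    # --make-default are standard grubby options.
    cmd = ['grubby', '--add-kernel', kernel_path, '--initrd', initrd_path,
           '--title', title]
    if args:
        cmd += ['--args', args]
    if make_default:
        cmd.append('--make-default')
    subprocess.check_call(cmd)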

Explain job_timeout_default versus job_max_runtime_hrs_default

There are these two options in global_config.ini:

job_timeout_default: 72
job_max_runtime_hrs_default: 72

http://autotest.kernel.org/wiki/GlobalConfig mentions only job_timeout_default.

  1. Please explain in more detail what the first option means. Is it a timeout that applies to jobs in the Queued state (i.e. if a job is not started within 72 hours, abort it)? Or does it apply to jobs in the Running state?
  2. Please add documentation for the second option.

Currently I have no idea what they are supposed to do.

Add conditional aliases to the apache config files

Currently, the aliases in the Apache config files point to one location. For proper re-packaging of Autotest, it would be helpful to have conditional aliases that choose the first existing location, as has been done in apache/conf/django-directives.conf.

Move the function to perform nic hotplugging to common autotest API

It was rightfully pointed out by Alex Williamson that we need to move the functions present in nic_hotplug.py to the common autotest API, for code reuse, allowing people to write more complex tests (say hotplug + migration) more easily.

This is a placeholder task that is here to remind me to finish it and commit it upstream :)

libvirt test: Xen HVM install not working

Xen HVM guest install fails as it tries to download a boot.iso file that does not exist in the created tree. As HVM supports only CD-ROM or PXE install, a boot ISO has to be created for HVM install.

Write a new version of Boottool

Boottool is the component that allows us to register kernels built/installed by Autotest into the client machine OS boot loader.

Currently boottool has support for several boot loaders, including grub, lilo, yaboot and zipl. However, grub2 is out, is already being used by Ubuntu, and will become the default in Fedora 16. boottool is a Perl codebase, and while that is OK, having to support grub2 is a fairly good excuse to start implementing boottool in Python, for the sake of consistency.

Pros: 1) More consistency across the autotest codebase 2) Better maintainability. Maintaining Perl code is not the best experience ever (although the boottool code is pretty decent, considering what I've seen).

Cons: 1) New software, new bugs; we might have some headaches until we have a good version of the new code. This can be somewhat mitigated by paying close attention to the original code while writing the new one.

Let's start evaluating the options.

libvirt unattended_install.cdrom broken for RHEL 5.7

Anaconda throws up an error dialog about the install CD not matching the distribution. Install of RHEL 5.7 works perfectly fine under the KVM test. RHEL 5.6 install works perfectly fine under both the libvirt and KVM tests. Copying the DVD image and ks.iso image under /var/lib/libvirt/images/ seems to resolve the problem even though SELinux is in permissive mode. I have a hunch this problem may be caused by a bug in the ordering of CD drives presented to the guest. More research is needed to be sure.

Jobs are aborted prematurely

The jobs are aborted too early. I have max runtime set to 2 hours, but the jobs are aborted after as little as 1 minute of runtime! It seems that there is some watchdog running periodically every hour (in my case at the second minute of every hour) and it just cancels some (all?) jobs that are running at that time.

$ grep job_ ~autotest/global_config.ini
job_timeout_default: 72
job_max_runtime_hrs_default: 2
$ grep 9542 ~autotest/logs/scheduler.log.2011-12-14-04.00.22 
12/15 19:01:04 INFO |scheduler_:0496| Assigning host virt24.qa to entry no host/9542 (9542) Queued
12/15 19:01:04 INFO |scheduler_:0516| creating block 9542/7
12/15 19:01:09 INFO |scheduler_:0673| post-koji-build:rpmlint.noarch/6/None (job 9542, entry 9542) scheduled on virt24.qa, status=Queued
12/15 19:01:09 INFO |scheduler_:0561| virt24.qa/9542 (9542) Queued -> Verifying
12/15 19:01:15 INFO |scheduler_:0561| virt24.qa/9542 (9542) Verifying [active] -> Verifying
12/15 19:01:20 INFO |scheduler_:0561| virt24.qa/9542 (9542) Verifying [active] -> Pending
12/15 19:01:20 INFO |scheduler_:0561| virt24.qa/9542 (9542) Pending [active] -> Starting
12/15 19:01:25 INFO |scheduler_:0561| virt24.qa/9542 (9542) Starting [active] -> Running
12/15 19:01:25 INFO |drone_mana:0539| command = ['nice', '-n', '10', '/usr/share/autotest/server/autoserv', '-p', '-r', u'/usr/share/autotest/results/9542-autotest/virt24.qa', '-m', u'virt24.qa', '-u', u'autotest', '-l', u'post-koji-build:rpmlint.noarch', '-P', u'9542-autotest/virt24.qa', '-n', '/usr/share/autotest/results/drone_tmp/attach.1142', '-c']
12/15 19:01:25 INFO |drone_mana:0565| monitoring pidfile /usr/share/autotest/results/9542-autotest/virt24.qa/.autoserv_execute
12/15 19:02:12 WARNI|monitor_db:0085| Aborting entry virt24.qa/9542 (9542) due to max runtime
12/15 19:02:12 WARNI|scheduler_:0183| initialized <class 'autotest_lib.scheduler.scheduler_models.HostQueueEntry'> 9542 instance requery is updating: {'aborted': (0, 1)}
12/15 19:02:12 INFO |monitor_db:0734| Aborting virt24.qa/9542 (9542) Running [active,aborted]
12/15 19:02:12 INFO |scheduler_:0561| virt24.qa/9542 (9542) Running [active,aborted] -> Gathering
12/15 19:02:23 INFO |monitor_db:0734| Aborting virt24.qa/9542 (9542) Gathering [active,aborted]
12/15 19:02:23 INFO |drone_mana:0565| monitoring pidfile /usr/share/autotest/results/9542-autotest/virt24.qa/.collect_crashinfo_execute
12/15 19:02:23 INFO |drone_mana:0539| command = ['nice', '-n', '10', '/usr/share/autotest/server/autoserv', '-p', '--pidfile-label=collect_crashinfo', '--use-existing-results', '--collect-crashinfo', '-m', u'virt24.qa', '-r', u'/usr/share/autotest/results/9542-autotest/virt24.qa']
12/15 19:02:23 INFO |drone_mana:0540| log file = localhost:/usr/share/autotest/results/9542-autotest/virt24.qa/.collect_crashinfo.log
12/15 19:02:29 INFO |monitor_db:0734| Aborting virt24.qa/9542 (9542) Gathering [active,aborted]
12/15 19:02:29 INFO |scheduler_:0561| virt24.qa/9542 (9542) Gathering [active,aborted] -> Parsing
12/15 19:02:34 INFO |monitor_db:0734| Aborting virt24.qa/9542 (9542) Parsing [aborted]
12/15 19:02:34 INFO |drone_mana:0565| monitoring pidfile /usr/share/autotest/results/9542-autotest/virt24.qa/.parser_execute
12/15 19:02:34 INFO |drone_mana:0539| command = ['nice', '-n', '10', '/usr/share/autotest/tko/parse', '--write-pidfile', '-l', '2', '-r', '-o', u'/usr/share/autotest/results/9542-autotest/virt24.qa']
12/15 19:02:34 INFO |drone_mana:0540| log file = localhost:/usr/share/autotest/results/9542-autotest/virt24.qa/.parse.log
12/15 19:02:39 INFO |monitor_db:0734| Aborting virt24.qa/9542 (9542) Parsing [aborted]
12/15 19:02:39 INFO |scheduler_:0561| virt24.qa/9542 (9542) Parsing [aborted] -> Archiving
12/15 19:02:44 INFO |monitor_db:0734| Aborting virt24.qa/9542 (9542) Archiving [aborted]
12/15 19:02:44 INFO |drone_mana:0565| monitoring pidfile /usr/share/autotest/results/9542-autotest/virt24.qa/.archiver_execute
12/15 19:02:44 INFO |drone_mana:0539| command = ['nice', '-n', '10', '/usr/share/autotest/server/autoserv', '-p', '--pidfile-label=archiver', '-r', u'/usr/share/autotest/results/9542-autotest/virt24.qa', '--use-existing-results', '--control-filename=control.archive', '/usr/share/autotest/scheduler/archive_results.control.srv']
12/15 19:02:44 INFO |drone_mana:0540| log file = localhost:/usr/share/autotest/results/9542-autotest/virt24.qa/.archiving.log
12/15 19:02:49 INFO |monitor_db:0734| Aborting virt24.qa/9542 (9542) Archiving [aborted]
12/15 19:02:49 INFO |scheduler_:0561| virt24.qa/9542 (9542) Archiving [aborted] -> Aborted
12/15 19:02:49 INFO |drone_mana:0577| forgetting pidfile /usr/share/autotest/results/9542-autotest/virt24.qa/.autoserv_execute
12/15 19:02:49 INFO |drone_mana:0577| forgetting pidfile /usr/share/autotest/results/9542-autotest/virt24.qa/.collect_crashinfo_execute
12/15 19:02:49 INFO |drone_mana:0577| forgetting pidfile /usr/share/autotest/results/9542-autotest/virt24.qa/.parser_execute
12/15 19:02:49 INFO |drone_mana:0577| forgetting pidfile /usr/share/autotest/results/9542-autotest/virt24.qa/.archiver_execute
$ grep 9568 ~autotest/logs/scheduler.log.2011-12-14-04.00.22 
12/15 22:50:27 INFO |scheduler_:0496| Assigning host virt26.qa to entry no host/9568 (9568) Queued
12/15 22:50:27 INFO |scheduler_:0516| creating block 9568/4
12/15 22:50:32 INFO |scheduler_:0673| post-koji-build:rpmguard.noarch/6/None (job 9568, entry 9568) scheduled on virt26.qa, status=Queued
12/15 22:50:32 INFO |scheduler_:0561| virt26.qa/9568 (9568) Queued -> Verifying
12/15 22:50:37 INFO |scheduler_:0561| virt26.qa/9568 (9568) Verifying [active] -> Verifying
12/15 22:50:42 INFO |scheduler_:0561| virt26.qa/9568 (9568) Verifying [active] -> Pending
12/15 22:50:42 INFO |scheduler_:0561| virt26.qa/9568 (9568) Pending [active] -> Starting
12/15 22:50:48 INFO |scheduler_:0561| virt26.qa/9568 (9568) Starting [active] -> Running
12/15 22:50:48 INFO |drone_mana:0539| command = ['nice', '-n', '10', '/usr/share/autotest/server/autoserv', '-p', '-r', u'/usr/share/autotest/results/9568-autotest/virt26.qa', '-m', u'virt26.qa', '-u', u'autotest', '-l', u'post-koji-build:rpmguard.noarch', '-P', u'9568-autotest/virt26.qa', '-n', '/usr/share/autotest/results/drone_tmp/attach.1211', '-c']
12/15 22:50:48 INFO |drone_mana:0565| monitoring pidfile /usr/share/autotest/results/9568-autotest/virt26.qa/.autoserv_execute
12/15 23:02:21 WARNI|monitor_db:0085| Aborting entry virt26.qa/9568 (9568) due to max runtime
12/15 23:02:21 WARNI|scheduler_:0183| initialized <class 'autotest_lib.scheduler.scheduler_models.HostQueueEntry'> 9568 instance requery is updating: {'aborted': (0, 1)}
12/15 23:02:21 INFO |monitor_db:0734| Aborting virt26.qa/9568 (9568) Running [active,aborted]
12/15 23:02:21 INFO |scheduler_:0561| virt26.qa/9568 (9568) Running [active,aborted] -> Gathering
12/15 23:02:32 INFO |monitor_db:0734| Aborting virt26.qa/9568 (9568) Gathering [active,aborted]
12/15 23:02:32 INFO |drone_mana:0565| monitoring pidfile /usr/share/autotest/results/9568-autotest/virt26.qa/.collect_crashinfo_execute
12/15 23:02:32 INFO |drone_mana:0539| command = ['nice', '-n', '10', '/usr/share/autotest/server/autoserv', '-p', '--pidfile-label=collect_crashinfo', '--use-existing-results', '--collect-crashinfo', '-m', u'virt26.qa', '-r', u'/usr/share/autotest/results/9568-autotest/virt26.qa']
12/15 23:02:32 INFO |drone_mana:0540| log file = localhost:/usr/share/autotest/results/9568-autotest/virt26.qa/.collect_crashinfo.log
12/15 23:02:37 INFO |monitor_db:0734| Aborting virt26.qa/9568 (9568) Gathering [active,aborted]
12/15 23:02:37 INFO |scheduler_:0561| virt26.qa/9568 (9568) Gathering [active,aborted] -> Parsing
12/15 23:02:42 INFO |monitor_db:0734| Aborting virt26.qa/9568 (9568) Parsing [aborted]
12/15 23:02:42 INFO |drone_mana:0565| monitoring pidfile /usr/share/autotest/results/9568-autotest/virt26.qa/.parser_execute
12/15 23:02:42 INFO |drone_mana:0539| command = ['nice', '-n', '10', '/usr/share/autotest/tko/parse', '--write-pidfile', '-l', '2', '-r', '-o', u'/usr/share/autotest/results/9568-autotest/virt26.qa']
12/15 23:02:42 INFO |drone_mana:0540| log file = localhost:/usr/share/autotest/results/9568-autotest/virt26.qa/.parse.log
12/15 23:02:47 INFO |monitor_db:0734| Aborting virt26.qa/9568 (9568) Parsing [aborted]
12/15 23:02:47 INFO |scheduler_:0561| virt26.qa/9568 (9568) Parsing [aborted] -> Archiving
12/15 23:02:52 INFO |monitor_db:0734| Aborting virt26.qa/9568 (9568) Archiving [aborted]
12/15 23:02:52 INFO |drone_mana:0565| monitoring pidfile /usr/share/autotest/results/9568-autotest/virt26.qa/.archiver_execute
12/15 23:02:52 INFO |drone_mana:0539| command = ['nice', '-n', '10', '/usr/share/autotest/server/autoserv', '-p', '--pidfile-label=archiver', '-r', u'/usr/share/autotest/results/9568-autotest/virt26.qa', '--use-existing-results', '--control-filename=control.archive', '/usr/share/autotest/scheduler/archive_results.control.srv']
12/15 23:02:52 INFO |drone_mana:0540| log file = localhost:/usr/share/autotest/results/9568-autotest/virt26.qa/.archiving.log
12/15 23:02:57 INFO |monitor_db:0734| Aborting virt26.qa/9568 (9568) Archiving [aborted]
12/15 23:02:57 INFO |scheduler_:0561| virt26.qa/9568 (9568) Archiving [aborted] -> Aborted
12/15 23:02:57 INFO |drone_mana:0577| forgetting pidfile /usr/share/autotest/results/9568-autotest/virt26.qa/.autoserv_execute
12/15 23:02:57 INFO |drone_mana:0577| forgetting pidfile /usr/share/autotest/results/9568-autotest/virt26.qa/.collect_crashinfo_execute
12/15 23:02:57 INFO |drone_mana:0577| forgetting pidfile /usr/share/autotest/results/9568-autotest/virt26.qa/.parser_execute
12/15 23:02:57 INFO |drone_mana:0577| forgetting pidfile /usr/share/autotest/results/9568-autotest/virt26.qa/.archiver_execute
$ grep 9574 ~autotest/logs/scheduler.log.2011-12-14-04.00.22
12/15 23:50:31 INFO |scheduler_:0496| Assigning host virt26.qa to entry no host/9574 (9574) Queued
12/15 23:50:31 INFO |scheduler_:0516| creating block 9574/4
12/15 23:50:36 INFO |scheduler_:0673| post-koji-build:rpmguard.noarch/6/None (job 9574, entry 9574) scheduled on virt26.qa, status=Queued
12/15 23:50:36 INFO |scheduler_:0561| virt26.qa/9574 (9574) Queued -> Verifying
12/15 23:50:41 INFO |scheduler_:0561| virt26.qa/9574 (9574) Verifying [active] -> Verifying
12/15 23:50:46 INFO |scheduler_:0561| virt26.qa/9574 (9574) Verifying [active] -> Pending
12/15 23:50:46 INFO |scheduler_:0561| virt26.qa/9574 (9574) Pending [active] -> Starting
12/15 23:50:52 INFO |scheduler_:0561| virt26.qa/9574 (9574) Starting [active] -> Running
12/15 23:50:52 INFO |drone_mana:0539| command = ['nice', '-n', '10', '/usr/share/autotest/server/autoserv', '-p', '-r', u'/usr/share/autotest/results/9574-autotest/virt26.qa', '-m', u'virt26.qa', '-u', u'autotest', '-l', u'post-koji-build:rpmguard.noarch', '-P', u'9574-autotest/virt26.qa', '-n', '/usr/share/autotest/results/drone_tmp/attach.1230', '-c']
12/15 23:50:52 INFO |drone_mana:0565| monitoring pidfile /usr/share/autotest/results/9574-autotest/virt26.qa/.autoserv_execute
12/16 00:02:25 WARNI|monitor_db:0085| Aborting entry virt26.qa/9574 (9574) due to max runtime
12/16 00:02:25 WARNI|scheduler_:0183| initialized <class 'autotest_lib.scheduler.scheduler_models.HostQueueEntry'> 9574 instance requery is updating: {'aborted': (0, 1)}
12/16 00:02:25 INFO |monitor_db:0734| Aborting virt26.qa/9574 (9574) Running [active,aborted]
12/16 00:02:25 INFO |scheduler_:0561| virt26.qa/9574 (9574) Running [active,aborted] -> Gathering
12/16 00:02:36 INFO |monitor_db:0734| Aborting virt26.qa/9574 (9574) Gathering [active,aborted]
12/16 00:02:36 INFO |drone_mana:0565| monitoring pidfile /usr/share/autotest/results/9574-autotest/virt26.qa/.collect_crashinfo_execute
12/16 00:02:36 INFO |drone_mana:0539| command = ['nice', '-n', '10', '/usr/share/autotest/server/autoserv', '-p', '--pidfile-label=collect_crashinfo', '--use-existing-results', '--collect-crashinfo', '-m', u'virt26.qa', '-r', u'/usr/share/autotest/results/9574-autotest/virt26.qa']
12/16 00:02:36 INFO |drone_mana:0540| log file = localhost:/usr/share/autotest/results/9574-autotest/virt26.qa/.collect_crashinfo.log
12/16 00:02:41 INFO |monitor_db:0734| Aborting virt26.qa/9574 (9574) Gathering [active,aborted]
12/16 00:02:41 INFO |scheduler_:0561| virt26.qa/9574 (9574) Gathering [active,aborted] -> Parsing
12/16 00:02:46 INFO |monitor_db:0734| Aborting virt26.qa/9574 (9574) Parsing [aborted]
12/16 00:02:46 INFO |drone_mana:0565| monitoring pidfile /usr/share/autotest/results/9574-autotest/virt26.qa/.parser_execute
12/16 00:02:46 INFO |drone_mana:0539| command = ['nice', '-n', '10', '/usr/share/autotest/tko/parse', '--write-pidfile', '-l', '2', '-r', '-o', u'/usr/share/autotest/results/9574-autotest/virt26.qa']
12/16 00:02:46 INFO |drone_mana:0540| log file = localhost:/usr/share/autotest/results/9574-autotest/virt26.qa/.parse.log
12/16 00:02:51 INFO |monitor_db:0734| Aborting virt26.qa/9574 (9574) Parsing [aborted]
12/16 00:02:51 INFO |scheduler_:0561| virt26.qa/9574 (9574) Parsing [aborted] -> Archiving
12/16 00:02:56 INFO |monitor_db:0734| Aborting virt26.qa/9574 (9574) Archiving [aborted]
12/16 00:02:56 INFO |drone_mana:0565| monitoring pidfile /usr/share/autotest/results/9574-autotest/virt26.qa/.archiver_execute
12/16 00:02:56 INFO |drone_mana:0539| command = ['nice', '-n', '10', '/usr/share/autotest/server/autoserv', '-p', '--pidfile-label=archiver', '-r', u'/usr/share/autotest/results/9574-autotest/virt26.qa', '--use-existing-results', '--control-filename=control.archive', '/usr/share/autotest/scheduler/archive_results.control.srv']
12/16 00:02:56 INFO |drone_mana:0540| log file = localhost:/usr/share/autotest/results/9574-autotest/virt26.qa/.archiving.log
12/16 00:03:01 INFO |monitor_db:0734| Aborting virt26.qa/9574 (9574) Archiving [aborted]
12/16 00:03:01 INFO |scheduler_:0561| virt26.qa/9574 (9574) Archiving [aborted] -> Aborted
12/16 00:03:01 INFO |drone_mana:0577| forgetting pidfile /usr/share/autotest/results/9574-autotest/virt26.qa/.autoserv_execute
12/16 00:03:01 INFO |drone_mana:0577| forgetting pidfile /usr/share/autotest/results/9574-autotest/virt26.qa/.collect_crashinfo_execute
12/16 00:03:01 INFO |drone_mana:0577| forgetting pidfile /usr/share/autotest/results/9574-autotest/virt26.qa/.parser_execute
12/16 00:03:01 INFO |drone_mana:0577| forgetting pidfile /usr/share/autotest/results/9574-autotest/virt26.qa/.archiver_execute

KVM autotest: Write migration tests with stable check with special drive configuration

Problem: Automate the test case described below:

This is the test case that I was using:

$ cat blkdebug.conf
[inject-error]
state = "2"
event = "read_aio"
errno = "7"
immediately = "off"
once = "on"

[set-state]
state = "1"
event = "read_aio"
new_state = "2"

[set-state]
state = "2"
event = "read_aio"
new_state = "3"

$ x86_64-softmmu/qemu-system-x86_64 -drive file=blkdebug:blkdebug.conf:disk.qcow2,rerror=stop,werror=stop -incoming tcp::1028
$ x86_64-softmmu/qemu-system-x86_64 -drive file=blkdebug:blkdebug.conf:disk.qcow2,rerror=stop,werror=stop
(qemu) migrate tcp::1028

Please note that you need to use a qcow2 image, or the blkdebug events won't be triggered.

Expected result: The very first read from the disk (BIOS reading the MBR) fails and stops the VM. I migrate while the VM is stopped, so that the request is queued for resubmission. After continuing, the VM should work just as normal.
Here's the implementation plan:

  1. Make KVM autotest support config files for drives (file=foo:foo.conf:foo.qcow2)

  2. KVM autotest has functions to perform live migration and check the VM state files before and after migration. If you pass the param 'stable_check=True' to the migration utility function, it'll perform this check:

else:
    wait_for_migration()
    if (dest_host == 'localhost') and stable_check:
        save_path = None or "/tmp"
        save1 = os.path.join(save_path, "src")
        save2 = os.path.join(save_path, "dst")
        vm.save_to_file(save1)
        dest_vm.save_to_file(save2)
        # Fail if we see deltas
        md5_save1 = utils.hash_file(save1)
        md5_save2 = utils.hash_file(save2)
        if md5_save1 != md5_save2:
            raise error.TestFail("Mismatch of VM state before "
                                 "and after migration")

So it's mostly a matter of implementing 1), which is fairly easy, and cooking up a dedicated test that boots the VM with the disks and a config file for the disks, then performs migration with stable check during boot.

Support noarch tests

We (at the AutoQA project) have many noarch tests. They can be run on any available machine. But autotest currently doesn't support that. The "atest job create" syntax dictates that we can use a "(number)*(platform)" option, but we can't specify that the platform may be arbitrary (like 2*any or 2*noarch).

This causes us some difficulties (not to mention lower performance); please support noarch tests.

Autotest's iperf test is hanging

I am trying to run autotest, and I've isolated the hang to client/common_lib/logging_manager.py. It hangs on the line "for line in iter(input_file.readline, '')" in the method "_run_logging_subprocess" of the class "_FdRedirectionStreamManager".

Parameter delete_dest=True in host.send_file() appears to be broken

This has been reported in autoqa-devel mailing list: https://fedorahosted.org/pipermail/autoqa-devel/2011-January/001548.html

It was discovered when overriding the BaseAutotest._install() method.

Quote:

It's simple. Try to do:
host.send_file('/etc/autoqa', '/etc/autoqa', delete_dest=True)

This won't work with rsync, it will create /etc/autoqa/autoqa even
if the destination doesn't exist. On the other hand, it will work
with scp if the destination doesn't exist, but it won't work if
the destination does exist (it will create /etc/autoqa/autoqa).

So let's try:
host.send_file('/etc/autoqa/', '/etc/autoqa/', delete_dest=True)

This will work with rsync in both cases (destination exists and
doesn't exist). But scp will fail if the destination doesn't exist.

Broooken! :-)

The easiest workaround is to remove and re-create the destination manually and always use slashes.

There seems to be very different behavior between rsync and its scp fallback when it comes to copying directories. It seems that the delete_dest=True option does not work properly (but there can be other culprits too).

Do not hardcode mailserver in common_lib.utils.send_email

Please add an option to the send_email() method in client/common_lib/utils.py to specify a custom mail server.

Current method definition is:

def send_email(mail_from, mail_to, subject, body):

The proposed change is:

def send_email(mail_from, mail_to, subject, body, mail_server='localhost'):
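
A minimal sketch of the proposed signature (an assumption about the implementation, not the actual autotest code), keeping today's behavior by defaulting to localhost:

import smtplib
from email.mime.text import MIMEText

def send_email(mail_from, mail_to, subject, body, mail_server='localhost'):
    # Same interface as today, plus an optional SMTP server instead of the
    # hardcoded localhost.
    if isinstance(mail_to, str):
        mail_to = [mail_to]
    msg = MIMEText(body)
    msg['Subject'] = subject
    msg['From'] = mail_from
    msg['To'] = ', '.join(mail_to)
    server = smtplib.SMTP(mail_server)
    try:
        server.sendmail(mail_from, mail_to, msg.as_string())
    finally:
        server.quit()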

Write support for testing qemu fd migration protocol

Now the hard part is finding documentation about it... I only learned recently (today) that this protocol existed, and don't yet have an idea how it works.

Since cleber resolved fd passing in KVM autotest, we might be able to test it, at least for the branches that do support it.

Unicode errors in autotest

Dale Curtis has brought the following problems to our attention:

There are actually a couple more errors relating to unicode in the UI and error reporting. The two others that I've seen thus far are:

The RPC interface (UI/CLI) won't accept un-encoded unicode in the control file (probably other UI fields too); e.g., "\xE9\x96\x93" versus "間".
Raising exceptions with unicode messages will result in the same UnicodeEncodeError.
Here's the test where we noticed these issues:

http://git.chromium.org/gitweb/?p=chromiumos/third_party/autotest.git&a=tree&f=client/site_tests/desktopui_ImeTest

Let's identify and fix these issues
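
One possible defensive pattern (a sketch assuming the failing code paths expect byte strings, as was common in this Python 2 codebase; not the actual fix):

def to_bytes(value, encoding='utf-8'):
    # Encode unicode text to UTF-8 bytes before handing it to code that
    # expects byte strings; leave everything else untouched.
    if isinstance(value, unicode):
        return value.encode(encoding)
    return value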

List jobs for all users by default

When I open the web interface in the Job List tab, it displays all jobs from the debug_user owner by default. I have to switch to All users or the autotest user if I want to see my jobs. This is inconvenient, especially with a high number of completed jobs (hundreds of thousands). I have to wait 10-15 seconds until the debug_user jobs are displayed, and only then can I switch to my real user (and wait again).

Please make the All users option the default one. Or is it configurable somewhere? I didn't find it.

KVM autotest: Problems with set_link under RHEL 6.1

The set_link subtest fails in Autotest, since the guest is pingable even though the network adapter is down. Manually it works fine. Looks like Autotest has an issue with the set_link subtest.

Host:

Linux 2.6.32-131.0.13.el6.x86_64

qemu kernel version - QEMU emulator version 0.14.50

Error message:

06:48:56 INFO | Logged into guest vm1 using remote connection
06:48:56 INFO | Issue set_link commands for netdevs
06:49:05 ERROR| Test failed: TestFail: Still can ping the 192.168.122.111 after down idwcHFen
06:49:18 INFO | (qemu) (Process terminated with status 0)
06:49:19 INFO | ['iteration.1']
06:49:19 ERROR| Exception escaping from test:
Traceback (most recent call last):
  File "/home/autotest/client/common_lib/test.py", line 419, in _exec 
_call_test_function(self.execute, *p_args, **p_dargs)
  File "/home/autotest/client/common_lib/test.py", line 605, in
_call_test_function
    return func(*args, **dargs)
  File "/home/autotest/client/common_lib/test.py", line 291, in execute
    postprocess_profiled_run, args, dargs)
  File "/home/autotest/client/common_lib/test.py", line 209, in
_call_run_once
    *args, **dargs)
  File "/home/autotest/client/common_lib/test.py", line 315, in
run_once_profiling
    self.run_once(*args, **dargs)
  File "/home/autotest/client/tests/kvm/kvm.py", line 80, in run_once
    run_func(self, params, env)
  File "/home/autotest/client/tests/kvm/tests/set_link.py", line 49, in
run_set_link
    set_link_test(netdev_id)
  File "/home/autotest/client/tests/kvm/tests/set_link.py", line 36, in
set_link_test
    (ip, linkid))
TestFail: Still can ping the 192.168.122.111 after down idwcHFen 
06:49:25 ERROR| child process failed
06:49:25 INFO |                 FAIL
kvm.raw.smp2.RHEL.6.1.64.e1000.set_linkkvm.raw.smp2.RHEL.6.1.64.e1000.set_link  timestamp=1309344565        localtime=Jun 29 06:49:25       Still can ping the 192.168.122.111 after down idwcHFen
06:49:25 INFO |         END FAIL
kvm.raw.smp2.RHEL.6.1.64.e1000.set_linkkvm.raw.smp2.RHEL.6.1.64.e1000.set_link  timestamp=1309344565       localtime=Jun 29 06:49:25       
06:49:26 INFO | END GOOD        ----    ----    timestamp=1309344566    localtime=Jun 29
06:49:26        

Add comments to global_config.ini

Problem

Currently it's hard to set up global_config.ini properly, because lots of options are missing documentation. The missing documentation is available at http://autotest.kernel.org/wiki/GlobalConfig. But:

  1. you have to know that you should look for this wiki page
  2. the wiki page is very hard to find (the github wiki is not searchable; the only way to access it that I know of is through https://github.com/autotest/autotest/wiki/_pages)

Proposal

Have a policy that every option in config file(s) must be documented.

Implementation

  1. Transfer all documentation from the GlobalConfig wiki page into global_config.ini.
  2. There will still be some undocumented options; add comments for those.
  3. There might be some documentation that is too extensive to include in the .ini file. That is OK, leave it in GlobalConfig.
  4. At the beginning of global_config.ini, put a link to GlobalConfig.
