fedora-modularity / meta-test-family

Meta test family (MTF) is a tool to test components of a modular Fedora.

Home Page: https://docs.pagure.org/modularity/

License: Other

Makefile 1.22% Python 94.76% Shell 3.33% Ruby 0.68%
modularity testing python2

meta-test-family's Introduction


WELCOME TO MTF GITHUB

Meta test family (MTF) is a tool to test components of a modular Fedora.

For more information, check out the documentation page:

http://meta-test-family.readthedocs.io/en/latest/

QUESTIONS ? HELP ? IDEAS ?

Stop by the #fedora-modularity chat channel on freenode IRC.

CONTRIBUTING ?

Please see the contributing guidelines for information regarding the contribution process.

BUG TO REPORT ?

First and foremost, check the contributing guidelines.

You can report bugs or make enhancement requests at the MTF GitHub issue page.

Also please make sure you are testing on the latest released version of MTF or the development branch.

Thanks!

meta-test-family's People

Contributors

alexxa, bkabrda, dhodovsk, ignatenkobrain, jscotka, karstenhopp, lukaszachy, nphilipp, phracek, psklenar, rpitonak, tomastomecek, usercont-release-bot


meta-test-family's Issues

running mtf in a container results in odd traceback

Reproduced traceback from: /usr/lib/python2.7/site-packages/avocado/core/test.py:576
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/moduleframework/module_framework.py", line 1121, in setUp
    return self.backend.setUp()
  File "/usr/lib/python2.7/site-packages/moduleframework/module_framework.py", line 286, in setUp
    self.__prepareContainer()
  File "/usr/lib/python2.7/site-packages/moduleframework/module_framework.py", line 321, in __prepareContainer
    service_manager = service.ServiceManager()
  File "/usr/lib/python2.7/site-packages/avocado/utils/service.py", line 774, in service_manager
    internal_service_manager = _service_managers[get_name_of_init(run)]
KeyError: 'make'

code from here

baseruntime check

Why are all images required to be based on baseruntime/baseruntime? What if I don't want to use baseruntime at all or my image is based on image that is based on baseruntime (i.e. I do use baseruntime transitively)?

logs are too verbose, showing output from the equivalent of `rpm -ql $(rpm -qa)`

We've just run into an issue that MTF outputs megabytes of logs when run with avocado run --show-job-log. The issue seems to be that MTF logs to stdout something like this: rpm -ql $(rpm -qa), snippet:

"rpm -qa"' finished with 0 after 0.224642038345s
Running 'docker exec 53c6521953c0b44c71734fcdcedf59524c2e7ffa58ca8e5e4392c0917905c11b bash -c "rpm -ql fedora-repos-26-1.noarch"'
...

Can we get those command outputs not logged?

MTF should work without "rpm" module

Hi, I have a config.yaml like this:

document: modularity-testing
version: 1
name: myawesomecontainer
service:
  port: 9000
packages:
  rpms:
    - nmap-ncat
module:
    docker:
      start: "docker run -d -p 9000:9000"
    rpm:
      repo: <something>

I only care about testing the docker module. When I remove the rpm section, avocado fails and the following error is in the log file:

2017-08-11 09:12:45,925 test             L0240 INFO | START 01-sanity1.py:SanityCheck1.test1
2017-08-11 09:12:45,931 output           L0655 DEBUG| Process Process-1:
2017-08-11 09:12:45,931 output           L0655 DEBUG| Traceback (most recent call last):
2017-08-11 09:12:45,931 output           L0655 DEBUG|   File "/usr/lib64/python2.7/multiprocessing/process.py", line 258, in _bootstrap
2017-08-11 09:12:45,931 output           L0655 DEBUG|     self.run()
2017-08-11 09:12:45,931 output           L0655 DEBUG|   File "/usr/lib64/python2.7/multiprocessing/process.py", line 114, in run
2017-08-11 09:12:45,931 output           L0655 DEBUG|     self._target(*self._args, **self._kwargs)
2017-08-11 09:12:45,931 output           L0655 DEBUG|   File "/home/bkabrda/.local/lib/python2.7/site-packages/avocado/core/runner.py", line 325, in _run_test
2017-08-11 09:12:45,932 output           L0655 DEBUG|     instance = loader.load_test(test_factory)
2017-08-11 09:12:45,932 output           L0655 DEBUG|   File "/home/bkabrda/.local/lib/python2.7/site-packages/avocado/core/loader.py", line 338, in load_test
2017-08-11 09:12:45,932 output           L0655 DEBUG|     test_instance = test_class(**test_parameters)
2017-08-11 09:12:45,932 output           L0655 DEBUG|   File "/home/bkabrda/.local/lib/python2.7/site-packages/moduleframework/module_framework.py", line 1101, in __init__
2017-08-11 09:12:45,932 output           L0655 DEBUG|     (self.backend, self.moduleType) = get_correct_backend()
2017-08-11 09:12:45,932 output           L0655 DEBUG|   File "/home/bkabrda/.local/lib/python2.7/site-packages/moduleframework/module_framework.py", line 1352, in get_correct_backend
2017-08-11 09:12:45,932 output           L0655 DEBUG|     readconfig.loadconfig()
2017-08-11 09:12:45,932 output           L0655 DEBUG|   File "/home/bkabrda/.local/lib/python2.7/site-packages/moduleframework/module_framework.py", line 146, in loadconfig
2017-08-11 09:12:45,933 output           L0655 DEBUG|     'source') else self.config['module']['rpm'].get('source')
2017-08-11 09:12:45,933 output           L0655 DEBUG| KeyError: 'rpm'

I think it should be possible to omit the rpm module, hence I think this is a bug.
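
For illustration, a defensive loader could treat each backend section as optional via dict.get(). This is a sketch of the idea, not the actual module_framework.py code; the key names are taken from the config above.

```python
def load_module_config(config):
    """Read per-backend sections, treating each backend as optional.

    `config` is the parsed config.yaml dict. The real loader indexes
    config['module']['rpm'] directly, which raises KeyError when only
    the docker section is present; .get() avoids that.
    """
    module = config.get("module", {})
    rpm = module.get("rpm") or {}
    docker = module.get("docker") or {}
    return {
        "rpm_repo": rpm.get("repo") or rpm.get("source"),
        "docker_start": docker.get("start"),
    }

# the config.yaml from this report, with the rpm section removed
cfg = {"module": {"docker": {"start": "docker run -d -p 9000:9000"}}}
print(load_module_config(cfg))  # no KeyError: rpm_repo is simply None
```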

Not sure if rpmvalidation test makes sense for containers

So if I understand the rpmvalidation test correctly, it was meant for modules - to verify that no package from a given module is in violation of FHS standards.

I'm not sure if that should apply to containers, I think there could be two approaches to this:

  1. a container is a unit, we don't really care what's inside it => skip this test
  2. a container is a bunch of content that got installed during build; we don't care how individual files got inside the container, we want to verify that all of them (be it from RPM or not) adhere to FHS

I think 1) is not a good option, since we're trying to encourage best practices. So I'd go with 2) - but since content can get inside a container in multiple ways (e.g. through pip install, npm install, etc.), it doesn't make much sense to me to do an RPM-based test. I'd rather verify that there are no directories/files violating FHS simply by listing the files in FHS-specified directories.

Does this make sense?
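
Approach 2 could be sketched as a whitelist check over the container's top-level directories. The whitelist below is illustrative, not a complete or authoritative FHS list.

```python
# Flag top-level entries not allowed by the FHS, regardless of whether
# they were installed from an RPM, pip, npm, or anything else.
FHS_TOPLEVEL = {
    "bin", "boot", "dev", "etc", "home", "lib", "lib64", "media", "mnt",
    "opt", "proc", "root", "run", "sbin", "srv", "sys", "tmp", "usr", "var",
}

def fhs_violations(toplevel_entries):
    """Return entries (e.g. from `ls /` inside the container) outside the FHS."""
    return sorted(set(toplevel_entries) - FHS_TOPLEVEL)

print(fhs_violations(["bin", "etc", "usr", "my-app", "data"]))  # → ['data', 'my-app']
```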

Show logs or an output in case some tests fail

MTF should be able to show logs for failed tests. As soon as mtf finishes, the logs of FAILED tests should be visible, so the user does not have to open the log file to find out which tests failed.
Think about whether there should be an option for enabling this behaviour.

PR #63 adds a bash script called mtf, which should be enhanced based on this issue.
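
A sketch of the idea, assuming an avocado-style results.json with a top-level "tests" list carrying "id" and "status" fields; the exact schema is an assumption here, not a guaranteed format.

```python
import json

def failed_tests(results_json_text):
    """Collect FAIL/ERROR test ids from an avocado-style results file."""
    data = json.loads(results_json_text)
    return [t["id"] for t in data.get("tests", [])
            if t.get("status") in ("FAIL", "ERROR")]

sample = json.dumps({"tests": [
    {"id": "01-sanity1.py:SanityCheck1.test1", "status": "PASS"},
    {"id": "02-sanity1.py:SanityCheck1.test2", "status": "FAIL"},
]})
print(failed_tests(sample))  # → ['02-sanity1.py:SanityCheck1.test2']
```

The mtf wrapper script could run something like this after avocado finishes and print (or tail) the log of each failed test.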

Enhance / improve dockerlint.py class according to Container Best Practices

The classes mentioned in https://github.com/fedora-modularity/meta-test-family/blob/devel/tools/modulelint.py
have to be enhanced / rewritten so that they cover Container Best Practices.

Currently, it contains several checks in the DockerfileSanitize class:

  • test_architecture_in_env_and_label_exists
  • test_name_in_env_and_label_exists
  • test_release_label_exists
  • test_version_label_exists
  • test_com_red_hat_component_label_exists
  • test_iok8s_description_exists - WILL BE REMOVED
  • test_io_openshift_expose_services_exists - WILL BE REMOVED
  • test_io_openshift_tags_exists - WILL BE REMOVED

The other checks are in the DockerfileLinterInContainer class:

  • test_docker_nodocs #97
  • test_docker_clean_all #97

Other checks to implement, based on http://docs.projectatomic.io/container-best-practices/:

  • FROM is the first directive in the Dockerfile
  • FROM uses a valid format, as described at http://docs.projectatomic.io/container-best-practices/#_line_rule_section
  • check that help.md is mentioned in the Dockerfile
  • check if help.md is in the correct location
  • check if EXPOSE ports contain only numbers
  • check if ADD references files that really exist in the directory
  • check if COPY copies files present in the directory
  • check if the USER directive contains only numbers - this is not valid, since USER can be specified by digits or by a string.

Check whether the help.md file contains the following (see http://docs.projectatomic.io/container-best-practices/#_sample_template):

  • IMAGE_NAME tag
  • MAINTAINER
  • NAME
  • DESCRIPTION
  • USAGE
  • ENVIRONMENT VARIABLES
  • SECURITY IMPLICATIONS

Optional things to check

  • OpenShift labels io.k8s.{description,display-name}
  • OpenShift labels io.openshift.{expose,tags}
  • Atomic labels like RUN, INSTALL, UNINSTALL

Container helper should not try to install packages

So I'm using MTF for container testing and it's taking an insane amount of time (e.g. 100 seconds) before the self.start() method finishes. I noticed that if self.getPackageList() is not empty, it will try to run some dnf install commands. I honestly have no idea why it's doing that. The container should already contain all the packages (that's what is being tested) and nothing should get installed on host.

So when I remove the whole condition at [1], my test takes 12 seconds instead of 112 and still passes ok. I think it should be safe to remove this and I'll be happy to send a PR, but I first wanted to clarify why this condition is there in the first place.

[1] 638146d#diff-3319a1c7872defb83c1bac839ad9efddR173

[RFE] MTF API library: function for checking specific container issues

MTF should provide an API library with functions that check these container issues:

  • support for checking whether a daemon serves content on a specific port
  • checking whether a response returns 200
  • support for specific combinations of container parameters which should not work
  • a function to check whether a specific file exists inside the container
  • a function to check rpm signatures
  • a function to check the help page and other important items
  • create a document under http://meta-test-family.readthedocs.io/en/latest/api/index.html which will provide a list and description of the functions. I would prefer to name it API.

Consider creating new modules in a folder like moduleframework/api/.
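
For illustration, the first two items could be sketched like this in Python 3 (function names such as port_is_open are hypothetical, not an existing MTF API):

```python
import socket
import urllib.request

def port_is_open(host, port, timeout=2.0):
    """Return True if something accepts TCP connections on host:port."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
    except OSError:
        return False
    sock.close()
    return True

def http_status_ok(url, timeout=5.0):
    """Return True if a GET on `url` answers with HTTP 200."""
    try:
        resp = urllib.request.urlopen(url, timeout=timeout)
    except Exception:
        return False
    try:
        return resp.getcode() == 200
    finally:
        resp.close()
```

A test could then simply do `self.assertTrue(port_is_open("localhost", 9000))` after starting the container.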

Some tests in modulelint.py don't work for one-shot containers

If I have a one-shot container like mysql client, it is very well possible that just running self.start() will print help (or error message) and exit - because there's probably no mysql server to connect to. In this case, some tests in modulelint.py (for example tests in test class DockerLint) will likely fail, since the container will stop immediately and won't keep running.

ContainerHelper should use docker-py

I'd suggest replacing shelling out in the ContainerHelper with the docker-py [1] library. The benefit is getting much better return values that can easily be inspected, compared to subprocess results. It also has a nice API, which would be beneficial compared to hand-formatting shell commands to be executed.

[1] https://pypi.python.org/pypi/docker-py

Docker: Adding of Insecure registry in mtf-env-set

Currently, when an insecure registry that is not listed in the Docker config is used, docker fails to use it. There was support for this before the environment setup/cleanup split. Now we have to reconsider whether this function should be removed or solved in a smarter way.

majority of tests in examples directory do not work

look at tests in /meta-test-family/examples/
there are:
base-runtime haproxy memcached memcached-behave multios_testing nginx template testing-module

I guess only one of them is working.
What should we do with them - delete them, since they now live in a different place?

logs from MTF could show list of all commands which were used

/root/avocado/job-results/latest/job.log should show a nice list of the commands that were run, i.e. the procedure of the tests.

Something like:
sudo grep "Command '" /root/avocado/job-results/latest/job.log

Also, there can be differences when commands are run on your computer versus in a container/nspawn etc. Right now it is very, very hard to read.

Nspawn: selinux issues

Currently, selinux has to be disabled for nspawn.
It would be nice to make it work with selinux enabled.

  • Inspect and file issues against selinux-policy or systemd-nspawn
  • Remove workaround from code

Discourage use of RPM "module"

Testing with MODULE=rpm should be discouraged in favour of nspawn (or perhaps a mock environment). The reason for this is that the local environment will never allow clean, reproducible tests (there are too many unknowns that MTF can't know/influence); moreover, the rpm module does a lot of things to the local system (e.g. RPM installation) that might break people's systems. While I understand that this might come in handy, I think that usage of the rpm module should be discouraged and the nspawn module recommended instead.

Notes on packaging

Here are some notes/questions that I think might be useful to discuss/think through:

  • What is the reason to put modulelint in data files (/usr/share)? AFAICS it's just another Python module, so it should go alongside moduleframework or live inside it, IMO.
  • I'd recommend creating a requirements.txt file, which is a de-facto standard in the Python community to track dependencies.
  • I don't understand the following line in the spec: chmod a+x %{buildroot}%{python_sitelib}/%{framework_name}/{module_framework,mtf_generator,bashhelper,setup}.py - The python_sitelib shouldn't contain executable files. Either these files shouldn't be executable at all, or their functionality should be available through an entry point.

UX review

This is a proposal created in reaction to [1].

As a user of MTF, I would ideally like to just run a single command, let's call it mtf, in the module directory and everything would just work. The approach proposed in [1] is too complex and somewhat clumsy from user perspective.

The mtf command could work like this:

  • The config.yaml file would contain a default list of modules (i.e. the current MODULE argument) to test against (e.g. ["docker", "rpm"])
  • User would run mtf.
    • This would first invoke the functionality currently covered by mtf-generator
    • And then it would invoke the python -m avocado ... part.
  • Furthermore, mtf could accept various arguments, such as an override for the set of modules to test against (i.e. the current MODULE argument), a subset of test files to run, etc.

If the user wants to create a Makefile, they would just put mtf --some-options in it and they'd be done with that. They wouldn't need to create unnecessary Makefiles, which would mostly contain the same lines for every module.

Does this sound reasonable? I believe it would greatly improve UX to have this.

[1] http://modularity-testing-framework.readthedocs.io/en/latest/user_guide/index.html
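
A minimal sketch of such an mtf entry point, assuming a hypothetical default-modules key in config.yaml (the key name and function are illustrative, not an existing interface):

```python
def build_avocado_invocations(config, override_modules=None, tests=("*.py",)):
    """Expand the module list into per-module avocado invocations.

    `config` stands for the parsed config.yaml; "default-modules" is a
    hypothetical key that only illustrates the idea from this proposal.
    """
    modules = override_modules or config.get("default-modules", ["docker"])
    return [("MODULE=%s" % m, ["python", "-m", "avocado", "run"] + list(tests))
            for m in modules]

cfg = {"default-modules": ["docker", "rpm"]}
for env, cmd in build_avocado_invocations(cfg):
    print(env, " ".join(cmd))
```

The real mtf command would additionally run the mtf-generator step first and export the MODULE variable into the environment of each avocado invocation.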

Replace microdnf test with $pkg clean all test

MTF tests whether microdnf is used. MTF will also be used in RHEL and CentOS, and therefore, instead of testing for microdnf, it would be better to test whether `clean all` is used, so that dnf/yum garbage is removed.
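
Such a check could be sketched as a scan of the Dockerfile's RUN instructions; this is a simplified illustration, not MTF's actual linter code.

```python
import re

# Every RUN line that installs packages should also clean the cache.
INSTALL_RE = re.compile(r"\b(dnf|yum|microdnf)\b[^&|;]*\binstall\b")
CLEAN_RE = re.compile(r"\b(dnf|yum|microdnf)\s+clean\s+all\b")

def runs_missing_clean(dockerfile_text):
    """Return RUN instructions that install packages but never run `clean all`."""
    text = dockerfile_text.replace("\\\n", " ")  # join continuation lines
    return [line.strip() for line in text.splitlines()
            if line.strip().upper().startswith("RUN")
            and INSTALL_RE.search(line)
            and not CLEAN_RE.search(line)]

good = 'RUN dnf install -y nmap-ncat && \\\n    dnf clean all\n'
bad = 'RUN yum install -y httpd\n'
print(runs_missing_clean(good + bad))  # → ['RUN yum install -y httpd']
```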

Update SPEC file according to new MTF name

#28 updates documentation, including the SPEC file.
The old SPEC file name was modularity-testing-framework.spec, with package name modularity-testing-framework.

How should the SPEC file be updated for the new name?
There are two options:

  1. Keep the same name and add a Provides to the spec file, as in http://pkgs.fedoraproject.org/cgit/rpms/coreutils.git/tree/coreutils.spec?h=f26#n130
    In our case: Provides: meta-test-family = %{version}-%{release}
  2. Mark modularity-testing-framework as obsolete and introduce the new name. I am not sure whether a new package review request will be needed, including a new Fedora dist-git repo.
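
For illustration, the two options might look like the following spec fragments (standard RPM Provides/Obsoletes syntax; the exact version bound on Obsoletes is a placeholder that would need to be chosen):

```
# Option 1: keep the old package name, add a virtual provide
Provides: meta-test-family = %{version}-%{release}

# Option 2: ship the renamed meta-test-family package and obsolete the old one
Obsoletes: modularity-testing-framework < %{version}-%{release}
Provides:  modularity-testing-framework = %{version}-%{release}
```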

Conversation from #28
@phracek , I updated the copr repo name in docs... As for the spec, I would go with option number two. And I would also change all occurrences of moduleframework to mtf, agree? If so, I will proceed.
@alexxa Let's discuss it together with other folks. I will create a new issue, so we can merge documentation.

What do you think, folks? @ignatenkobrain @TomasTomecek

Update Makefile

I would appreciate comments in the Makefile, and also organizing all the commands into sections. Right now it looks like a pile; it is not easy to follow what those commands do, how to use them, and in which order.

Fail in testDockerFromBaseruntime: FAIL - how to debug?

Hi, I have the following Dockerfile:

FROM registry.fedoraproject.org/fedora:26

LABEL MAINTAINER ...

ENV NAME=mycontainer VERSION=0 RELEASE=1 ARCH=x86_64

LABEL summary="A container that tells you how awesome it is." \
      com.redhat.component="$NAME" \
      version="$VERSION" \
      release="$RELEASE.$DISTTAG" \
      architecture="$ARCH" \
      usage="docker run -p 9000:9000 mycontainer" \
      help="Runs mycontainer, which listens on port 9000 and tells you how awesome it is. No dependencies." \
      description="This is a simple container that just tells you how awesome it is. That's it." \
      vendor="Fedora Project" \
      org.fedoraproject.component="postfix" \
      authoritative-source-url="some.url.org" \
      io.k8s.description="This is a simple container that just tells you how awesome it is. That's it." \
      io.k8s.display-name="Awesome container with SW version 2.4" \
      io.openshift.expose-services="9000:http" \
      io.openshift.tags="some,tags"

EXPOSE 9000

# We don't actually use the "software_version" here, but we could,
#  e.g. to install a module with that ncat version
RUN dnf install -y nmap-ncat && \
    dnf clean all

# add help file
COPY root /
COPY script.sh /usr/bin/

CMD ["/usr/bin/script.sh"]

When running tests on it via MTF, I'm getting the following failure:

(04/18) /usr/share/moduleframework/tools/modulelint/modulelint.py:DockerfileLinter.testDockerFromBaseruntime: FAIL (0.03 s)

The logs say this:

2017-08-11 08:41:40,700 test             L0240 INFO | START 04-/usr/share/moduleframework/tools/modulelint/modulelint.py:DockerfileLinter.testDockerFromBaseruntime
2017-08-11 08:41:40,717 output           L0655 DEBUG| Module Type: docker; Profile: default
2017-08-11 08:41:40,741 output           L0655 DEBUG| Dockerfile tag COPY is not parsed by MTF
2017-08-11 08:41:40,741 output           L0655 DEBUG| Dockerfile tag COPY is not parsed by MTF
2017-08-11 08:41:40,741 output           L0655 DEBUG| Dockerfile tag CMD is not parsed by MTF
2017-08-11 08:41:40,741 stacktrace       L0039 ERROR|
2017-08-11 08:41:40,741 stacktrace       L0042 ERROR| Reproduced traceback from: /home/bkabrda/.local/lib/python2.7/site-packages/avocado/core/test.py:596
2017-08-11 08:41:40,742 stacktrace       L0045 ERROR| Traceback (most recent call last):
2017-08-11 08:41:40,743 stacktrace       L0045 ERROR|   File "/usr/share/moduleframework/tools/modulelint/modulelint.py", line 48, in testDockerFromBaseruntime
2017-08-11 08:41:40,743 stacktrace       L0045 ERROR|     self.assertTrue(self.dp.check_baseruntime())
2017-08-11 08:41:40,743 stacktrace       L0045 ERROR|   File "/usr/lib64/python2.7/unittest/case.py", line 460, in assertTrue
2017-08-11 08:41:40,743 stacktrace       L0045 ERROR|     raise self.failureException(msg)
2017-08-11 08:41:40,743 stacktrace       L0045 ERROR| AssertionError: [] is not true
2017-08-11 08:41:40,743 stacktrace       L0046 ERROR|
2017-08-11 08:41:40,743 test             L0601 DEBUG| Local variables:
2017-08-11 08:41:40,773 test             L0604 DEBUG|  -> self <class 'modulelint.DockerfileLinter'>: 04-/usr/share/moduleframework/tools/modulelint/modulelint.py:DockerfileLinter.testDockerFromBaseruntime
2017-08-11 08:41:40,773 test             L0744 ERROR| FAIL 04-/usr/share/moduleframework/tools/modulelint/modulelint.py:DockerfileLinter.testDockerFromBaseruntime -> AssertionError: [] is not true

As a user of MTF, I have no idea

  1. What I did wrong
  2. How to debug it
  3. When debugging, how to run only this specific test to speed things up

This is probably just a bug, but in addition to fixing it (which is kind of point 1.), it'd be great to have points 2. and 3. documented.

[DOC] Update documentation for MTF

This issue should update meta-test-family.readthedocs.io with this sort of information.

I would prefer to create a new section, called 'examples', in http://meta-test-family.readthedocs.io/en/latest/user_guide/index.html

  • How to specify various asserts in the tests
  • Provide a way for every test to be runnable standalone
  • Provide examples of how to handle testing a multi-container setup, where we can kill and start new containers easily
  • Document how to run any (even local) image with various variables
  • A developer should be able to run all the tests for a specific image using one command

[RFE] Provide a binary `mtf` which will call avocado run *.py

In PR #63 there is a good note:

As for the /usr/share files - I think this is exactly the UX problem of MTF - it lacks a single entrypoint to which you could pass some arguments and it substitutes this with Makefiles that everyone has to duplicate.
If you had a single mtf binary, you could add an argument like --run-default-tests and no one would have to care where these files actually live. Plus, if they lived in site-packages, they'd actually be importable by MTF users without having to modify PYTHONPATH.

Let's provide an mtf binary that does this.

Create tests for framework

COPY: https://pagure.io/modularity-testing-framework/issue/23

We need to know, when a new commit is pushed into dist-git, whether functionality remains intact or is broken.

It is partially done:
https://pagure.io/modularity-testing-framework/blob/master/f/examples/testing-module

but on pagure it is a problem to schedule these tests (there is a Jenkins instance without root access), so we are unable to run the test suite there.

For now it has to be triggered manually. It is not nice, but at least it is somehow connected with PRs on GitHub.

[RFE] MTF should provide scripts for preparing/cleaning/running tests

@jscotka would like to change MTF behaviour and proposes the following.

MTF should provide a set of scripts that take care of the following:

  • mtf-env-set, which will prepare the environment for specific tests, e.g. MODULE=docker mtf-env-set will prepare an environment for testing in docker
  • mtf, which will only run the tests and will not care about setting up or cleaning the environment.
  • mtf-env-clean, which will clean the environment after testing, e.g. MODULE=docker mtf-env-clean

@TomasTomecek @jscotka @alexxa @bkabrda What do you think about it?
Do you like the script names?

The install command is executed with a delay.

COPY: https://pagure.io/modularity-testing-framework/issue/45

config.yaml

document: modularity-testing
version: 1
name: ruby
modulemd-url: http://raw.githubusercontent.com/container-images/ruby/master/ruby.yaml
packages:
    rpms:
        - ruby
testdependecies:
    rpms:
        - which
module:
    docker:
        start: "docker run -it -p 11211:11211"
        labels:
            description: "An interpreter of object-oriented scripting language"
            io.k8s.description: "An interpreter of object-oriented scripting language"
        source: https://github.com/sclorg/s2i-ruby-container.git
        container: docker.io/junaruga/ruby
    rpm:
        repos:
            - http://download.eng.brq.redhat.com/pub/fedora/linux/development/rawhide/Everything/x86_64/os/
test:
    cmd:
        - 'which ruby'
    version:
        - 'ruby -v'
testhost:
    cmd_on_host:
        - 'which ruby'
    version_on_host:
        - 'ruby -v'

/etc/yum.repos.d/ruby.repo

[ruby1]
name=ruby1
baseurl=http://download.eng.brq.redhat.com/pub/fedora/linux/development/rawhide/Everything/x86_64/os/
enabled=1
gpgcheck=0

Then ran

$ sudo MODULE=nspawn avocado run ./*.py
JOB ID     : dfd967984b55055eb2d39129dc768c1dbb1f6939
JOB LOG    : /root/avocado/job-results/job-2017-04-25T15.56-dfd9679/job.log
TESTS      : 4
 (1/4) ./generated.py:GeneratedTestsConfig.test_cmd:  ERROR (145.90 s)
 (2/4) ./generated.py:GeneratedTestsConfig.test_version:  PASS (150.21 s)
 (3/4) ./generated.py:GeneratedTestsConfig.test_cmd_on_host:  PASS (158.89 s)
 (4/4) ./generated.py:GeneratedTestsConfig.test_version_on_host:  PASS (131.42 s)
RESULTS    : PASS 3 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0
TESTS TIME : 586.43 s
JOB HTML   : /root/avocado/job-results/job-2017-04-25T15.56-dfd9679/html/results.html

Looking at job.log, the process that installs which (needed for the 1st test) appears to run later than the test itself. Note the timestamps in the log file.

...
2017-04-25 15:59:04,458 stacktrace       L0038 ERROR|-
2017-04-25 15:59:04,458 stacktrace       L0041 ERROR| Reproduced traceback from: /usr/lib/python2.7/site-packages/avocado/core/test.py:488
2017-04-25 15:59:04,463 stacktrace       L0044 ERROR| Traceback (most recent call last):
2017-04-25 15:59:04,463 stacktrace       L0044 ERROR|   File "/home/jaruga/git/fedora-packages/modules/ruby/tests/generated.py", line 17, in test_cmd
2017-04-25 15:59:04,464 stacktrace       L0044 ERROR|     self.run("which ruby")
2017-04-25 15:59:04,464 stacktrace       L0044 ERROR|   File "/usr/lib/python2.7/site-packages/moduleframework/module_framework.py", line 548, in run
2017-04-25 15:59:04,464 stacktrace       L0044 ERROR|     return self.backend.run(*args, **kwargs)
2017-04-25 15:59:04,464 stacktrace       L0044 ERROR|   File "/usr/lib/python2.7/site-packages/moduleframework/module_framework.py", line 493, in run
2017-04-25 15:59:04,464 stacktrace       L0044 ERROR|     comout.exit_status, **kwargs)
2017-04-25 15:59:04,464 stacktrace       L0044 ERROR|   File "/usr/lib/python2.7/site-packages/moduleframework/module_framework.py", line 72, in runHost
2017-04-25 15:59:04,464 stacktrace       L0044 ERROR|     return utils.process.run("%s" % command, **kwargs)
2017-04-25 15:59:04,464 stacktrace       L0044 ERROR|   File "/usr/lib/python2.7/site-packages/avocado/utils/process.py", line 1096, in run
2017-04-25 15:59:04,465 stacktrace       L0044 ERROR|     raise CmdError(cmd, sp.result)
2017-04-25 15:59:04,465 stacktrace       L0044 ERROR| CmdError: Command 'bash -c "echo DO NOT CARE of this command, this is workaound for good exit status; exit 127"' failed (rc=127)
2017-04-25 15:59:04,465 stacktrace       L0045 ERROR|-
...
2017-04-25 15:59:15,000 process          L0368 INFO | Running 'dnf -y install which'
...

jscotka
3 months ago

Hi,
there are several issues.

Why do you have start: "docker run -it -p 11211:11211" as the command? If this container does not provide any service, it can be empty. (You were probably testing just the nspawn type, so you did not hit any docker issue.)

I'm not sure about the "which" things. From my POV, "which" is not installed by default in modularity, so you have to install it if you want to use "which". It is possible that I'm doing it wrong by not installing the base profile from base-runtime, which could install commands like that. But even in that case it is better to explicitly add a dependency on "which" if you are using it.

You also have tests in the "testhost:" section of the yaml, which causes those tests to run on the host machine, not inside the "module", so they test the system ruby. For your ruby module, every test has to be in the "test:" section (which runs inside the module).

3 months ago

Metadata Update from @jscotka:

  • Issue tagged with: Prio3

jaruga
3 months ago

@jscotka thanks for mentioning this.

=>

(Probably you were testing just nspawn type, so you've did not met any docker issue)

Yes, you are right.
I have not tried the docker mode command below yet.
The module: docker: element is just a copy from http://pkgs.fedoraproject.org/cgit/modules/memcached.git/tree/tests/config.yaml . I forgot to remove the -p 11211:11211.

$ sudo MODULE=docker avocado run ./*.py

=>
Sure. As I understand it, the setting below should be enough to use the dependency packages listed in config.yaml. So, I want to ask someone in this project about the specification.

testdependecies:
    rpms:
        - which

=>
Sure. The ruby tests in the testhost element are meaningless.
I will change the tests.
igulina
a month ago

@jaruga , please let us know if this issue can be closed, or whether we should update the docs or anything else. Thank you.
jaruga
21 days ago

@igulina sorry, it has not been fixed yet.
Can you try the steps below in your environment?

$ git clone ssh://jaruga@pkgs.fedoraproject.org/modules/ruby

$ cd ruby

$ git diff
diff --git a/tests/config.yaml b/tests/config.yaml
index 14c3078..82e8025 100644
--- a/tests/config.yaml
+++ b/tests/config.yaml
@@ -13,6 +13,9 @@ testdependecies:
     rpms:
         - which
         - hostname
+test:
+    cmd:
+        - 'which ruby'
 module:
     docker:
         container: docker.io/ruby
diff --git a/tests/ruby_tests.py b/tests/ruby_tests.py
index ce9bbd5..cc0362c 100644
--- a/tests/ruby_tests.py
+++ b/tests/ruby_tests.py
@@ -4,6 +4,7 @@ import socket
 from avocado import main
 from moduleframework import module_framework

 class RubyTests(module_framework.AvocadoTest):
     """
     :avocado: enable
@@ -13,14 +14,6 @@ class RubyTests(module_framework.AvocadoTest):
         self.start()
         self.run("which ruby")

     def test_version(self):
         self.start()
         self.run("ruby -v")
     def test_hostname(self):
         self.start()
         self.run("hostname")

 if __name__ == '__main__':
     main()

$ sudo make test

Allow removing modulemd-url from config

I'd like to use MTF to test a docker container that is not built from a module, therefore I'd like to ask you to make modulemd-url optional. Do you think this makes sense?

Code Review: fix pylint/pyflakes errors

Hi, so I was asked to do a detailed code review for MTF, so here's the first issue - code style and static-analysis warnings/errors.

Since the outputs that I got from running pylint/pyflakes are quite sizable, I'll just comment on some general points. You can install these tools yourselves (pip install --user pylint pyflakes) and run them on the code (pylint moduleframework/, pyflakes moduleframework/) to see what's wrong. So here's some general comments from my code observations and pylint/pyflakes output:

  • When inheriting from a parent class, you should call its __init__ method in the child's __init__. Occurs in multiple places, e.g. moduleframework.mtf_generator:TestGenerator.__init__ should call super(TestGenerator, self).__init__().
  • pylint complains about method names being camelcase, not snakecase. The general recommendation in Python is using snakecase, but camelcase is also fine as long as you stay consistent throughout the codebase. For example, in moduleframework.setup, there are some methods whose names begin in capital letters, some functions in moduleframework.common and moduleframework.dockerlinter use snakecase etc. I recommend choosing one naming scheme and applying it everywhere. The same applies to variable naming.
  • pylint reports some bad indentation, e.g. in moduleframework.module_framework, line 1400-1404. While Python 2 can generally guess what you meant with e.g. 3 spaces, Python 3 will report this as syntax error, so I'd recommend fixing these.
  • Generally, strings representing regular expressions should be raw strings. A good explanation of why is at the very beginning section of [2].
  • You should use absolute imports, not relative imports. This is mostly important because of Python 3 compatibility. E.g. instead of from compose_info import .., use from moduleframework.compose_info import ... (this appears throughout the codebase).
  • You should not use wildcard imports (from foo import *). These pose a danger of polluting your namespace and overriding methods with generic names that you import from elsewhere. You should always import exactly what you need (this appears throughout the codebase).
  • In Python, keyword argument assignment in function definitions/calls should not have spaces around =, so self.__addModuleDependency(url=latesturl, name = dep, ... is wrong, self.__addModuleDependency(url=latesturl, name=dep, .. is correct. This occurs in multiple places. Also, exactly one space should follow the comma that separates arguments.
  • moduleframework.module_framework, line 316 has wrong mode for open (rw doesn't exist in Python). Judging from the code, you probably wanted to use just r. Also, the file isn't closed anywhere.
  • pylint reports some "catching too general exception Exception" in quite a lot of places. While this is occasionally ok to do, I'd recommend narrowing it down to a subclass if possible.
  • pylint is reporting some unnecessary pass statements, these can be safely removed.
  • pylint also complains about lines being long. Generally this is a matter of taste, but lately the consensus in Python community has been to use 100 chars per line at maximum.
  • pylint complains about lots of "invalid variable names", mostly for variables that are called x, a, b, etc. foo is not a great name either. While this is sometimes ok, I'd recommend considering using more descriptive names.
  • pylint reports problems with import order. Generally, the imports should be as follows: 1) imports from stdlib, 2) imports from pip-installed dependencies, 3) imports from this module.
  • pylint complains about some redefined top-level variables, but I think these are actually mostly fine to keep.
  • pylint complains about some object attributes being defined outside of __init__, mostly in moduleframework.pdc_data. While this may sometimes make sense, I'd highly recommend initializing them to None or similar values inside __init__.
  • pylint complains about lots of unused arguments. While this may actually be part of making sure API stays stable in the future, it may also be a sign of ignoring important input, so I'd advise investigating these.
  • pylint reports lots of "Wrong hanging indentation" errors. Even though this is not a huge issue, fixing this may improve code readability. Generally speaking, the hanging indentation should follow rules outlined at [1].
  • pylint reports couple of unused variables throughout the codebase, these can most likely be safely removed.
  • pylint reports that some modules and classes lack docstrings, but I don't consider this a huge issue. I would, however, recommend providing docstrings for all methods (some are missing).
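
Several of the points above (calling the parent's __init__, raw regex strings, valid open modes with a context manager, and narrowing exception handlers) can be shown in one short sketch. The class and attribute names here are hypothetical for illustration, not taken from the moduleframework codebase:

```python
import re
import tempfile


class Base(object):
    """Hypothetical parent class; names here are illustrative only."""

    def __init__(self):
        self.configured = True


class TestGenerator(Base):
    def __init__(self):
        # Call the parent's __init__ so its attributes get set up.
        super(TestGenerator, self).__init__()
        # Raw string for the regex, so backslashes are taken literally.
        self.pattern = re.compile(r"^test_\w+$")

    def matches(self, name):
        # Keyword arguments (if any) would be written name=value, no spaces.
        return bool(self.pattern.match(name))


# "rw" is not a valid mode; "r" is the default. A context manager also
# guarantees the file gets closed.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("hello")
    path = tmp.name

with open(path) as handle:
    contents = handle.read()

# Catch a specific exception rather than the overly broad Exception.
try:
    int("not a number")
except ValueError as exc:
    parse_error = str(exc)
```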

I'd highly recommend setting up a CI service and adding pylint/pyflakes runs to make check in order to keep code quality high.
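
A minimal sketch of what that could look like in the Makefile (the target names are assumptions for illustration, not taken from the repository's actual Makefile):

```make
# Hypothetical Makefile fragment; adjust target names to the real Makefile.
lint:
	pyflakes moduleframework/
	pylint --max-line-length=100 moduleframework/

check: lint
	# existing test invocation goes here
```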

[1] https://www.python.org/dev/peps/pep-0008/#indentation
[2] https://docs.python.org/2/library/re.html#module-re
