lumc / pytest-workflow


Configure workflow/pipeline tests using yaml files.

Home Page: https://pytest-workflow.readthedocs.io/en/stable/

License: GNU Affero General Public License v3.0

Python 96.32% WDL 2.23% Nextflow 1.45%
testing test workflow pipeline pytest-plugin pytest yaml wdl cromwell snakemake

pytest-workflow's Introduction

pytest-workflow

More information on how to cite pytest-workflow can be found here.

pytest-workflow is a workflow-system agnostic testing framework that aims to make pipeline/workflow testing easy by using YAML files for the test configuration. Whether you write your pipelines in WDL, Snakemake, Nextflow, Bash or any other workflow framework, pytest-workflow makes testing easy. pytest-workflow is built on top of the pytest test framework.

For our complete documentation and examples, check out our readthedocs page.

Installation

Pytest-workflow requires Python 3.7 or higher. It is tested on Python 3.7, 3.8, 3.9, 3.10 and 3.11.

  • Make sure your virtual environment is activated.
  • Install using pip: pip install pytest-workflow
  • Create a tests directory in the root of your repository.
  • Create your test yaml files in the tests directory.

Pytest-workflow is also available as a conda package on conda-forge. Follow these instructions to set up channels properly in order to use conda-forge. Alternatively, you can set up the channels correctly for use with bioconda. After that, conda install pytest-workflow can be used to install pytest-workflow.
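
For example, the conda-forge setup and installation typically look like this (the exact channel configuration may differ if you follow the bioconda instructions instead):

conda config --add channels conda-forge
conda config --set channel_priority strict
conda install pytest-workflow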

Quickstart

Run pytest from an environment with pytest-workflow installed. Pytest will automatically gather files in the tests directory starting with test and ending in .yaml or .yml.

To check the progress of a workflow while it is running you can use tail -f on the stdout or stderr file of the workflow. The locations of these files are reported in the log as soon as a workflow is started.

For debugging pipelines, using the --kwd or --keep-workflow-wd flag is recommended. This keeps the workflow directory and logs after the test run, so it is possible to check where the pipeline crashed. The -v flag can come in handy as well, as it gives a complete overview of succeeded and failed tests.
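
For example, a typical debugging run that combines these flags looks like this:

pytest -v --keep-workflow-wd

This runs all test YAML files found under the tests directory and leaves the workflow directories in place for inspection afterwards.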

Below is an example of a YAML file that defines a test:

- name: Touch a file
  command: touch test.file
  files:
    - path: test.file

This will run touch test.file and check afterwards if a file with path: test.file is present. It will also check if the command has exited with exit code 0, which is the only default test that is run. Testing workflows that exit with another exit code is also possible. Several other predefined tests as well as custom tests are possible.

- name: moo file                     # The name of the workflow (required)
  command: bash moo_workflow.sh      # The command to execute the workflow (required)
  files:                             # A list of files to check (optional)
    - path: "moo.txt"                # File path. (Required for each file)
      contains:                      # A list of strings that should be in the file (optional)
        - "moo"
      must_not_contain:              # A list of strings that should NOT be in the file (optional)
        - "Cock a doodle doo"
      md5sum: e583af1f8b00b53cda87ae9ead880224   # Md5sum of the file (optional)
      encoding: UTF-8                # Encoding for the text file (optional). Defaults to system locale.

- name: simple echo                  # A second workflow. Notice the starting `-` which means
  command: "echo moo"                # that workflow items are in a list. You can add as much workflows as you want
  files:
    - path: "moo.txt"
      should_exist: false            # Whether a file should be there or not. (optional, if not given defaults to true)
  stdout:                            # Options for testing stdout (optional)
    contains:                        # List of strings which should be in stdout (optional)
      - "moo"
    must_not_contain:                # List of strings that should NOT be in stdout (optional)
      - "Cock a doodle doo"
    encoding: ASCII                  # Encoding for stdout (optional). Defaults to system locale.

- name: mission impossible           # Also failing workflows can be tested
  tags:                              # A list of tags that can be used to select which test
    - should fail                    # is run with pytest using the `--tag` flag.
  command: bash impossible.sh
  exit_code: 2                       # What the exit code should be (optional, if not given defaults to 0)
  files:
    - path: "fail.log"               # Multiple files can be tested for each workflow
    - path: "TomCruise.txt.gz"       # Gzipped files can also be searched, provided their extension is '.gz'
      contains:
        - "starring"
      extract_md5sum: e27c52f6b5f8152aa3ef58be7bdacc4d   # Md5sum of the uncompressed file (optional)
  stderr:                            # Options for testing stderr (optional)
    contains:                        # A list of strings which should be in stderr (optional)
      - "BSOD error, please contact the IT crowd"
    must_not_contain:                # A list of strings which should NOT be in stderr (optional)
      - "Mission accomplished!"
    encoding: UTF-16                 # Encoding for stderr (optional). Defaults to system locale.

- name: regex tests
  command: echo Hello, world
  stdout:
    contains_regex:                  # A list of regex patterns that should be in stdout (optional)
      - 'Hello.*'                    # Note the single quotes, these are required for complex regexes
      - 'Hello .*'                   # This will fail, since there is a comma after Hello, not a space

    must_not_contain_regex:          # A list of regex patterns that should not be in stdout (optional)
      - '^He.*'                      # This will fail, since the regex matches Hello, world
      - '^Hello .*'                  # Complex regexes will break yaml if double quotes are used

For more information on how Python parses regular expressions, see the Python documentation.

Documentation for more advanced use cases including the custom tests can be found on our readthedocs page.

pytest-workflow's People

Contributors

cedric-m, jalvarezjarreta, lucasvanb, redmar-van-den-berg, rhpvorderman, sndrtj, wholtz


pytest-workflow's Issues

Release 0.4.0

Release checklist

  • Check outstanding issues on JIRA and Github
  • Check latest documentation looks fine
  • Create a release branch
    • Set version to a stable number.
    • Change current development version in HISTORY.rst to stable version.
  • Create a test pypi package from the release branch
  • Merge the release branch into master
  • Created an annotated tag with the stable version number. Include changes from history.rst.
  • Push tag to remote.
  • Create packages and push to pypi
  • merge master branch back into develop.
  • Add updated version number to develop

Release 1.2.2

Release checklist

  • Check outstanding issues on JIRA and Github
  • Check latest documentation
    looks fine
  • Create a release branch
    • Set version to a stable number.
    • Change current development version in HISTORY.rst to stable version.
  • Merge the release branch into master
  • Create a test pypi package from the master branch. (Instructions.)
  • Install the packages from the test pypi repository to see if they work.
  • Created an annotated tag with the stable version number. Include changes
    from history.rst.
  • Push tag to remote.
  • Push tested packages to pypi
  • merge master branch back into develop.
  • Add updated version number to develop
  • Update the package on conda-forge
  • Build the new tag on readthedocs. Only build the last patch version of
    each minor version: for example, build 1.1.1 and 1.2.0, but not 1.1.0.

Use of environment variables should be clarified in the documentation

I am trying to run a test where I have an environment variable in the command, but it seems that it is not expanded.

The command (which runs as expected outside pytest-workflow) is:

nextflow run ./tests/test -entry test -profile $NF_PROFILE

And the error is a nextflow one: Unknown configuration profile: '$NF_PROFILE'
It appears to be passing $NF_PROFILE literally? Is this the expected behaviour?
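
If the command is indeed executed without a shell (so environment variables are never expanded), one possible workaround sketch is to wrap the command in an explicit shell so the variable is expanded there. This is an illustration, not documented behaviour:

- name: test with profile from the environment
  command: bash -c 'nextflow run ./tests/test -entry test -profile $NF_PROFILE'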

Enable custom tests to run on workflow results.

This is necessary for functional tests.

My design idea is as follows:

  • Do not change the YAML (JSON) schema. Allowing custom tests there would open Pandora's box.
  • The tests should be written in a Python file, just like other Python tests.
  • A workflow_dir fixture should be provided that contains the temporary dir where all workflows are run. This makes it easy to run tests on files created by workflows.
  • A sort of marker/tag function should be provided (preferably using known and documented pytest syntax) that links a test to a certain workflow name and ensures that the test is only run if this workflow is also run.

In order for this design to work, the workflow directories should only be deleted after all tests in the pytest run are completed.
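
A sketch of what such a custom test could look like under this design; the workflow name and file come from the README example above, and the marker and fixture names follow the design described here, so treat them as illustrative:

import pathlib

import pytest

@pytest.mark.workflow(name="moo file")
def test_moo_file_contains_moo(workflow_dir: pathlib.Path):
    # workflow_dir points at the temporary directory where the workflow was run.
    moo_file = workflow_dir / "moo.txt"
    assert moo_file.exists()
    assert "moo" in moo_file.read_text()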

Latest version does not work due to pytest deprecation

It looks like pytest has deprecated a function that is used by pytest-workflow, which causes a fresh conda install of pytest-workflow 1.5 to fail.

Either with this deprecation error:

=================================================== test session starts ===================================================
platform linux -- Python 3.9.1, pytest-6.1.1, py-1.10.0, pluggy-0.13.1
rootdir: /
plugins: workflow-1.2.0
collecting ... 
collected 0 items / 1 error                                                                                               

========================================================= ERRORS ==========================================================
______________________________________________ ERROR collecting test session ______________________________________________
Direct construction of YamlFile has been deprecated, please use YamlFile.from_parent.
See https://docs.pytest.org/en/stable/deprecations.html#node-construction-changed-to-node-from-parent for more details.
================================================= short test summary info =================================================
ERROR 
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
==================================================== 1 error in 0.48s =====================================================

Or by skipping all tests

test-sanity-snakemake:
	command:   snakemake --version

	directory: /tmp/pytest_workflow_70c8pejs/test-sanity-snakemake
	stdout:    /tmp/pytest_workflow_70c8pejs/test-sanity-snakemake/log.out
	stderr:    /tmp/pytest_workflow_70c8pejs/test-sanity-snakemake/log.err
'test-sanity-snakemake' python error during starting.
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "/pytest-wf-latest/lib/python3.9/site-packages/_pytest/main.py", line 269, in wrap_session
INTERNALERROR>     session.exitstatus = doit(config, session) or 0
INTERNALERROR>   File "/pytest-wf-latest/lib/python3.9/site-packages/_pytest/main.py", line 323, in _main
INTERNALERROR>     config.hook.pytest_runtestloop(session=session)
INTERNALERROR>   File "/pytest-wf-latest/lib/python3.9/site-packages/pluggy/hooks.py", line 286, in __call__
INTERNALERROR>     return self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR>   File "/pytest-wf-latest/lib/python3.9/site-packages/pluggy/manager.py", line 93, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "/pytest-wf-latest/lib/python3.9/site-packages/pluggy/manager.py", line 84, in <lambda>
INTERNALERROR>     self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
INTERNALERROR>   File "/pytest-wf-latest/lib/python3.9/site-packages/pluggy/callers.py", line 208, in _multicall
INTERNALERROR>     return outcome.get_result()
INTERNALERROR>   File "/pytest-wf-latest/lib/python3.9/site-packages/pluggy/callers.py", line 80, in get_result
INTERNALERROR>     raise ex[1].with_traceback(ex[2])
INTERNALERROR>   File "/pytest-wf-latest/lib/python3.9/site-packages/pluggy/callers.py", line 187, in _multicall
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "/pytest-wf-latest/lib/python3.9/site-packages/pytest_workflow/plugin.py", line 259, in pytest_runtestloop
INTERNALERROR>     session.config.workflow_queue.process(  # type: ignore
INTERNALERROR>   File "/pytest-wf-latest/lib/python3.9/site-packages/pytest_workflow/workflow.py", line 196, in process
INTERNALERROR>     raise self._process_errors[0]
INTERNALERROR>   File "/pytest-wf-latest/lib/python3.9/site-packages/pytest_workflow/workflow.py", line 84, in start
INTERNALERROR>     self._popen = subprocess.Popen(  # nosec: Shell is not enabled. # noqa
INTERNALERROR>   File "/pytest-wf-latest/lib/python3.9/subprocess.py", line 947, in __init__
INTERNALERROR>     self._execute_child(args, executable, preexec_fn, close_fds,
INTERNALERROR>   File "/pytest-wf-latest/lib/python3.9/subprocess.py", line 1819, in _execute_child
INTERNALERROR>     raise child_exception_type(errno_num, err_msg, err_filename)
INTERNALERROR> FileNotFoundError: [Errno 2] No such file or directory: 'snakemake'
 Removing temporary directories and logs. Use '--kwd' or '--keep-workflow-wd' to disable this behaviour.
================================================== no tests ran in 0.32s ==================================================

pytest-workflow crashes when git submodules are missing

When using the --git-aware flag, pytest-workflow crashes with a wall of errors if your project has git submodules, but you haven't cloned the submodules yet. It is hard to tell from the error messages what is going on.

In my case, this error message is one of 13 errors; it appears the error is printed once for every test case that is defined.

___________________________________________________________________________________ ERROR collecting tests/test_functional.yml ____________________________________________________________________________________
../../miniconda3/envs/HAMLET/lib/python3.10/site-packages/pytest_workflow/plugin.py:439: in collect
    workflow = self.queue_workflow()
../../miniconda3/envs/HAMLET/lib/python3.10/site-packages/pytest_workflow/plugin.py:397: in queue_workflow
    duplicate_tree(root_dir, tempdir,
../../miniconda3/envs/HAMLET/lib/python3.10/site-packages/pytest_workflow/util.py:161: in duplicate_tree
    copy(src_path, dest_path)
../../miniconda3/envs/HAMLET/lib/python3.10/shutil.py:434: in copy2
    copyfile(src, dst, follow_symlinks=follow_symlinks)
../../miniconda3/envs/HAMLET/lib/python3.10/shutil.py:254: in copyfile
    with open(src, 'rb') as fsrc:
E   IsADirectoryError: [Errno 21] Is a directory: '/home/user/devel/prod-test/hamlet'

Proposed solution: check whether all git submodules are present when using --git-aware, and immediately abort with one informative error message if they are missing.

Release 1.4.0

Release checklist

  • Check outstanding issues on JIRA and Github.
  • Check latest documentation looks fine.
  • Create a release branch.
    • Set version to a stable number.
    • Change current development version in HISTORY.rst to stable version.
  • Merge the release branch into master.
  • Create a test pypi package from the master branch. (Instructions.)
  • Install the packages from the test pypi repository to see if they work.
  • Created an annotated tag with the stable version number. Include changes
    from history.rst.
  • Push tag to remote.
  • Push tested packages to pypi.
  • merge master branch back into develop.
  • Add updated version number to develop.
  • Build the new tag on readthedocs. Only build the last patch version of
    each minor version: for example, build 1.1.1 and 1.2.0, but not 1.1.0.
  • Create a new release on github.
  • Update the package on conda-forge.

Release 1.3.0

Release checklist

  • Check outstanding issues on JIRA and Github.
  • Check latest documentation looks fine.
  • Create a release branch.
    • Set version to a stable number.
    • Change current development version in HISTORY.rst to stable version.
  • Merge the release branch into master.
  • Create a test pypi package from the master branch. (Instructions.)
  • Install the packages from the test pypi repository to see if they work.
  • Created an annotated tag with the stable version number. Include changes
    from history.rst.
  • Push tag to remote.
  • Push tested packages to pypi.
  • merge master branch back into develop.
  • Add updated version number to develop.
  • Build the new tag on readthedocs. Only build the last patch version of
    each minor version: for example, build 1.1.1 and 1.2.0, but not 1.1.0.
  • Create a new release on github.
  • Update the package on conda-forge.

Running out of disk space

Thanks for the great library. We've been making heavy use of it on one of the WDL pipelines we're developing: https://github.com/ENCODE-DCC/dnase-seq-pipeline

In general we split our tests up into unit, integration, and functional tests, and run each of these groups separately on circleCI. Most of our WDL tasks are thin wrappers over command-line utilities, and we use our unit tests specifically to ensure our tasks are emitting the correct command given some input. (We use our integration/functional tests to actually compare outputs.)

We've recently run into a problem: we've added so many unit tests (https://github.com/ENCODE-DCC/dnase-seq-pipeline/tree/dev/tests/unit) that our circleCI machine is running out of disk space (~100GB). It appears every stdout contains assertion is counted as a separate test, and our repo (around 1.2GB because of some test data stored in git-lfs) gets copied to a temp directory over and over:

shutil.copytree(str(self.config.rootdir), str(tempdir))

While this makes sense for how pytest-workflow is supposed to work, I'm wondering if there's a way to reduce the disk space footprint, either by using some kind of symlinking or by collecting and running tests in batches instead of all upfront?

This isn't too urgent. Our current workaround is just to add a second unit test tag and split our tests across two circleCI machines. In general though it would be nice to have a scalable solution and only maintain one --tag unit instead of multiple arbitrary groups like --tag unit1, --tag unit2 etc.
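
A minimal sketch of the symlinking idea mentioned above, not pytest-workflow's implementation: shutil.copytree accepts a copy_function, so the repository could be mirrored with symlinks instead of full copies. The paths are hypothetical:

import os
import shutil

src = "/path/to/repository"            # hypothetical repository root
dst = "/tmp/pytest_workflow_mirror"    # hypothetical temporary directory

# Recreate the directory tree, but symlink files instead of copying their contents.
shutil.copytree(src, dst, copy_function=os.symlink)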

Release 1.2.1

Release checklist

  • Check outstanding issues on JIRA and Github
  • Check latest documentation
    looks fine
  • Create a release branch
    • Set version to a stable number.
    • Change current development version in HISTORY.rst to stable version.
  • Merge the release branch into master
  • Create a test pypi package from the master branch. (Instructions.)
  • Install the packages from the test pypi repository to see if they work.
  • Created an annotated tag with the stable version number. Include changes
    from history.rst.
  • Push tag to remote.
  • Push tested packages to pypi
  • merge master branch back into develop.
  • Add updated version number to develop
  • Update the package on conda-forge
  • Build the new tag on readthedocs. Only build the last patch version of
    each minor version: for example, build 1.1.1 and 1.2.0, but not 1.1.0.

Difficulty using with Nextflow due to the `work` folder.

By default, Nextflow creates a work directory in the root folder. This gets copied by pytest-workflow unless it gets cleared out first. This primarily causes issues because:

  1. It results in (sometimes large) files being copied unnecessarily.
  2. The work folder may sometimes contain broken symlinks, resulting in an error.

Obviously the best solution is to remember to clear it out before running tests, but that's not always convenient.

Looking at the code:

shutil.copytree(str(self.config.rootdir), str(tempdir))

Being able to set ignore_dangling_symlinks to True (or setting that by default) would solve the second problem. Better (but more involved) would be using the ignore parameter. Solving #78 would also work, since this folder is generally included in .gitignore.
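
A sketch of what that could look like, adapting the copytree call quoted above with hypothetical paths; this illustrates the standard library options, not the project's actual fix:

import shutil

rootdir = "/path/to/project"               # stands in for self.config.rootdir
tempdir = "/tmp/pytest_workflow_example"   # stands in for the temporary directory

shutil.copytree(
    rootdir,
    tempdir,
    ignore=shutil.ignore_patterns("work"),  # skip Nextflow's work folder entirely
    ignore_dangling_symlinks=True,          # tolerate broken symlinks elsewhere
)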

Feature request: allow custom tests to run on multiple workflows

In our QC pipeline I have the following test:

@pytest.mark.workflow(name="paired_end_zipped")
def test_paired_end_zipped_after_no_adapters_read_one(workflow_dir):
    fastqc_read_one_before = (
            workflow_dir / Path("test-output") / Path("cutadapt_ct_r1_fastqc")
            / Path("fastqc_data.txt"))
    assert adapters_in_fastqc_data(
        fastqc_read_one_before).get('Illumina Universal Adapter') is True

I would like to run this test for multiple workflows.
A possible syntax would be to list all the workflow names:

@pytest.mark.workflow(name="functional_paired_end_zipped")
@pytest.mark.workflow(name="paired_end")
@pytest.mark.workflow(name="paired_end_zipped")
@pytest.mark.workflow(name="single_end")
@pytest.mark.workflow(name="paired_end_zipped")
def test_paired_end_zipped_after_no_adapters_read_one(workflow_dir):
    ...

Alternatively it could work with tags as well.

@pytest.mark.workflow(tag="integration")
@pytest.mark.workflow(tag="functional")
def test_paired_end_zipped_after_no_adapters_read_one(workflow_dir):
    ...

Or it could work with tags and names (plural):

@pytest.mark.workflow(tags=["integration", "functional"])
def test_paired_end_zipped_after_no_adapters_read_one(workflow_dir):
    ...

@pytest.mark.workflow(names=["functional_paired_end_zipped", "paired_end_zipped"])  # etc.
def test_paired_end_zipped_after_no_adapters_read_one(workflow_dir):
    ...

Add option to be git aware

Pytest-workflow copies the current working directory. This can cause problems if the current working directory includes large environment folders.

Also pytest has to be run from the root directory of the workflow.

If pytest-workflow becomes git-aware, it always knows what the root directory is, and will run from there. It can also only copy files that are added to git. This will make copying faster and will make the results more reproducible.
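
A minimal sketch of the idea, not necessarily the implementation that was eventually merged: ask git for the tracked files and copy only those. The helper name and paths are illustrative:

import shutil
import subprocess
from pathlib import Path

def copy_git_tracked(repo_root: Path, destination: Path) -> None:
    # "git ls-files" lists the files tracked by git, relative to repo_root.
    tracked = subprocess.run(
        ["git", "ls-files"], cwd=repo_root, capture_output=True, text=True, check=True
    ).stdout.splitlines()
    for relative_path in tracked:
        target = destination / relative_path
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(repo_root / relative_path, target)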

Release 1.2.3

There have been some small documentation fixes and a CLI flag has been fixed. It would be nice to get these into production before the start of the OpenWDL testathon.

Release checklist

  • Check outstanding issues on JIRA and Github.
  • Check latest documentation looks fine.
  • Create a release branch.
    • Set version to a stable number.
    • Change current development version in HISTORY.rst to stable version.
  • Merge the release branch into master.
  • Create a test pypi package from the master branch. (Instructions.)
  • Install the packages from the test pypi repository to see if they work.
  • Created an annotated tag with the stable version number. Include changes
    from history.rst.
  • Push tag to remote.
  • Push tested packages to pypi.
  • merge master branch back into develop.
  • Add updated version number to develop.
  • Build the new tag on readthedocs. Only build the last patch version of
    each minor version: for example, build 1.1.1 and 1.2.0, but not 1.1.0.
  • Create a new release on github.
  • Update the package on conda-forge.

Release 0.3.0

Release checklist

  • Create a release branch
    • Set version to a stable number.
    • Change current development version in HISTORY.rst to stable version.
  • Create a test pypi package from the release branch
  • Merge the release branch into master
  • Created an annotated tag with the stable version number. Include changes from history.rst.
  • Push tag to remote.
  • Create packages and push to pypi
  • merge master branch back into develop.
  • Add updated version number to develop

Better naming hierarchy for tests

Output from running pytest verbosely:

(.venv) rhpvorderman@pc:~/projects/pytest-workflow$ pytest -v tests/workflow_tests/test_simple_echo.yml 
============================================================================================= test session starts ==============================================================================================
platform linux -- Python 3.5.3, pytest-4.0.1, py-1.7.0, pluggy-0.8.0 -- /home/rhpvorderman/projects/pytest-workflow/.venv/bin/python
cachedir: .pytest_cache
rootdir: /home/rhpvorderman/projects/pytest-workflow, inifile:
plugins: cov-2.6.0, workflow-0.1.0.dev0
collected 1 item                                                                                                                                                                                               

tests/workflow_tests/test_simple_echo.yml::test_simple_echo.yml::test_simple_echo.yml::test_simple_echo.yml::test_simple_echo.yml. File exists: log.file PASSED 

This output is not very readable: the same name is repeated too many times.

Implement proper cleanup of temporary directories.

Currently directories are not cleaned up.
With simple code (i.e. adding a directory cleanup after the yield statement in the WorkflowTestsCollector) the temp directory is cleaned before the tests have run.
A more advanced solution is needed.

Current Directory

Hi,

I have tried using your package to test Cookie Cutter templates. I needed to pass in the current directory to the command for this and found there is no canonical means of doing so.

Using cookiecutter . as the command fails since Cookie Cutter does not recognize "." as a current directory. I thought I might use substitutions, but echo "%CD%" binds one to Windows and echo "${pwd}" binds one to *nix systems.

I would like to suggest using Python's format string method on commands for command-line substitutions. Something akin to the following would be quite clean, since Python code is largely system agnostic.

- name: Echo
  command: echo {root}
  arguments:
    root: Path.cwd()

Since Path is not generally accessible, the following is probably necessary for execution.

- name: Echo
  command: echo {root}
  arguments:
    root: from pathlib import Path; Path.cwd()

Plugin doesn't work nicely with pytest-cov

On a repository with additional regular pytest tests together with the pytest-cov plugin, the coverage no longer makes sense, since pytest-cov also considers the copied sources in /tmp to be sources. E.g.

----------- coverage: platform linux, python 3.6.7-final-0 -----------
Name                                      Stmts   Miss  Cover   Missing
-----------------------------------------------------------------------
src/create_db.py                             62     12    81%   91-93, 160-174
src/db_to_vcf.py                             41      4    90%   114-118
/tmp/pytest_wfl77b19ib/src/create_db.py      62     62     0%   16-174
/tmp/pytest_wfl77b19ib/src/db_to_vcf.py      41     41     0%   16-118
/tmp/pytest_wfwexbmt2j/src/create_db.py      62     62     0%   16-174
/tmp/pytest_wfwexbmt2j/src/db_to_vcf.py      41     41     0%   16-118
-----------------------------------------------------------------------
TOTAL                                       309    222    28%

Running coverage run --source=src -m py.test -v tests does work as expected.
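
One possible workaround sketch, mirroring the --source=src behaviour in configuration so the copies under /tmp are never measured (a .coveragerc example, not an official recommendation of the project):

[run]
source = src
omit =
    /tmp/*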

Release 1.1.2

Bugfix release

Release checklist

  • Check outstanding issues on JIRA and Github
  • Check latest documentation looks fine
  • Create a release branch
    • Set version to a stable number.
    • Change current development version in HISTORY.rst to stable version.
  • Merge the release branch into master
  • Create a test pypi package from the master branch. (Instructions.)
  • Install the packages from the test pypi repository to see if they work.
  • Created an annotated tag with the stable version number. Include changes
    from history.rst.
  • Push tag to remote.
  • Push tested packages to pypi
  • merge master branch back into develop.
  • Add updated version number to develop
  • Update the package on bioconda
  • Build the new tag on readthedocs. Only build the last patch version of
    each minor version: for example, build 1.1.1 and 1.2.0, but not 1.1.0.

Release 1.5.0

Release checklist

  • Check outstanding issues on JIRA and Github.
  • Check latest documentation looks fine.
  • Create a release branch.
    • Set version to a stable number.
    • Change current development version in HISTORY.rst to stable version.
  • Merge the release branch into master.
  • Create a test pypi package from the master branch. (Instructions.)
  • Install the packages from the test pypi repository to see if they work.
  • Created an annotated tag with the stable version number. Include changes
    from history.rst.
  • Push tag to remote.
  • Push tested packages to pypi.
  • merge master branch back into develop.
  • Add updated version number to develop.
  • Build the new tag on readthedocs. Only build the last patch version of
    each minor version: for example, build 1.1.1 and 1.2.0, but not 1.1.0.
  • Create a new release on github.
  • Update the package on conda-forge.

Improve cooperation with Travis

Travis fails if no output has been received for ten minutes. When running pipelines that depend on containers, these containers need to be pulled first, which takes time and leads to a timeout because no output is received in the meantime.

A "still running" message that displays when pipelines are running every minute that can be enabled with --travis or something could fix this.

Infinite loop when using --basetemp

When using --basetemp, an infinite loop occurs when the directory pointed to by --basetemp resides inside the directory pytest is invoked in.

E.g. the following will create an infinite loop of infinitely nested directories.

mkdir a
cd a
touch test-blah.yml
mkdir b
py.test --basetemp ${PWD}/b 

Avoid capturing stdout and stderr

Hi! I love this package. Is there any possibility that it could avoid capturing command stdout and stderr? It would make it much easier to see if something went wrong with a command when run in CI.

pytest gives an error even though the --basetemp folder is not inside the root dir

If the --basetemp folder is /path/to/folder-where-I-run-tests and the current directory is /path/to/folder, pytest gives an error because it uses string matching to see if the test base folder is inside the current directory. (See

if str(workflow_temp_dir.absolute()).startswith(str(rootdir.absolute())):
)

INTERNALERROR> ValueError: '/path/to/folder-where-I-run-tests' is a subdirectory of '/path/to/folder'. Please select a --basetemp that is not in pytest's current working directory.
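
A sketch of a path-aware containment check that avoids this false positive from plain string prefix matching; illustrative only, not necessarily the fix that was applied:

from pathlib import Path

def is_inside(child: Path, parent: Path) -> bool:
    # Resolve both paths and compare path components instead of raw strings,
    # so "/path/to/folder-where-I-run-tests" is not treated as inside "/path/to/folder".
    child = child.resolve()
    parent = parent.resolve()
    return parent == child or parent in child.parents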

Release 1.6.0

Release checklist

  • Check outstanding issues on JIRA and Github.
  • Check latest documentation looks fine.
  • Create a release branch.
    • Set version to a stable number.
    • Change current development version in HISTORY.rst to stable version.
  • Merge the release branch into master.
  • Create a test pypi package from the master branch. (Instructions.)
  • Install the packages from the test pypi repository to see if they work.
  • Created an annotated tag with the stable version number. Include changes
    from history.rst.
  • Push tag to remote.
  • Push tested packages to pypi.
  • merge master branch back into develop.
  • Add updated version number to develop.
  • Build the new tag on readthedocs. Only build the last patch version of
    each minor version: for example, build 1.1.1 and 1.2.0, but not 1.1.0.
  • Create a new release on github.
  • Update the package on conda-forge.

Allow changing the temporary directory in which workflows are run.

This is necessary for several reasons:

  • The /tmp directory might not be big enough to hold all the files. This may lead to problems on some systems.
  • When testing on clusters with a shared filesystem, /tmp is usually not on the shared filesystem and therefore not accessible to nodes on the cluster (see the example below).
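
Other issues in this list show pytest's --basetemp option being used to control where the workflow directories are created; pointing it at a shared scratch directory (the path here is only an example) would address both points:

pytest --basetemp /shared/scratch/pytest-workflow-tmp tests/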

Deprecation warnings due to pytest 5.4.0

Pytest deprecated the way collector nodes should be instantiated. This gives a lot of deprecation warnings now on fresh installs of pytest-workflow.
Pytest-workflow should be refactored to work with this change.

Document pytest ini

Workflow-specific pytest-workflow settings can be put in a pytest.ini file at the root of the repository. This is very useful for settings such as '--git-aware'.

This should be in the documentation.
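
A minimal sketch of such a pytest.ini at the repository root:

[pytest]
addopts = --git-aware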

Release 1.0.0

Release checklist

  • Check outstanding issues on JIRA and Github
  • Check latest documentation
    looks fine
  • Create a release branch
    • Set version to a stable number.
    • Change current development version in HISTORY.rst to stable version.
  • Create a test pypi package from the release branch. (Instructions.)
  • Merge the release branch into master
  • Created an annotated tag with the stable version number. Include changes
    from history.rst.
  • Push tag to remote.
  • Create packages and push to pypi
  • merge master branch back into develop.
  • Add updated version number to develop

Release 1.2.0

Release checklist

  • Check outstanding issues on JIRA and Github
  • Check latest documentation
    looks fine
  • Create a release branch
    • Set version to a stable number.
    • Change current development version in HISTORY.rst to stable version.
  • Merge the release branch into master
  • Create a test pypi package from the master branch. (Instructions.)
  • Install the packages from the test pypi repository to see if they work.
  • Created an annotated tag with the stable version number. Include changes
    from history.rst.
  • Push tag to remote.
  • Push tested packages to pypi
  • merge master branch back into develop.
  • Add updated version number to develop
  • Update the package on bioconda
  • Build the new tag on readthedocs. Only build the last patch version of
    each minor version: for example, build 1.1.1 and 1.2.0, but not 1.1.0.

Release 1.1.1

Release checklist

  • Check outstanding issues on JIRA and Github
  • Check latest documentation
    looks fine
  • Create a release branch
    • Set version to a stable number.
    • Change current development version in HISTORY.rst to stable version.
  • Merge the release branch into master
  • Create a test pypi package from the master branch. (Instructions.)
  • Install the packages from the test pypi repository to see if they work.
  • Created an annotated tag with the stable version number. Include changes
    from history.rst.
  • Push tag to remote.
  • Push tested packages to pypi
  • merge master branch back into develop.
  • Add updated version number to develop
  • Update the package on bioconda
  • Build the new tag on readthedocs. Only build the last patch version of
    each minor version: for example, build 1.1.1 and 1.2.0, but not 1.1.0.

Add decorators in workflow

Instead of adding self.wait() everywhere, we could use decorators in the Workflow class, as suggested by @sndrtj in #31. This will make the code more maintainable.
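
A sketch of the decorator idea with hypothetical names; this is not the code that was actually merged:

import functools

def requires_finished_workflow(method):
    # Wait for the workflow to finish before the wrapped method runs.
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        self.wait()
        return method(self, *args, **kwargs)
    return wrapper

class Workflow:
    def wait(self):
        ...  # block until the workflow process has exited

    @requires_finished_workflow
    def exit_code(self):
        ...  # safe to inspect: the workflow has completed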

Add `contains_regex` and `must_not_contain_regex` keys to test arsenal.

Pytest-workflow supports exact file matching (with md5sums) and exact string matching in files. However, this does not cover all test cases. Therefore, contains_regex and must_not_contain_regex can be added. These are nice intermediates between declaring exact string matches in the YAML file and using a custom Python test.

Release 0.2.0

Release checklist

  • Create a release branch
    • Set version to a stable number.
    • Change current development version in HISTORY.rst to stable version.
  • Create a test pypi package from the release branch
  • Merge the release branch into master
  • Created an annotated tag with the stable version number. Include changes from history.rst.
  • Push tag to remote.
  • Create packages and push to pypi
  • merge master branch back into develop.
  • Add updated version number to develop

Release 1.1.0

Release checklist

  • Check outstanding issues on JIRA and Github
  • Check latest documentation
    looks fine
  • Create a release branch
    • Set version to a stable number.
    • Change current development version in HISTORY.rst to stable version.
  • Create a test pypi package from the release branch. (Instructions.)
  • Merge the release branch into master
  • Created an annotated tag with the stable version number. Include changes
    from history.rst.
  • Push tag to remote.
  • Create packages and push to pypi
  • merge master branch back into develop.
  • Add updated version number to develop

pytest-workflow crashes when files in git are removed

pytest-workflow crashes when it is run with --git-aware when files that are added in git are removed from the filesystem:

../../miniconda3/envs/freebayes-snakemake/lib/python3.10/site-packages/pytest_workflow/plugin.py:439: in collect
    workflow = self.queue_workflow()
../../miniconda3/envs/freebayes-snakemake/lib/python3.10/site-packages/pytest_workflow/plugin.py:397: in queue_workflow
    duplicate_tree(root_dir, tempdir,
../../miniconda3/envs/freebayes-snakemake/lib/python3.10/site-packages/pytest_workflow/util.py:161: in duplicate_tree
    copy(src_path, dest_path)
../../miniconda3/envs/freebayes-snakemake/lib/python3.10/shutil.py:434: in copy2
    copyfile(src, dst, follow_symlinks=follow_symlinks)
../../miniconda3/envs/freebayes-snakemake/lib/python3.10/shutil.py:254: in copyfile
    with open(src, 'rb') as fsrc:
E   FileNotFoundError: [Errno 2] No such file or directory: 'freebayes-snakemake/tests/data/sample1_R1.fastq.gz'

I think it would probably be fine to simply skip the copying of "git" files if they are missing, and let the user figure out what is going on. Most likely, they removed those files just moments before, so the fact that they are not copied should not surprise them.

Removal of 'py' dependencies

Because we use some of pytest's internals, we also rely on some functionality from pytest's py library.

py is not very nice. It avoids the stdlib (by not using tempfile, for example) and has implemented a lot of its own methods.

I would prefer to use stdlib methods wherever possible, to make our code more readable and maintainable. I am thinking specifically of the temporary directory for workflows. This is now implemented using pytest's methods, in the hope that pytest would handle cleanup automatically. Since we now handle cleanup explicitly, the use of pytest's internal methods is no longer necessary and we can use shutil.copytree, shutil.rmtree and tempfile.mkdtemp instead.
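
A sketch of what the stdlib-only flow could look like; the paths and prefix are illustrative:

import shutil
import tempfile
from pathlib import Path

# Create a private temporary directory and copy the project into it.
tempdir = Path(tempfile.mkdtemp(prefix="pytest_workflow_"))
workflow_dir = tempdir / "workflow"
shutil.copytree("/path/to/project", workflow_dir)   # hypothetical project root

# ... run the workflow and its tests inside workflow_dir ...

shutil.rmtree(tempdir)   # explicit cleanup once everything has finished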

Launching multiple workflows simultaneously

Pytest-workflow now runs the workflows one at a time. This can be very slow.
If more compute power is available (for example in a cluster environment), there should be an option to increase the number of workflows spawned.

Working Directory: Windows Permissions

Hi,

I have been using your package on a Windows machine, where it sets up a temporary working directory under the user's C:\Users\USERNAME\AppData\Local\Temp directory. Once the tests have completed and your package tries to remove the temporary directory, however, the permissions turn out to be incorrect and the directory cannot be removed.

Is there a configuration setting to alter the location of the temporary working directory? Otherwise, if the fault lies with one's system, do you know of a way to escalate your package's permissions so that it can remove the temporary folder?
