python-tap / pytest-tap
Test Anything Protocol (TAP) reporting plugin for pytest
Home Page: http://tappy.readthedocs.io/en/latest/
License: BSD 2-Clause "Simplified" License
Hi there,
Thanks for your work.
I get the following error when I'm running pytest --tap-stream:
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/usr/local/lib/python3.6/site-packages/_pytest/main.py", line 180, in wrap_session
INTERNALERROR> config._do_configure()
INTERNALERROR> File "/usr/local/lib/python3.6/site-packages/_pytest/config/__init__.py", line 617, in _do_configure
INTERNALERROR> self.hook.pytest_configure.call_historic(kwargs=dict(config=self))
INTERNALERROR> File "/usr/local/lib/python3.6/site-packages/pluggy/hooks.py", line 306, in call_historic
INTERNALERROR> res = self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR> File "/usr/local/lib/python3.6/site-packages/pluggy/manager.py", line 67, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "/usr/local/lib/python3.6/site-packages/pluggy/manager.py", line 61, in <lambda>
INTERNALERROR> firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
INTERNALERROR> File "/usr/local/lib/python3.6/site-packages/pluggy/callers.py", line 208, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "/usr/local/lib/python3.6/site-packages/pluggy/callers.py", line 80, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "/usr/local/lib/python3.6/site-packages/pluggy/callers.py", line 187, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/usr/local/lib/python3.6/site-packages/pytest_sugar.py", line 171, in pytest_configure
INTERNALERROR> sugar_reporter = SugarTerminalReporter(standard_reporter)
INTERNALERROR> File "/usr/local/lib/python3.6/site-packages/pytest_sugar.py", line 208, in __init__
INTERNALERROR> TerminalReporter.__init__(self, reporter.config)
INTERNALERROR> AttributeError: 'NoneType' object has no attribute 'config'
1..0
Any idea how this can be resolved?
Thanks
I attempted to write a wrapper that always passes --tap-stream to the underlying pytest call, and this appears to break -h:
$ pytest --tap-stream -h
1..0
Traceback (most recent call last):
File "/home/leonard/.local/bin/pytest", line 8, in <module>
sys.exit(console_main())
File "/usr/local/lib/python3.8/dist-packages/_pytest/config/__init__.py", line 185, in console_main
code = main()
File "/usr/local/lib/python3.8/dist-packages/_pytest/config/__init__.py", line 162, in main
ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(
File "/usr/local/lib/python3.8/dist-packages/pluggy/hooks.py", line 286, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/usr/local/lib/python3.8/dist-packages/pluggy/manager.py", line 93, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/usr/local/lib/python3.8/dist-packages/pluggy/manager.py", line 84, in <lambda>
self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
File "/usr/local/lib/python3.8/dist-packages/pluggy/callers.py", line 208, in _multicall
return outcome.get_result()
File "/usr/local/lib/python3.8/dist-packages/pluggy/callers.py", line 80, in get_result
raise ex[1].with_traceback(ex[2])
File "/usr/local/lib/python3.8/dist-packages/pluggy/callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "/usr/local/lib/python3.8/dist-packages/_pytest/helpconfig.py", line 149, in pytest_cmdline_main
showhelp(config)
File "/usr/local/lib/python3.8/dist-packages/_pytest/helpconfig.py", line 159, in showhelp
tw = reporter._tw
AttributeError: 'NoneType' object has no attribute '_tw'
Tests that result in an ERROR at setup (unittest.TestCase cases run with pytest and using a pytest fixture) are not present in the TAP file, although they are present in the pytest log; the ones that resulted OK are in the TAP file. When a test fails it is also present in the TAP file, but not when there is any kind of error.
I would expect errors to be in the TAP file too. Is that a correct assumption? If so, let me know and I will try to provide a minimal example to reproduce.
============================= test session starts ==============================
platform linux -- Python 3.5.2, pytest-5.0.1, py-1.8.0, pluggy-0.12.0 -- /nzutils/env/bin/python3
cachedir: .pytest_cache
rootdir: /nzutils/application/columbus, inifile: setup.cfg
plugins: cov-2.7.1, flake8-1.0.4, flask-0.15.0, forked-1.0.2, mccabe-1.0, openfiles-0.4.0, tap-2.2, xdist-1.29.0
collecting ... collected 36 items
nzutils/application/columbus/tests/test_api.py::test_api_get_list_users PASSED [ 2%]
nzutils/application/columbus/tests/test_api.py::test_api_get_list_groups PASSED [ 5%]
nzutils/application/columbus/tests/test_api.py::test_get_metadata_by_non_existing_identifier PASSED [ 8%]
nzutils/application/columbus/tests/test_api.py::test_get_metadata_by_non_existing_table_path PASSED [ 11%]
nzutils/application/columbus/tests/test_api.py::test_get_metadata_by_existing_identifier PASSED [ 13%]
nzutils/application/columbus/tests/test_app.py::test_env PASSED [ 16%]
nzutils/application/columbus/tests/test_app.py::test_app PASSED [ 19%]
nzutils/application/columbus/tests/test_entity_groups.py::TestEntityGroups::testCountColumn PASSED [ 22%]
nzutils/application/columbus/tests/test_entity_groups.py::TestEntityGroups::testDeleted PASSED [ 25%]
nzutils/application/columbus/tests/test_entity_groups.py::TestEntityGroups::testLatestIngested PASSED [ 27%]
nzutils/application/columbus/tests/test_entity_groups.py::TestEntityGroups::testRowStats PASSED [ 30%]
nzutils/application/columbus/tests/test_index.py::test_homepage PASSED [ 33%]
nzutils/application/columbus/tests/test_index.py::test_users_list PASSED [ 36%]
nzutils/application/columbus/tests/test_index.py::test_users_userinfo PASSED [ 38%]
nzutils/application/columbus/tests/test_index.py::test_entities_list_ordering_by_table_name PASSED [ 41%]
nzutils/application/columbus/tests/test_index.py::test_entities_list_ordering_by_table_schema PASSED [ 44%]
nzutils/application/columbus/tests/test_index.py::test_ordering_on_each_of_entity_view_list_columns_works PASSED [ 47%]
nzutils/application/columbus/tests/test_index.py::test_user_can_not_edit_entity_with_restricted_access PASSED [ 50%]
nzutils/application/columbus/tests/test_index.py::test_only_the_owner_can_edit_entity_with_restricted_access PASSED [ 52%]
nzutils/application/columbus/tests/test_index.py::test_user_with_user_access_can_edit_entity PASSED [ 55%]
nzutils/application/columbus/tests/test_index.py::test_user_without_required_group_membership_can_not_edit_entity PASSED [ 58%]
nzutils/application/columbus/tests/test_index.py::test_user_with_group_access_can_edit_entity PASSED [ 61%]
nzutils/application/columbus/tests/test_index.py::test_user_with_group_membership_can_edit_entity PASSED [ 63%]
nzutils/application/columbus/tests/test_models.py::test_env PASSED [ 66%]
nzutils/application/columbus/tests/test_models.py::test_app PASSED [ 69%]
nzutils/application/columbus/tests/test_models.py::test_extraction_in_display_table_name_is_correct PASSED [ 72%]
nzutils/application/columbus/tests/test_models.py::test_sorting_by_hybrid_property_display_table_name PASSED [ 75%]
nzutils/application/columbus/tests/test_models.py::UserGroupsTest::test_get_user_groups_for_newly_created_user PASSED [ 77%]
nzutils/application/columbus/tests/test_models.py::UserGroupsTest::test_get_user_groups_for_super_user PASSED [ 80%]
nzutils/application/columbus/tests/test_models.py::UserGroupsTest::test_get_user_groups_non_existing_user PASSED [ 83%]
nzutils/application/columbus/tests/test_models.py::UserGroupsTest::test_is_superuser PASSED [ 86%]
nzutils/application/columbus/tests/test_ui.py::TestEntityFilteringAndSearchingUI::test_all_filter_options_rendered ERROR [ 88%]
nzutils/application/columbus/tests/test_ui.py::TestEntityFilteringAndSearchingUI::test_filtering_by_rows ERROR [ 91%]
nzutils/application/columbus/tests/test_ui.py::TestEntityFilteringAndSearchingUI::test_filters_combined_with_free_text_search ERROR [ 94%]
nzutils/application/columbus/tests/test_ui.py::TestEntityFilteringAndSearchingUI::test_free_text_search_works ERROR [ 97%]
nzutils/application/columbus/tests/test_ui.py::TestEntityFilteringAndSearchingUI::test_perform_basic_ui_check ERROR [100%]
Our CI infra likes to have its plan up front so that it can print a nice progress bar. Currently, when combining pytest-xdist and pytest-tap everything works nicely, except that the plan comes at the end. That's valid TAP, but not so nice for us.
It's possible to use the pytest_xdist_node_collection_finished() hook to write the test plan up front.
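To make the idea concrete, here is a minimal sketch of what such a conftest.py hook could look like. The helper `format_plan_line` and the exact print-based wiring are my own assumptions for illustration; only the `pytest_xdist_node_collection_finished` hook name comes from the issue above, and this has not been exercised against a real xdist run.

```python
def format_plan_line(test_ids):
    """Build a TAP plan line ("1..N") from a list of collected test IDs."""
    return "1..{}".format(len(test_ids))


def pytest_xdist_node_collection_finished(node, ids):
    # Hypothetical conftest.py hook: emit the plan as soon as a worker
    # node finishes collecting, so "1..N" appears before any results.
    print(format_plan_line(ids))
```

A real implementation would route the line through the plugin's tracker rather than printing directly, and would need to deduplicate across workers.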
Hi.
I was curious if you have considered submitting this to pytest-dev. Just curious. I feel like people could really benefit from this project being hosted there.
When the --tap-stream option is used, the plugin also generates a file. It should not behave this way; no files should be generated.
I have a test suite which looks roughly like
1..1234
ok 1 foo
...
ok 900 bar
The first line references a larger number of tests than actually appear in the output, which my TAP consumer doesn't like and treats as an error. I haven't looked into creating a reproducible example yet, but I thought I'd file it anyway for now.
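A small self-contained checker like the following can confirm whether a given TAP document exhibits the mismatch. This is an illustrative sketch, not part of pytest-tap or tappy; the function name and the simplified line matching are my own.

```python
import re


def plan_matches(tap_text):
    """Return True when the "1..N" plan line agrees with the number of
    "ok"/"not ok" result lines in the TAP text."""
    plan = None
    results = 0
    for raw in tap_text.splitlines():
        line = raw.strip()
        m = re.match(r"1\.\.(\d+)$", line)
        if m:
            plan = int(m.group(1))
        elif re.match(r"(not )?ok\b", line):
            results += 1
    return plan == results
```

Running it over the suite's combined output would show directly whether the plan claims more tests than were emitted.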
I stumbled on pytest-tap through PyPI and wanted to try it out. I went to https://pypi.org/project/pytest-tap, but it didn't provide any guidance on how to use it. I installed it into my test suite, but nothing happened. I tried passing --tap (following the convention of so many other pytest plugins that require --{name} to be passed to enable the pytest-{name} plugin). So I followed the "Developer documentation" link, but that went to the tappy documentation, which also didn't explain how to use the plugin. I was eventually able to find the README, which provided the necessary command line parameter, --tap-stream.
I propose that the long_description should contain the README, or at least link to it as the primary documentation, rather than presenting the tappy docs as its own.
import unittest

class Test(unittest.TestCase):
    @unittest.expectedFailure
    def test_should_fail(self):
        assert 0
> python3 -m pytest test.py --tap-stream
1..1
ok 1 test.py::Test.test_should_fail # TODO expected failure: reason:
I would have expected it to write out "not ok", since the TAP spec says of TODO tests: "They are not expected to succeed".
The problem is that I have a TAP consumer which complains that this test succeeded unexpectedly.
Am I missing something?
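The distinction the consumer is drawing can be sketched as a tiny classifier over TAP lines: a TODO directive on a "not ok" line is a tolerated expected failure, while the same directive on an "ok" line is an unexpected success that many harnesses flag. The function below is illustrative only (its name and return values are my own, not part of any TAP library).

```python
def classify_todo(tap_line):
    """Classify a TAP result line carrying a TODO directive.

    Per common TAP semantics:
    - "not ok ... # TODO" -> expected failure (tolerated)
    - "ok ... # TODO"     -> unexpected success (often flagged)
    Returns None when the line has no TODO directive.
    """
    line = tap_line.strip()
    if "# TODO" not in line:
        return None
    if line.startswith("not ok"):
        return "expected failure"
    if line.startswith("ok"):
        return "unexpected success"
    return None
```

Under this reading, the "ok ... # TODO" line emitted for the expectedFailure test above is exactly what a strict consumer would report as an unexpected success.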
Hello,
Is it possible to get the relative path to the test files in the output of the --tap-combined file?
Currently it only produces the test name and skips the path to the file the test function lives in.
I'd like an option to include the path to the test file in which the function runs, as pytest itself provides:
blocks/test_stuff2.py::stuff[xml_data0] SKIPPED
blocks/test_stuff.py::test_stuff PASSED
How the TAP file currently presents the output:
# TAP results for stuff
ok 1 - test_stuff
# TAP results for test_stuff2
ok 2 - test_stuff2 # SKIP not implemented
....
Suggested behaviour:
# TAP results for stuff
ok 1 - blocks/test_stuff.py::test_stuff
# TAP results for test_stuff2
ok 2 - blocks/test_stuff2.py::test_stuff2 # SKIP not implemented
....
I'd be happy to help if you consider this option!
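The requested option boils down to choosing between a pytest node ID and its bare function name when building the TAP line. A hedged sketch, assuming node IDs of the form "path/to/file.py::name" (the function and flag name are hypothetical, not pytest-tap API):

```python
def tap_test_name(nodeid, include_path=False):
    """Build the TAP test name from a pytest node ID.

    With include_path=True the full "path::name" node ID is kept
    (the behaviour requested above); otherwise only the final
    "::"-separated component, i.e. the bare test name, is used.
    """
    if include_path:
        return nodeid
    return nodeid.rsplit("::", 1)[-1]
```

A `--tap-include-path` style flag could then toggle this in the reporter.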
When I try installing via nix, I get the following issue:
Error: Could not find a version that satisfies the requirement tap.py>=2.5 (from pytest-tap=2.3).
Any thoughts on what this means? I am on Python 3.7.
Hi!
We're using pytest==3.1.0 and pytest-tap==2.0 and are seeing the following output:
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.6/site-packages/_pytest/main.py", line 101, in wrap_session
config._do_configure()
File "/app/.heroku/python/lib/python3.6/site-packages/_pytest/config.py", line 906, in _do_configure
self.hook.pytest_configure.call_historic(kwargs=dict(config=self))
File "/app/.heroku/python/lib/python3.6/site-packages/_pytest/vendored_packages/pluggy.py", line 750, in call_historic
self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
File "/app/.heroku/python/lib/python3.6/site-packages/_pytest/vendored_packages/pluggy.py", line 339, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/app/.heroku/python/lib/python3.6/site-packages/_pytest/vendored_packages/pluggy.py", line 334, in <lambda>
_MultiCall(methods, kwargs, hook.spec_opts).execute()
File "/app/.heroku/python/lib/python3.6/site-packages/_pytest/vendored_packages/pluggy.py", line 614, in execute
res = hook_impl.function(*args)
File "/app/.heroku/python/lib/python3.6/site-packages/pytest_tap/plugin.py", line 45, in pytest_configure
config.pluginmanager.unregister(reporter)
File "/app/.heroku/python/lib/python3.6/site-packages/_pytest/vendored_packages/pluggy.py", line 393, in unregister
assert plugin is not None, "one of name or plugin needs to be specified"
AssertionError: one of name or plugin needs to be specified
Has anything changed in the latest versions of pytest which would break pytest-tap? Is this another kind of incompatibility?
Thanks in advance!
Greg
It would be nice if the streaming reporter (--tap-stream option) printed the 1..N line up front (instead of at the end, like now).
Using the pytest hook pytest_runtestloop would make this possible (but I guess it would also take a PR in the tappy repo as well, since that's what prints out the 1..N string). If you think this is reasonable, I can submit PRs.
Once tappy 2.0 is out, bump to 2.0 so that the implicit nose dependency is removed.
Current documentation shows this output
# TAP results for TestParser
⋮
ok 8 - The parser extracts an ok line.
⋮
1..10
Note the test description, which comes from
class TestParser(unittest.TestCase):
    """Tests for tap.parser.Parser"""

    def test_finds_ok(self):
        """The parser extracts an ok line."""
        parser = Parser()
        line = parser.parse_line("ok - This is a passing test line.")
        self.assertEqual("test", line.category)
        self.assertTrue(line.ok)
        self.assertTrue(line.number is None)
⋮
However, the output currently produced by the pytest plugin looks like:
$ py.test --tap-stream tap
1..110
⋮
ok 41 tap/tests/test_parser.py::TestParser.test_finds_ok
⋮
Provided I'm not making any mistakes in my assessment, it looks like this functionality was dropped. Is there a way to provide a sensible TAP description?
Thanks!
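Restoring the old behaviour would mean preferring a test's docstring over its node ID when emitting the TAP description. A minimal sketch of that fallback logic, assuming the reporter has access to the test item's docstring (the function name is mine, not pytest-tap's):

```python
def tap_description(nodeid, docstring=None):
    """Prefer the first docstring line as the TAP test description,
    falling back to the pytest node ID when no docstring exists."""
    if docstring and docstring.strip():
        return docstring.strip().splitlines()[0]
    return nodeid
```

In a pytest plugin, the docstring is reachable via the test item's underlying function (e.g. `item.function.__doc__`), so wiring this in would be a small change.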
I really like the plugin so far.
While debugging some tests, I found one that was long running.
But I had to look at the code to figure that out, and I would rather my test feedback tell me about it.
Currently my test output looks like:
ok 1 apps/my_app/tests/some_test.py::my_test_name
and I would like to have the option of:
1 apps/my_app/tests/some_test.py::my_test_name ... ok
in other words
<test_num> <test_identifier(path)> <has_begun_run> <status>
instead of
<status> <test_num> <test_identifier(path)>
That way I could know which one was running and recognize hanging more easily.
Any thoughts?
thanks much!
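The two layouts described above differ only in where the status token goes, so the change could be isolated in a single formatting helper. A sketch under the assumption of a hypothetical option (nothing here is existing pytest-tap API):

```python
def tap_result_line(status, number, name, status_last=False):
    """Format a TAP result line in one of two layouts:

    default:     <status> <test_num> <test_identifier>
    status_last: <test_num> <test_identifier> ... <status>

    The second layout lets the identifier print before the test runs,
    making a hanging test easy to spot.
    """
    if status_last:
        return "{} {} ... {}".format(number, name, status)
    return "{} {} {}".format(status, number, name)
```

Note that the status-last layout is not a standard TAP result line, so most TAP consumers would reject it; it would only be suitable for interactive terminal output.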
Drop 3.4 and add 3.7.
As of TAP version 13, YAML blocks are supported. Tappy can read these YAML blocks, but it would be great if the pytest-tap plugin could also write them, based on information coming from the tests themselves.
I implemented this locally using a decorator for the tests and by adding an extra hook method to the pytest tap plugin.py.
The decorator takes a string which represents the body of the YAML block, which should be added to the tap report, and adds it to the TestCase in question.
The hook method then takes this additional information and adds it to the TAP line to be written.
In this way the YAML block is defined by the test writer - it might also be nice to be able to fill the YAML block with some of the other automated outputs of tests.
I've got some code I can share but new to this area so not sure if it's the right way to do it. Links very closely to an issue I just submitted here
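For reference, a TAP version 13 YAML block is an indented document delimited by "---" and "...", placed after its test line. The renderer below is a minimal illustrative sketch (flat key/value pairs only, no nesting), not the code described above:

```python
def format_yaml_block(data, indent="  "):
    """Render a flat dict as a TAP version 13 YAML diagnostic block.

    Keys are emitted in sorted order between the "---" and "..."
    delimiters, each line prefixed with the given indent.
    """
    lines = [indent + "---"]
    for key in sorted(data):
        lines.append("{}{}: {}".format(indent, key, data[key]))
    lines.append(indent + "...")
    return "\n".join(lines)
```

A decorator-based approach like the one described would attach such a dict to the test, and the reporter would emit the rendered block right after the corresponding "ok"/"not ok" line.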
The handling of skipped tests assumes that 'longrepr' is a tuple, which is not the case for xfailed tests.
Error:
INTERNALERROR> File "...../pytest_tap/plugin.py", line 62, in pytest_runtest_logreport
INTERNALERROR> reason = report.longrepr[2].split(':', 1)[1].strip()
INTERNALERROR> AttributeError: ReprExceptionInfo instance has no attribute '__getitem__'
Offending code:
def pytest_runtest_logreport(report):
    ...
    elif report.outcome == 'skipped':
        reason = report.longrepr[2].split(':', 1)[1].strip()
Example of a test that crashes:
import pytest

@pytest.mark.xfail()
def test_fail():
    assert 1 == 2
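One possible fix is to branch on the type of longrepr before indexing into it. This is a sketch of the idea, not the patch that was eventually shipped: for plain skips longrepr is a (path, lineno, message) tuple, while for xfailed tests it is an exception-repr object that only supports str().

```python
def skip_reason(longrepr):
    """Extract a human-readable reason from a report's longrepr.

    For skips, longrepr is a (path, lineno, "Skipped: <reason>") tuple;
    for xfailed tests it is an exception-repr object, so we fall back
    to its string form instead of indexing into it.
    """
    if isinstance(longrepr, tuple):
        message = longrepr[2]
        if ":" in message:
            return message.split(":", 1)[1].strip()
        return message
    return str(longrepr)
```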
Not sure if this is by design or a bug, but it would be helpful to output the errors somehow when a test errors out during the collection phase.
We recently experienced a test run exiting because a test script imported some code that relied on a database being available in order to execute. The import was at the top of the file and should have been inside a test method.
The output from the tap stream was:
1..0
Running the test without the --tap-stream
argument showed what the error was:
-------- ERROR collecting tests.py --------
tests.py:16: in <module>
from my_module import MyClass
...
E ProgrammingError: relation "x" does not exist
E LINE 1: ...ion", "x"."y" FROM "x...
E ^
!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!
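One way the plugin could surface this would be to emit a failing TAP line per collection error instead of the silent "1..0" plan. A hedged sketch of such a line and the hook that could feed it (both the formatter and the "# collection error" wording are hypothetical, not existing pytest-tap behaviour):

```python
def tap_collect_error_line(number, nodeid):
    """Format a failing TAP line for a module that errored during
    collection, so the failure is visible to TAP consumers."""
    return "not ok {} - {} # collection error".format(number, nodeid)


# Sketch of the wiring in a plugin (untested):
# def pytest_collectreport(report):
#     if report.failed:
#         emit(tap_collect_error_line(next_number(), report.nodeid))
```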
This coverage was 100% in tappy, but it seems to have regressed when moved to a separate package. Fix it up.
In pytest, a 'strict' xfail test is one that is expected to fail, and when it passes, that is considered to be a test failure.
I have the following test
import pytest

@pytest.mark.xfail(strict=True)
def test_fail():
    # This errors, which means the test passes
    assert 1 == 2
which reports:
ok 1 - test_fail # SKIP
1..1
The 'ok' part is correct, but the 'SKIP' is incorrect, since it did in fact run the test. (This can confuse the user, especially when obfuscated by a TAP driver)
Interestingly, the test
import pytest

@pytest.mark.xfail(strict=True)
def test_fail():
    # This test case doesn't error, which means the test should fail
    assert 1 == 1
reports:
not ok 1 - test_fail
# [XPASS(strict)]
1..1
That is correct and doesn't have the above problem. However, with strict=False it still fails, which isn't correct (the strict flag seems to be ignored in all cases).
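The expected mapping from pytest xfail results to TAP can be written out explicitly. The following is an assumed mapping for discussion, not pytest-tap's actual code; it relies on pytest reporting an xfail that failed as predicted with outcome "skipped", and a strict xfail that unexpectedly passed with outcome "failed".

```python
def tap_status_for_xfail(outcome, strict):
    """Assumed TAP status for a test carrying an xfail marker.

    outcome: the pytest report outcome ("passed"/"failed"/"skipped").
    strict:  whether the marker was xfail(strict=True).
    """
    if outcome == "skipped":
        # Ran and failed as predicted: report "not ok" with a TODO
        # directive, not a SKIP, since the test actually executed.
        return "not ok # TODO expected failure"
    if outcome == "failed" and strict:
        # Strict xfail that passed: a genuine test failure.
        return "not ok # XPASS(strict)"
    if outcome == "passed":
        # Non-strict xpass: pytest treats it as a pass.
        return "ok # TODO unexpectedly succeeded"
    return "not ok"
```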