hrcorval / behavex
BDD test wrapper on top of Behave for parallel test executions and more!
Home Page: https://github.com/hrcorval/behavex
License: MIT License
Is your feature request related to a problem? Please describe.
Nowadays the Allure report is quite a standard for automated test reports (https://github.com/allure-framework).
To run E2E tests with behave and an Allure report, I have to use the following command:
behave --format allure --outfile allure-report
Describe the solution you'd like
Is it possible for behavex to support the -f, --format and -o, --outfile arguments?
Describe alternatives you've considered
Currently the only alternative with behavex is to use JUnit reports, which are quite ugly and don't support attaching screenshots.
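For reference, the behave-side setup being requested here: the Allure formatter ships in the allure-behave package and can be selected on the command line or persisted in behave.ini. A sketch (assuming allure-behave is installed; the output directory name is illustrative):

```ini
# behave.ini -- sketch; requires the allure-behave package, which provides
# the allure_behave.formatter:AllureFormatter class
[behave]
format = allure_behave.formatter:AllureFormatter
outfiles = allure-results
```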
Describe the bug
I have a Docker image build that installs behavex and other dependencies using Poetry. Occasionally behavex fails to install because the htmlmin dependency that behavex depends on fails to install. This is an intermittent problem that I'm not sure how to work around.
I've put the poetry stacktrace at the end.
To Reproduce
Run poetry install when using Poetry to manage behavex as a dependency.
Expected behavior
Installation of behavex to succeed.
Desktop (please complete the following information):
docker.io/python:3.10.8-slim
Additional context
• Installing yarl (1.8.2)
CalledProcessError
Command '['/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/bin/python', '-m', 'pip', 'install', '--use-pep517', '--disable-pip-version-check', '--prefix', '/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10', '--no-deps', '/root/.cache/pypoetry/artifacts/45/0d/04/012fb55a6ef90f9e62582c231e7c4564e32686eb54811f499cfdccfce5/htmlmin-0.1.12.tar.gz']' returned non-zero exit status 2.
at /usr/local/lib/python3.10/subprocess.py:526 in run
522│ # We don't call process.wait() as .__exit__ does that for us.
523│ raise
524│ retcode = process.poll()
525│ if check and retcode:
→ 526│ raise CalledProcessError(retcode, process.args,
527│ output=stdout, stderr=stderr)
528│ return CompletedProcess(process.args, retcode, stdout, stderr)
529│
530│
The following error occurred when trying to handle this error:
EnvCommandError
Command ['/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/bin/python', '-m', 'pip', 'install', '--use-pep517', '--disable-pip-version-check', '--prefix', '/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10', '--no-deps', '/root/.cache/pypoetry/artifacts/45/0d/04/012fb55a6ef90f9e62582c231e7c4564e32686eb54811f499cfdccfce5/htmlmin-0.1.12.tar.gz'] errored with the following return code 2, and output:
Processing /root/.cache/pypoetry/artifacts/45/0d/04/012fb55a6ef90f9e62582c231e7c4564e32686eb54811f499cfdccfce5/htmlmin-0.1.12.tar.gz
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
ERROR: Exception:
Traceback (most recent call last):
File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/cli/base_command.py", line 160, in exc_logging_wrapper
status = run_func(*args)
File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/cli/req_command.py", line 247, in wrapper
return func(self, options, args)
File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/commands/install.py", line 419, in run
requirement_set = resolver.resolve(
File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 73, in resolve
collected = self.factory.collect_root_requirements(root_reqs)
File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/factory.py", line 491, in collect_root_requirements
req = self._make_requirement_from_install_req(
File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/factory.py", line 453, in _make_requirement_from_install_req
cand = self._make_candidate_from_link(
File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/factory.py", line 206, in _make_candidate_from_link
self._link_candidate_cache[link] = LinkCandidate(
File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 297, in __init__
super().__init__(
File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 162, in __init__
self.dist = self._prepare()
File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 231, in _prepare
dist = self._prepare_distribution()
File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 308, in _prepare_distribution
return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)
File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/operations/prepare.py", line 491, in prepare_linked_requirement
return self._prepare_linked_requirement(req, parallel_builds)
File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/operations/prepare.py", line 577, in _prepare_linked_requirement
dist = _get_prepared_distribution(
File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/operations/prepare.py", line 69, in _get_prepared_distribution
abstract_dist.prepare_distribution_metadata(
File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/distributions/sdist.py", line 61, in prepare_distribution_metadata
self.req.prepare_metadata()
File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/req/req_install.py", line 539, in prepare_metadata
self.metadata_directory = generate_metadata(
File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/operations/build/metadata.py", line 35, in generate_metadata
distinfo_dir = backend.prepare_metadata_for_build_wheel(metadata_dir)
File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/utils/misc.py", line 722, in prepare_metadata_for_build_wheel
return super().prepare_metadata_for_build_wheel(
File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_impl.py", line 186, in prepare_metadata_for_build_wheel
return self._call_hook('prepare_metadata_for_build_wheel', {
File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_impl.py", line 321, in _call_hook
raise BackendUnavailable(data.get('traceback', ''))
pip._vendor.pyproject_hooks._impl.BackendUnavailable: Traceback (most recent call last):
File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 77, in _build_backend
obj = import_module(mod_path)
File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 992, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/tmp/pip-build-env-69a8sgxm/overlay/lib/python3.10/site-packages/setuptools/__init__.py", line 8, in <module>
import _distutils_hack.override # noqa: F401
ModuleNotFoundError: No module named '_distutils_hack.override'
at /usr/local/lib/python3.10/site-packages/poetry/utils/env.py:1476 in _run
1472│ output = subprocess.check_output(
1473│ command, stderr=subprocess.STDOUT, env=env, **kwargs
1474│ )
1475│ except CalledProcessError as e:
→ 1476│ raise EnvCommandError(e, input=input_)
1477│
1478│ return decode(output)
1479│
1480│ def execute(self, bin: str, *args: str, **kwargs: Any) -> int:
The following error occurred when trying to handle this error:
PoetryException
Failed to install /root/.cache/pypoetry/artifacts/45/0d/04/012fb55a6ef90f9e62582c231e7c4564e32686eb54811f499cfdccfce5/htmlmin-0.1.12.tar.gz
at /usr/local/lib/python3.10/site-packages/poetry/utils/pip.py:51 in pip_install
47│
48│ try:
49│ return environment.run_pip(*args)
50│ except EnvCommandError as e:
→ 51│ raise PoetryException(f"Failed to install {path.as_posix()}") from e
52│
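The BackendUnavailable failure above comes from setuptools' _distutils_hack module going missing inside pip's isolated build environment while building the htmlmin sdist. A possible workaround (an untested sketch; it assumes Poetry is configured with virtualenvs.create=false, as is common in containers, so a pre-installed htmlmin satisfies the dependency and the flaky sdist build is avoided at `poetry install` time):

```dockerfile
FROM docker.io/python:3.10.8-slim
# Build the htmlmin wheel once with fresh build tools, then install it from
# the local wheel so `poetry install` finds the requirement already satisfied.
RUN pip install --upgrade pip setuptools wheel \
 && pip wheel --wheel-dir /tmp/wheels htmlmin==0.1.12 \
 && pip install --no-index --find-links /tmp/wheels htmlmin==0.1.12
# ... project setup and `poetry install` follow as before
```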
Currently my test case folder structure is as follows, and this is working fine:
features
    test1
        testcase1.feature
    test2
        testcase2.feature
    environment.py
pages
    loginpage.py
    homepage.py
Now I have rearranged it into the folder structure below, but this is not working. Please let me know whether there is an option to rearrange the folder structure as the developer wishes:
test
    features
        test1
            testcase1.feature
        test2
            testcase2.feature
        environment.py
    pages
        loginpage.py
        homepage.py
Once installed, the BehaveX wrapper should be executed by calling the "behavex" executable command, provided in the Python scripts path.
However, we should also be able to execute the wrapper by just calling the main method, in the following way:
python -m behavex -t <tag>
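Supporting `python -m behavex` requires a `__main__.py` module inside the package. A minimal demonstration of the mechanism using a throwaway package (`demo_pkg` and its `runner.main` are illustrative stand-ins, not behavex's actual layout):

```python
import subprocess
import sys
import tempfile
import textwrap
from pathlib import Path

# Create a throwaway package with a __main__.py, mimicking what a package
# needs so `python -m <package>` works.
root = Path(tempfile.mkdtemp())
pkg = root / "demo_pkg"
pkg.mkdir()
(pkg / "__init__.py").write_text("")
(pkg / "runner.py").write_text("def main():\n    print('runner main called')\n")
(pkg / "__main__.py").write_text(textwrap.dedent("""\
    # Executed by `python -m demo_pkg`; delegates to the CLI entry point.
    from demo_pkg.runner import main
    main()
"""))

out = subprocess.run([sys.executable, "-m", "demo_pkg"],
                     cwd=root, capture_output=True, text=True)
print(out.stdout.strip())  # -> runner main called
```

Without such a file, Python raises exactly the "'behavex' is a package and cannot be directly executed" error reported further below.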
Describe the bug
I ran behavex -t @mytag --log-capture --parallel-processes 20 --parallel-scheme feature -o AllureReports
I had about 45 features tagged @mytag; when execution was done, 7 out of 45 had failed. I executed the 7 failed ones individually and they passed.
It also takes a long time to generate the reports. Any suggestions?
I have specified a path to my features in behave.ini, but BehaveX ignores this. I'd like to give BehaveX a directory or path where my features reside.
I'm trying to use Behavex to concurrently execute several different feature files. To identify the feature files I want to run, I've used tags. However, whenever I try to run Behavex it almost instantly exits (I don't even reach my before_all).
This is the command I am using: behavex -texample -ttest --parallel-processes 2 --parallel-scheme feature
Where I have two separate feature files tagged with @example and @test respectively.
The problem only occurs when I specify more than 1 tag in the command.
E.g. behavex -texample --parallel-processes 2 --parallel-scheme feature
runs fine for me.
I'm using Windows 10 with Python 3.10.0, behave 1.2.6 and behavex 1.6.0
Describe the bug
The Scenario versus ScenarioOutline detection logic in behavex/runner.py is extremely brittle because it relies on the English keyword "Scenario". This approach will not work in any language other than English. In addition, Gherkin by now supports multiple aliases for Scenario (even in English). Therefore, it is better to check isinstance(scenario, ScenarioOutline) instead of checking the keyword, like:
# -- FILE: behavex/runner.py
from behave.model import ScenarioOutline  # ADDED
...
def create_scenario_line_references(features):
    ...
    for feature in features:
        ...
        for scenario in feature.scenarios:
            # ORIG: if scenario.keyword == u'Scenario':  # -- PROBLEM-POINT was here
            if not isinstance(scenario, ScenarioOutline):  # NEW SOLUTION HERE
                feature_lines[scenario.name] = scenario.line
            else:
                ...
To Reproduce
Steps to reproduce the behavior:
Run behavex features against behave/tools/test-features/french.feature (which uses the keyword "Scénario", with accent). This fails with:
File "/.../behavex/runner.py", line 2xx, in create_scenario_line_references
    for scenario_multiline in scenario.scenarios:
AttributeError: 'Scenario' object has no attribute 'scenarios'
Expected behavior
Scenario detection logic should be independent of keywords and should work for any language.
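The robustness gain from the isinstance check can be illustrated with stand-in classes mirroring behave.model's inheritance (stand-ins so behave itself isn't required to run this):

```python
# Stand-in classes mirroring behave.model's Scenario/ScenarioOutline hierarchy.
class Scenario:
    def __init__(self, keyword, name, line):
        self.keyword = keyword
        self.name = name
        self.line = line

class ScenarioOutline(Scenario):
    def __init__(self, keyword, name, line, scenarios):
        super().__init__(keyword, name, line)
        self.scenarios = scenarios  # generated sub-scenarios

items = [
    Scenario('Scénario', 'connexion', 12),  # French keyword, not 'Scenario'
    ScenarioOutline('Scenario Outline', 'login', 20,
                    [Scenario('Scenario', 'login -- @1.1', 22)]),
]

feature_lines = {}
for scenario in items:
    # A keyword check (scenario.keyword == u'Scenario') would misclassify the
    # French scenario as an outline and hit the AttributeError above;
    # the isinstance check classifies it correctly.
    if not isinstance(scenario, ScenarioOutline):
        feature_lines[scenario.name] = scenario.line
    else:
        for sub in scenario.scenarios:
            feature_lines[sub.name] = sub.line

print(feature_lines)  # {'connexion': 12, 'login -- @1.1': 22}
```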
Version Info:
Describe the bug
The runner tries to run feature files that were not provided as command-line args (but only reside in the same directory).
RELATED TO: #80
To Reproduce
Steps to reproduce the behavior:
0. Instrument utils.should_feature_be_run() by adding a print statement:
def should_feature_be_run(path_feature):
    print("XXX should_feature_be_run: %s" % path_feature)
    ...  # ORIGINAL CODE is here.
1. Run behavex features/runner.*.feature (from the behave repo). The output contains lines like:
XXX: .../features/runner.<SOMETHING>.feature  (OK)
XXX: .../features/step_dialect.generic_steps.feature  (not matching features/runner.*)
Expected behavior
Only try files/paths that were provided as command-line args. Otherwise, CPU is wasted unnecessarily.
Version Info:
My Behave framework uses Python 3.10 and behave 1.2.6
I have UI tests in one feature file with 10 scenarios.
The command I run is:
behavex -t @{my_tag} --parallel-processes 10 --parallel-scheme scenario --no_capture -D browser=chrome -D vault_key={vault_key} -D project_path={project_path}
But the scenarios are still executed sequentially rather than concurrently; only one Chrome browser window opens at a time.
Thanks @hrcorval for developing this amazing module for behave.
I had been needing parallel execution in behave for ages.
I tried integrating behavex into a boilerplate framework, which might be of some help; let me know your thoughts!
https://github.com/salunkhe-ravi/behavex-boilerplate-framework
Please add some documentation (an FAQ?) explaining the difference between BehaveX and behave-parallel (e.g. https://github.com/hugeinc/behave-parallel or one of its other forks).
Also, a "When you should use BehaveX" section would be useful.
Hello,
Could anyone help with taking screenshots and attaching them to the BehaveX auto-generated HTML report?
My scenario: whenever a step fails, I need to take a screenshot and place it in the HTML report. Currently I can take a screenshot for each and every step, but I'm facing a couple of issues.
My environment.py is:
def after_step(context, step):
    print(step)
    print("----after step---")
    evidence_path = './output/outputs/logs/test.png'
    context.driver.save_screenshot(evidence_path)
    print("stored screenshot successfully")
My POC in Python was successful along with this great plugin; I'm just blocked by this one issue. It would be very helpful if you could assist with it.
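In the meantime, a sketch of a failure-only variant of the hook above. It assumes a Selenium-style context.driver and behave's convention that a failed step's status compares equal to 'failed'; the output path and naming scheme are illustrative, and whether the HTML report picks the file up depends on behavex's evidence conventions, which are not verified here:

```python
import os

def after_step(context, step):
    # In behave 1.2.6 a failed step's status compares equal to 'failed'.
    if step.status == 'failed':
        logs_dir = os.path.join('output', 'outputs', 'logs')
        os.makedirs(logs_dir, exist_ok=True)
        # One file per step name, so parallel scenarios are less likely to
        # overwrite each other's evidence.
        safe_name = step.name.replace(' ', '_')[:60]
        evidence_path = os.path.join(logs_dir, safe_name + '.png')
        context.driver.save_screenshot(evidence_path)
```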
Running behavex -t @abc --parallel-processes 3 --parallel-scheme feature does not generate reports at the feature level.
The BehaveX report does not show test case execution at the feature level. Is there a way to report total cases by feature?
My feature files contain a lot of workflows, and each feature has 10 scenarios. Instead of generating a report entry for every scenario, I just want one test case reported as passed per feature.
Is there a way to do this using behavex?
failures.txt contains one line:
features/Create CHG/Different Tabs for different Change Categories.feature:14
Calling behavex -rf treats the spaces as separating different tests.
C:\Source\Repos\jira-automated-testing>behavex -rf
|--------------------| ------------------------------------------------------------|
|ENV. VARIABLE | VALUE |
|--------------------| ------------------------------------------------------------|
|HOME | C:\Source\Repos\jira-automated-testing |
|CONFIG | C:\Program Files\Python310\Lib\site-packages\behavex\conf_behavex.cfg|
|OUTPUT | C:\Source\Repos\jira-automated-testing\output |
|TAGS | None |
|PARALLEL_SCHEME | scenario |
|PARALLEL_PROCESSES | 1 |
|TEMP | C:\Source\Repos\jira-automated-testing\output\temp |
|LOGS | C:\Source\Repos\jira-automated-testing\output\outputs\logs |
|LOGGING_LEVEL | INFO |
|--------------------| ------------------------------------------------------------|
There are no failing test scenarios to run.
C:\Source\Repos\jira-automated-testing>behavex -rf
|--------------------| ------------------------------------------------------------|
|ENV. VARIABLE | VALUE |
|--------------------| ------------------------------------------------------------|
|HOME | C:\Source\Repos\jira-automated-testing |
|CONFIG | C:\Program Files\Python310\Lib\site-packages\behavex\conf_behavex.cfg|
|OUTPUT | C:\Source\Repos\jira-automated-testing\output |
|TAGS | None |
|PARALLEL_SCHEME | scenario |
|PARALLEL_PROCESSES | 1 |
|TEMP | C:\Source\Repos\jira-automated-testing\output\temp |
|LOGS | C:\Source\Repos\jira-automated-testing\output\outputs\logs |
|LOGGING_LEVEL | INFO |
|--------------------| ------------------------------------------------------------|
The path "C:\Source\Repos\jira-automated-testing\features\Create" was not found.
'15:47:39 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing\features\Create" was not found.
'
The path "C:\Source\Repos\jira-automated-testing\CHG\Different" was not found.
'15:47:39 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing\CHG\Different" was not found.
'
The path "C:\Source\Repos\jira-automated-testing\Tabs" was not found.
'15:47:39 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing\Tabs" was not found.
'
The path "C:\Source\Repos\jira-automated-testing\for" was not found.
'15:47:39 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing\for" was not found.
'
The path "C:\Source\Repos\jira-automated-testing\different" was not found.
'15:47:39 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing\different" was not found.
'
The path "C:\Source\Repos\jira-automated-testing\Change" was not found.
'15:47:39 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing\Change" was not found.
'
The path "C:\Source\Repos\jira-automated-testing\Categories.feature" was not found.
'15:47:39 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing\Categories.feature" was not found.
'
InvalidFilenameError: C:/Source/Repos/jira-automated-testing/features/Create
Exit code: 0
C:\Source\Repos\jira-automated-testing>behavex -rf
|--------------------| ------------------------------------------------------------|
|ENV. VARIABLE | VALUE |
|--------------------| ------------------------------------------------------------|
|HOME | C:\Source\Repos\jira-automated-testing |
|CONFIG | C:\Program Files\Python310\Lib\site-packages\behavex\conf_behavex.cfg|
|OUTPUT | C:\Source\Repos\jira-automated-testing\output |
|TAGS | None |
|PARALLEL_SCHEME | scenario |
|PARALLEL_PROCESSES | 1 |
|TEMP | C:\Source\Repos\jira-automated-testing\output\temp |
|LOGS | C:\Source\Repos\jira-automated-testing\output\outputs\logs |
|LOGGING_LEVEL | INFO |
|--------------------| ------------------------------------------------------------|
The path "C:\Source\Repos\jira-automated-testing"features\Create" was not found.
'15:48:16 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing"features\Create" was not found.
'
The path "C:\Source\Repos\jira-automated-testing\CHG\Different" was not found.
'15:48:16 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing\CHG\Different" was not found.
'
The path "C:\Source\Repos\jira-automated-testing\Tabs" was not found.
'15:48:16 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing\Tabs" was not found.
'
The path "C:\Source\Repos\jira-automated-testing\for" was not found.
'15:48:16 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing\for" was not found.
'
The path "C:\Source\Repos\jira-automated-testing\different" was not found.
'15:48:16 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing\different" was not found.
'
The path "C:\Source\Repos\jira-automated-testing\Change" was not found.
'15:48:16 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing\Change" was not found.
'
The path "C:\Source\Repos\jira-automated-testing\Categories.feature" was not found.
'15:48:16 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing\Categories.feature" was not found.
'
ConfigError: No steps directory in 'C:\Source\Repos\jira-automated-testing\"features\Create'
Exit code: 0
C:\Source\Repos\jira-automated-testing>
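The splitting failure can be reproduced in isolation. Whitespace-splitting the stored line yields exactly the seven bogus paths reported in the logs above, while splitting on newlines keeps the path intact:

```python
# The line behavex stores in failures.txt for the failed scenario:
content = 'features/Create CHG/Different Tabs for different Change Categories.feature:14\n'

broken = content.split()  # whitespace split: produces the bogus path fragments
intact = [line for line in content.splitlines() if line.strip()]  # newline split

print(len(broken))  # 7 fragments, matching the seven "was not found" paths
print(intact[0])
```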
The execution logs and screenshots are not being displayed when scenario outline description contains quotes.
It would be great to be able to rerun a failed test or at least be able to run the failed tests and incorporate it into the existing log.
Is your feature request related to a problem? Please describe.
As a user with feature files that have very different durations, a naive distribution across parallel branches can result in sub-optimal total runtime. This can happen if larger features are left to the end of a run causing one of the workers to lag behind after all the others have completed.
By sorting features by a rough estimate of size, the larger features can be executed first, leading to a more even distribution and more efficient use of runtime; ideally as close to total duration / processes as possible.
Describe the solution you'd like
Based on a brief reading of the code, it seems like merely sorting the features list before starting execution would be enough to get this working. Ideally the mechanism for sorting the features would be easily pluggable by user code in their environment.py. This would allow users to apply whatever heuristic makes sense for their environment.
Describe alternatives you've considered
I have in the past managed this optimization by hand, external to the behave process. However, behavex seems perfectly situated to provide this value in a more cohesive way.
Additional context
It would be good to display the information about wall time and efficiency in the HTML report as well.
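The ordering described above can be sketched in a few lines; the duration map here is hypothetical (real weights might come from previous run timings, scenario counts, or plain file size):

```python
def order_features(paths, weight):
    # Heaviest features first, so no worker is left finishing a large feature
    # alone after the others have gone idle.
    return sorted(paths, key=weight, reverse=True)

# Hypothetical per-feature durations in seconds.
durations = {'checkout.feature': 300, 'search.feature': 120, 'login.feature': 10}
print(order_features(durations, durations.get))
# -> ['checkout.feature', 'search.feature', 'login.feature']
```

Making `weight` a parameter is what would let user code plug in its own heuristic.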
Describe the bug
When invoking behavex with --include option followed by a feature file name, the following error occurred:
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Like invoking original behave command, when --include option followed by a feature file name are given, behavex should run the given feature file successfully.
Desktop (please complete the following information):
Additional context
N/A
Describe the bug
After installing behavex I attempted to execute per the documented command (python -m behavex -t @example --parallel-processes 2 --parallel-scheme scenario) and received the error "No module named behavex.main; 'behavex' is a package and cannot be directly executed".
I attempted to run just the behavex command (python -m behavex) and received the same error. Please advise how I can correct this issue.
I verified the install of behavex was successful (see below):
**Note: all personal/company data has been removed
Microsoft Windows [Version 10.0.19042.1706]
(c) Microsoft Corporation. All rights reserved.
C:\Program Files\Python380\Scripts>python -m pip install behavex
Defaulting to user installation because normal site-packages is not writeable
Looking in indexes: https://*****/artifactory/api/pypi/pypi-dev/simple
Collecting behavex
Downloading https://******/artifactory/api/pypi/pypi-dev/packages/packages/a7/eb/3b1655d5fcc45f68897f1ddf3b384f96217de4f9e7a344ee11cf9ba9b8cb/behavex-1.5.10.tar.gz (467 kB)
|████████████████████████████████| 467 kB 3.3 MB/s
Preparing metadata (setup.py) ... done
Requirement already satisfied: behave==1.2.6 in c:\users*\appdata\roaming\python\python38\site-packages (from behavex) (1.2.6)
Requirement already satisfied: jinja2 in c:\users*\appdata\roaming\python\python38\site-packages (from behavex) (3.0.3)
Collecting configobj
Downloading https://******/artifactory/api/pypi/pypi-dev/64/61/079eb60459c44929e684fa7d9e2fdca403f67d64dd9dbac27296be2e0fab/configobj-5.0.6.tar.gz (33 kB)
Preparing metadata (setup.py) ... done
Collecting htmlmin
Downloading https://******/artifactory/api/pypi/pypi-dev/b3/e7/fcd59e12169de19f0131ff2812077f964c6b960e7c09804d30a7bf2ab461/htmlmin-0.1.12.tar.gz (19 kB)
Preparing metadata (setup.py) ... done
Collecting csscompressor
Downloading https://*******/artifactory/api/pypi/pypi-dev/packages/packages/f1/2a/8c3ac3d8bc94e6de8d7ae270bb5bc437b210bb9d6d9e46630c98f4abd20c/csscompressor-0.9.5.tar.gz (237 kB)
|████████████████████████████████| 237 kB 6.4 MB/s
Preparing metadata (setup.py) ... done
Requirement already satisfied: parse-type>=0.4.2 in c:\users*\appdata\roaming\python\python38\site-packages (from behave==1.2.6->behavex) (0.6.0)
Requirement already satisfied: six>=1.11 in c:\users*\appdata\roaming\python\python38\site-packages (from behave==1.2.6->behavex) (1.16.0)
Requirement already satisfied: parse>=1.8.2 in c:\users*\appdata\roaming\python\python38\site-packages (from behave==1.2.6->behavex) (1.19.0)
Requirement already satisfied: MarkupSafe>=2.0 in c:\users***\appdata\roaming\python\python38\site-packages (from jinja2->behavex) (2.1.0)
Using legacy 'setup.py install' for behavex, since package 'wheel' is not installed.
Using legacy 'setup.py install' for configobj, since package 'wheel' is not installed.
Using legacy 'setup.py install' for csscompressor, since package 'wheel' is not installed.
Using legacy 'setup.py install' for htmlmin, since package 'wheel' is not installed.
Installing collected packages: htmlmin, csscompressor, configobj, behavex
Running setup.py install for htmlmin ... done
Running setup.py install for csscompressor ... done
Running setup.py install for configobj ... done
Running setup.py install for behavex ... done
Successfully installed behavex-1.5.10 configobj-5.0.6 csscompressor-0.9.5 htmlmin-0.1.12
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Expected behave feature files and associated code to be executed
Desktop (please complete the following information):
Questions
With parallel processes = 4, I want to run 4 browsers in parallel at a time.
Sometimes features containing a single quote are not escaped correctly when the report is generated.
var scatterDataSteps = {
type: 'scatter',
data: {
datasets: [{
labels: ['we visit google','it should have a searchbox','it should have a "Google Search" button',
** 'it should have a "I'm Feeling Lucky" button' **
,],
label: 'Passed',
data: [{ x: 1, y: 0.0 },{ x: 1, y: 0.0 },{ x: 1, y: 0.0 },{ x: 1, y: 0.0 },],
backgroundColor: green
},{
labels: [],
label: 'Failed',
data: [],
backgroundColor: red
}],
},
This causes an invalid script, as the unpaired quotes break the JS code; in the UI we can neither expand the feature table nor open the metrics.
The problem happens intermittently; a video is attached.
To Reproduce
Steps to reproduce the behavior:
The feature table and metrics are not working because of this
Report working correctly until sec 28
https://user-images.githubusercontent.com/18053028/163697624-4ec28521-209f-4126-9346-099e10d121dc.mp4
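One way the escaping could be fixed (a sketch, not behavex's actual template code): serialize the label list with json.dumps, which emits valid JavaScript string literals regardless of embedded single or double quotes:

```python
import json

# Step names containing both double quotes and an apostrophe, as in the
# broken report snippet above.
labels = ['it should have a "Google Search" button',
          "it should have a 'I'm Feeling Lucky' button"]

# json.dumps produces double-quoted, properly escaped literals, so the
# generated <script> stays syntactically valid.
print('labels: ' + json.dumps(labels) + ',')
```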
Describe the bug
I have a directory that has different sub folders where one of them has features and tests related to Behave.
Using the Behave package, I can run my feature pointing to my subfolder with the command below:
behave behaveSample/features --tags=dropdownTest
But when I try to do the same using the BehaveX wrapper, it does not find my features and throws an "unrecognized arguments" error.
I couldn't find anything in the documentation about how to set the path to the features folder via the BehaveX CLI.
Please help me with this.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
BehaveX should be able to find the features directory located inside subdirectories, or at least there should be a way to specify the directory path like in behave.
Screenshots
With behave where features are found:
With behaveX where features are not found:
Desktop (please complete the following information):
In my application under test it is impossible to run two browser sessions with the same user (for security reasons), so to run tests in parallel I have to use a pool of users.
The usual solution is a shared pool; however, that does not work with the behave context. Is there any way to use common variables across parallel test executions in behavex?
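A sketch of a cross-process user pool. Because behavex runs scenarios in separate processes, shared in-memory state (including the behave context) won't survive; atomic lock files can coordinate instead. The user names and lock directory are hypothetical:

```python
import os
import tempfile

USERS = ['user1', 'user2', 'user3']
LOCK_DIR = os.path.join(tempfile.gettempdir(), 'behavex_user_locks')

def acquire_user():
    """Claim the first free user by atomically creating its lock file."""
    os.makedirs(LOCK_DIR, exist_ok=True)
    for user in USERS:
        lock = os.path.join(LOCK_DIR, user + '.lock')
        try:
            fd = os.open(lock, os.O_CREAT | os.O_EXCL)  # atomic across processes
            os.close(fd)
            return user
        except FileExistsError:
            continue  # already claimed by another process
    raise RuntimeError('no free user in the pool')

def release_user(user):
    os.remove(os.path.join(LOCK_DIR, user + '.lock'))
```

A hook in environment.py could call acquire_user() in before_scenario and release_user() in after_scenario; stale lock files left by killed runs would still need cleanup, e.g. at the start of a run.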
Describe the bug
I'm trying to use behavex to run tests in parallel but I've found that before_tag
hook in my environment.py
is not invoked.
To Reproduce
Steps to reproduce the behavior:
1. Define a before_tag hook in environment.py.
2. Use the context in that before_tag hook.
Expected behavior
The before_tag hook is executed.
Desktop (please complete the following information):
Additional context
The problem is that there is no invocation of before_tag in behavex/behavex/environment.py (line 39 in 04102b1).
Maybe there could be a more generic invocation that always calls through to the runner's hook, with logic deciding when behavex's own hooks need to be invoked by checking whether the hook is a "before" or "after" hook.
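The generic invocation suggested above could look roughly like this (names are illustrative, not behavex's actual internals): look up every behave hook by name on the wrapped environment module and forward the call, so hooks like before_tag are never silently dropped.

```python
def forward_hook(user_environment, hook_name, *args):
    """Forward a behave hook to the user's environment.py, if defined there."""
    hook = getattr(user_environment, hook_name, None)
    if callable(hook):
        hook(*args)

# Example with a stand-in "environment module":
import types

calls = []
env = types.SimpleNamespace(before_tag=lambda context, tag: calls.append(tag))

forward_hook(env, 'before_tag', None, 'mytag')  # invoked
forward_hook(env, 'after_tag', None, 'mytag')   # silently skipped: not defined
print(calls)  # ['mytag']
```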
There is a typo in the readme https://github.com/hrcorval/behavex#parallel-test-executions
In two of the example sections:
--parallel-schema
should be --parallel-scheme
Also in the dry run section https://github.com/hrcorval/behavex#additional-behavex-arguments
teh
should be the
:)
I use the command:
behavex -t @tag --parallel-processes 4 --parallel-scheme scenario -D market=DK --output-folder reports/DK
and in HTML or JSON response, I can't find the background steps.
Firstly, thanks for this!
I am testing port availability: I want to fetch new ports and somehow communicate that a specific port is no longer in the pool.
I am trying a few mechanisms to control this better; what do you recommend?
Hi,
We are running our automation inside docker.
All reports are working fine except the evidence.
The evidence is generated on the backend, but the link does not appear and does not point to the evidence file.
The same script works fine in a non-Docker environment.
Please help.
Hi,
Is there any chance in the future of having the HTML report as a formatter to be used with plain behave? Or can I use it that way already?
Thanks,
Luca
Is there a way to show the total number of features in the BehaveX report?
Describe the solution you'd like
Right now the BehaveX report shows total scenarios. Is there a way to also show the total features?
Is your feature request related to a problem? Please describe.
I tried to run behavex outside the test directory, and every time I got the message: "features" folder was not found in current path...
The -ip flag does not help with finding the path. Even manually setting the FEATURES_PATH environment variable does not help, because in the init file this variable is overwritten by: os.environ['FEATURES_PATH'] = os.path.join(os.getcwd(), 'features')
Describe the solution you'd like
It would be nice to set path to the feature directory in the command run, for example:
behavex directory_a/directory_b/features
Describe alternatives you've considered
Allow setting the FEATURES_PATH environment variable manually.
Additional context
No additional context
Can't install this
pip install behavex
results in
ERROR: Could not find a version that satisfies the requirement behavex (from versions: none)
ERROR: No matching distribution found for behavex
I was able to run behavex in my Jenkins job, but the job result always shows a "blue" icon. This is quite different from my previous setup, where a "blue" icon meant that 100% of the test cases succeeded; here it seems to mean only that the execution finished, with the actual results left to the report summary.
Anyway, my extra impression on this tool is:
Thanks,
Chun
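If behavex exits with a non-zero status when scenarios fail (worth verifying against your installed version), the Jenkins job only needs that status propagated for the build to turn red instead of blue. A minimal sketch of a hypothetical wrapper, with a stand-in command instead of the real behavex invocation:

```python
import subprocess
import sys

def run_suite(cmd):
    """Run the test command and return its exit code, so a Jenkins shell
    step that ends with this status is marked failed when tests fail.
    'cmd' would normally be ['behavex', '-t', '@tag', ...] (assumed here)."""
    rc = subprocess.call(cmd)
    print(f"runner exit code: {rc}")
    return rc
```

A wrapper script would end with `sys.exit(run_suite([...]))` so the shell step inherits the runner's status, which is what drives the Jenkins build result.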
Is your feature request related to a problem? Please describe.
We noticed that the feature file steps being executed are not shown on the console when we run behavex. I am wondering if I am missing some config or parameter to be passed along with behavex -t @staging_tests.
Describe the solution you'd like
when we run behavex -t @staging_tests1
or python -m behavex.runner -t @staging_tests1
we should see the feature file and the steps being executed on the console (macOS/Ubuntu), similar to behave, e.g.:
% behave -t @staging_tests
@staging_tests
Feature: ABC
As a user
....
Scenario: Test
Given I perform x
When I do y
Then I see z
1 feature passed, 0 failed, 27 skipped
1 scenario passed, 0 failed, 115 skipped
10 steps passed, 0 failed, 611 skipped, 0 undefined
Took 0m1.402s
Currently we only see
+ python -m behavex.runner -t @staging_tests --parallel-processes 3 --logging-level=DEBUG
|--------------------| ------------------------------------------------------------|
|ENV. VARIABLE | VALUE |
|--------------------| ------------------------------------------------------------|
|HOME | /var/lib/jenkins |
|CONFIG | /var/lib/jenkins/.local/lib/python3.8/site-packages/behavex/conf_behavex.cfg|
|OUTPUT | /var/lib/jenkins/workspace/API/tests/behave_tests/output|
|TAGS | @staging_tests1 |
|PARALLEL_SCHEME | scenario |
|PARALLEL_PROCESSES | 3 |
|TEMP | /var/lib/jenkins/workspace/API/tests/behave_tests/output/temp|
|LOGS | /var/lib/jenkins/workspace/API/tests/behave_tests/output/outputs/logs|
|LOGGING_LEVEL | None |
|--------------------| ------------------------------------------------------------|
************************************************************
Running parallel scenarios
************************************************************
Total execution time: 0.8s
Describe alternatives you've considered
I have considered printing the step definitions on the console at INFO level, but I did not see them displayed on the Jenkins Ubuntu agent. I need to check whether some other parameter is required for them to be displayed there.
Apologies if I have missed anything or if this already exists. Thanks for implementing behavex and for looking into this.
Describe the bug
def after_scenario(context, scenario):
    context.browser.save_screenshot(context.evidence_path)
context doesn't have an evidence_path
AttributeError("'Context' object has no attribute 'evidence_path'")
behavex 1.5.3
Windows 10
python 3.10.2
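behavex injects evidence_path into the Behave context, so a hook that references it raises this AttributeError when run under plain behave, or before the wrapper has set it. A defensive sketch, assuming a Selenium-style context.browser (both names are taken from the report, not guaranteed by behavex):

```python
import os

def after_scenario(context, scenario):
    # behavex injects 'evidence_path' into the context; plain behave does not,
    # so fall back to a local directory instead of raising AttributeError.
    evidence_dir = getattr(context, 'evidence_path', None) or 'evidence'
    os.makedirs(evidence_dir, exist_ok=True)
    target = os.path.join(evidence_dir, f'{scenario.name}.png')
    # The browser attribute may also be absent (e.g. for API-only scenarios).
    if getattr(context, 'browser', None):
        context.browser.save_screenshot(target)
```

Using getattr with a default keeps the same environment.py usable under both runners.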
Describe the bug
We found that tests will be skipped with certain behavex configuration parameters when a scenario outline's Examples: section contains additional text afterwards, e.g. Examples: blah. This led to some tests being skipped without us realizing for a while.
To Reproduce
Example scenario that will be skipped under certain conditions:
Feature: Test
  Scenario Outline: scenario 1
    Given <something>
    Examples: blah
      | something |
      | something_a |
The scenario will be skipped if these conditions are true:
- the parallel scheme is scenario
With behavex, these tests won't be skipped if:
- the parallel scheme is feature instead of scenario
- there is no additional text after Examples:
Expected behavior
It's unclear whether this is invalid syntax; however, these tests run successfully with behave.
As behavex is supposed to be a wrapper around behave, I think these tests should be allowed to run. Otherwise, I would prefer the tests to fail rather than be skipped.
Desktop (please complete the following information):
@hrcorval @balaji2711 @Remox129 @anibalinn
The behavex command is not working as per the documentation...please advise.
I tried all the possibilities:
behavex -ip ./tests/wms/autotest/web/features -t InventorySyncVarianceConfiguration_ICN --color -o .
behavex -t InventorySyncVarianceConfiguration_ICN --color -o .
behavex -t @InventorySyncVarianceConfiguration_ICN --color -o .
behavex -t @InventorySyncVarianceConfiguration_ICN
Describe the bug
Currently, you cannot install this package on Python 3.9.
The reason is the version constraint in setup.py, where Python 3.9 is explicitly disabled.
To Reproduce
Steps to reproduce the behavior:
Install behavex with pip, which will fail (example for a UNIX platform with bash as the shell):
$ virtualenv -p python3.9 .venv_py39
$ source .venv_py39/bin/activate
$ pip install behavex
…
ERROR: Package 'behavex' requires a different Python: 3.9.x not in '!=3.9.x,>=3'
Expected behavior
Normally, I would expect Python 3.9 (and Python 3.10, …) to be supported.
Desktop (please complete the following information):
Hi,
Good morning!! Hope you are doing well!!
I have created a framework where I need to execute my test cases in parallel. I am getting wrong screenshots in my output folder, i.e., under both test cases one single screenshot file is attached for both the Login1 and Login2 features.
Login1.feature
  Examples:
    | username | password |
    | standard_user1 | secret_sauce |
    | standard_user | secret_sauce |
Login2.feature
  Examples:
    | username | password |
    | standard_user2 | secret_sauce |
    | standard_user | secret_sauce |
Note: the test cases below should fail and their respective screenshots should be published, but I am getting the same screenshot [standard_user1] attached in both files:
Login1.feature -->standard_user1
Login2.feature -->standard_user2
Output
I have shared my demo project, please let me know if you could help on this
https://github.com/prsnth89/PythonBehaveTest.git
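One plausible cause is that the parallel workers write their screenshots to the same filename, so the last writer wins. A workaround sketch (a hypothetical helper, not part of behavex) that makes each file unique per process and moment:

```python
import os
import time

def unique_screenshot_path(evidence_dir, scenario_name):
    """Build a per-process, per-moment screenshot path so parallel workers
    (behavex --parallel-processes) cannot overwrite each other's files.
    Hypothetical helper, not provided by behavex itself."""
    # Keep the filename filesystem-safe.
    safe_name = "".join(c if c.isalnum() else "_" for c in scenario_name)
    # PID separates workers; a millisecond timestamp separates retries.
    filename = f"{safe_name}_{os.getpid()}_{int(time.time() * 1000)}.png"
    return os.path.join(evidence_dir, filename)
```

Calling this in after_scenario (or after_step) instead of a fixed name should keep Login1 and Login2 evidence apart even when both run concurrently.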
I am running behave from Python instead of the command line. Behave's implementation has a main method that I imported, with arguments such as:
behave.__main__.main(args=['-t', '@Tag1', '--no-skipped'])
Is there a way I can add arguments to behaveX main module so I can use it from python runner?
Also, I don't see the --no-skipped argument in the list of supported arguments.
"There might be more arguments that can be supported, it is just a matter of adapting the wrapper implementation to use these."
Is it possible to add this argument? If so, how?
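Until behavex documents a programmatic entry point, one option is to launch it the same way the console output above shows, via python -m behavex.runner, as a subprocess. This sketch only assembles and runs the command line; whether behavex forwards --no-skipped on to behave is exactly the open question in this issue, so any extra flags are passed through as assumptions:

```python
import subprocess
import sys

def build_behavex_cmd(tags, extra_args=()):
    """Assemble a behavex command line. Note each token must be its own
    list element ('-t', '@Tag1'), not a single string '-t @Tag1'."""
    cmd = [sys.executable, "-m", "behavex.runner", "-t", tags]
    cmd.extend(extra_args)
    return cmd

def run_behavex(tags, extra_args=()):
    # Hypothetical runner helper: returns the behavex process exit code.
    return subprocess.call(build_behavex_cmd(tags, extra_args))
```

For example, `run_behavex("@Tag1", ["--parallel-processes", "2"])` would launch behavex in a child process from any Python test runner.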
Hello,
First of all, I would like to thank you for developing this wonderful wrapper :) It saves a lot of time. Kudos to your team.
With help of context.evidence_path, I'm able to attach the screen-shot in the report by using the below code -
import os

def after_step(context, step):
    if data["testConfig"]["execute"] == "Ui":
        if step.status == "failed":
            # os.path.join avoids the malformed path produced by
            # concatenating './' into the evidence directory string.
            context.driver.save_screenshot(
                os.path.join(context.evidence_path, step.name + '.png'))
Could anyone please let me know what other evidence can be shown in the test report besides a screenshot? To add further, I'm very curious about the third icon (please refer to the attached screenshot).
Regards,
Balaji.
There is no method called 'get_string_hash' in the "report_utils" file.
Describe the bug
If we have a step which is not yet defined, the status of that step should be "undefined" or "pending", and all steps following it should have the status "skipped"; this works as expected. The status of that scenario should then be "undefined", "pending", or "skipped" (expected behavior); however, the actual status of such a scenario is "failed".
To Reproduce
Steps to reproduce the behavior:
Expected behavior
In report the status of this scenario should be "undefined" or "pending" or "skipped".
Desktop (please complete the following information):
Is your feature request related to a problem? Please describe.
Right now we run our tests in parallel mode, and we need to upload the installer once and share the installer ID with the other processes. I put the upload code in the before_all method, but before_all is triggered for every process, so every process tries to upload the installer, which sometimes results in an error and fails the run.
Describe the solution you'd like
I would love a method that gets executed only once, before all the processes start. I could use it to upload the installer and share the ID with all the processes, just like TestNG's before_suite.
Describe alternatives you've considered
I tried handling this in code, but it is not working: I tried locks, synchronized methods, and threading concepts, but the upload is still duplicated (threading primitives do not coordinate across separate processes).
Additional context
None
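Since each parallel behavex process runs its own before_all, threading locks cannot help; they only coordinate threads inside a single process. A cross-process alternative is to let whichever process wins an atomic lock-file creation do the one-time upload and publish the result for the others to read. A stdlib-only sketch, where the file names and the do_upload callable are assumptions:

```python
import json
import os
import time

LOCK_FILE = "installer.lock"      # assumed shared working directory
RESULT_FILE = "installer_id.json"

def run_once(do_upload, timeout=60):
    """Ensure 'do_upload' runs in exactly one of the parallel processes.
    The winner of an atomic O_CREAT|O_EXCL file creation performs the
    upload and writes the result; losers poll until the result appears."""
    try:
        fd = os.open(LOCK_FILE, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
    except FileExistsError:
        # Another process owns the upload; wait for its published result.
        deadline = time.time() + timeout
        while not os.path.exists(RESULT_FILE):
            if time.time() > deadline:
                raise TimeoutError("installer upload result never appeared")
            time.sleep(0.2)
        with open(RESULT_FILE) as f:
            return json.load(f)["installer_id"]
    installer_id = do_upload()
    with open(RESULT_FILE, "w") as f:
        json.dump({"installer_id": installer_id}, f)
    return installer_id
```

Calling run_once(upload_installer) from before_all in each process would then upload exactly once; a production version would also need to clean up stale lock files from crashed runs.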
The tests would be much more efficient if the browser could be kept open between runs; parallel execution is slow partly because the browser needs to log in again each time.
I was trying to embed a screenshot I took into the report, but with no luck. Is it possible? If yes, how? Could you update the readme file with this information?
Is your feature request related to a problem? Please describe.
Basically, the evidence path for every scenario is created based on the scenario description, but this generates conflicts when two scenarios have the same name.
Describe the solution you'd like
We should append the feature description when calculating the evidence_path
Describe alternatives you've considered
N/A
Additional context
Update the create_log_path method calls to concatenate scenario and feature descriptions
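A sketch of the proposed fix (a hypothetical helper, not behavex's actual create_log_path): derive the evidence directory from both the feature and the scenario name, so same-named scenarios in different features no longer collide:

```python
import hashlib
import os

def evidence_path_for(output_dir, feature_name, scenario_name):
    """Combine feature and scenario names so two scenarios that share a
    name in different features get distinct paths; a short hash keeps the
    path stable across runs and filesystem-safe. Hypothetical sketch."""
    key = f"{feature_name}/{scenario_name}".encode("utf-8")
    digest = hashlib.sha1(key).hexdigest()[:8]
    safe = "".join(c if c.isalnum() else "_" for c in scenario_name)
    return os.path.join(output_dir, f"{safe}_{digest}")
```

The scenario name stays in the path for readability, while the hash carries the disambiguating feature information.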
For a step like the one below,
Then The text is valid
  """
  This text is valid
  """
the report would show:
Given I am set up something
When I do something
Then The text is valid
This text is valid
Scenarios which were implemented as scenario outlines were not executed. Their status in the report is "Automated Not Run".