hrcorval / behavex

BDD test wrapper on top of Behave for parallel test executions and more!

Home Page: https://github.com/hrcorval/behavex

License: MIT License

Python 100.00%
bdd python agile-testing parallel testing automated-testing behave behavex agile gherkin

behavex's Introduction

BehaveX

BehaveX is a test wrapper on top of Python Behave that provides additional capabilities to improve testing pipelines.

BehaveX can be used to build testing pipelines from scratch using the same Behave framework principles, or to expand the capabilities of Behave-based projects.

Features provided by BehaveX

  • Perform parallel test executions
    • Execute tests using multiple processes, either by feature or by scenario.
  • Get additional test execution reports
    • Generate friendly HTML reports and JSON reports that can be exported and integrated with third-party tools
  • Provide additional evidence as part of execution reports
    • Include any testing evidence by copying it into a predefined folder path associated with each scenario. This evidence will then be automatically included in the HTML report
  • Generate test logs per scenario
    • Any logs generated during test execution using the logging library will automatically be compiled into an individual log report for each scenario
  • Mute test scenarios in build servers
    • By just adding the @MUTE tag to test scenarios, they will be executed, but they will not be part of the JUnit reports
  • Generate metrics in HTML report for the executed test suite
    • Automation Rate, Pass Rate, and step execution counts and durations
  • Execute dry runs and see the full list of scenarios in the HTML report
    • This is an enhanced implementation of Behave's dry-run feature, allowing you to see the full list of scenarios in the HTML report without actually executing the tests
  • Re-execute failing test scenarios
    • Just add the @AUTORETRY tag to test scenarios; when the first execution fails, the scenario is immediately re-executed
    • Additionally, you can provide the wrapper with a list of previously failing scenarios, which will also be re-executed automatically

test execution report (screenshots)

Installing BehaveX

Execute the following command to install BehaveX with pip:

pip install behavex

Executing BehaveX

Execution is performed in the same way as when running Behave from the command line, but using the "behavex" command.

Examples:

Run scenarios tagged as TAG_1 but not TAG_2:

behavex -t @TAG_1 -t ~@TAG_2

Run scenarios tagged as TAG_1 or TAG_2:

behavex -t @TAG_1,@TAG_2

Run scenarios tagged as TAG_1, using 4 parallel processes:

behavex -t @TAG_1 --parallel-processes 4 --parallel-scheme scenario

Run scenarios located in the "features/features_folder_1" and "features/features_folder_2" folders, using 2 parallel processes:

behavex features/features_folder_1 features/features_folder_2 --parallel-processes 2

Run scenarios from the "features_folder_1/sample_feature.feature" feature file, using 2 parallel processes:

behavex features_folder_1/sample_feature.feature --parallel-processes 2

Run scenarios tagged as TAG_1 from the "features_folder_1/sample_feature.feature" feature file, using 2 parallel processes:

behavex features_folder_1/sample_feature.feature -t @TAG_1 --parallel-processes 2

Run scenarios located in the "features/feature_1" and "features/feature_2" folders, using 2 parallel processes:

behavex features/feature_1 features/feature_2 --parallel-processes 2

Run scenarios tagged as TAG_1, using 5 parallel processes executing a feature on each process:

behavex -t @TAG_1 --parallel-processes 5 --parallel-scheme feature

Perform a dry run of the scenarios tagged as TAG_1, and generate the HTML report:

behavex -t @TAG_1 --dry-run

Run scenarios tagged as TAG_1, generating the execution evidence into the "execution_evidence" folder (instead of the default "output" folder):

behavex -t @TAG_1 -o execution_evidence

Constraints

  • BehaveX is currently implemented on top of Behave v1.2.6, and not all Behave arguments are yet supported.
  • The parallel execution implementation is based on concurrent Behave processes. Therefore, any code in the before_all and after_all hooks in the environment.py module will be executed in each parallel process. The same applies to the before_feature and after_feature hooks when the parallel execution is set by scenario.
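
As an illustration of this constraint, here is a minimal environment.py sketch (the base_url value is hypothetical); each parallel Behave process runs its own copy of these hooks, so whatever they set up must be safe to repeat once per process:

# environment.py -- minimal sketch; values are placeholders
import logging

def before_all(context):
    # Under BehaveX parallel execution, this hook runs once per parallel process,
    # so the setup below must be safe to execute multiple times.
    logging.info("Initializing the test session for this process")
    context.base_url = "http://localhost:8000"  # hypothetical configuration value

def after_all(context):
    # Also executed once per parallel process.
    logging.info("Tearing down the test session for this process")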

Additional Comments

  • The JUnit reports have been replaced by the ones generated by the test wrapper, just to support muting test scenarios on build servers

Supported Behave arguments

  • no_color
  • color
  • define
  • exclude
  • include
  • no_snippets
  • no_capture
  • name
  • capture
  • no_capture_stderr
  • capture_stderr
  • no_logcapture
  • logcapture
  • logging_level
  • summary
  • quiet
  • stop
  • tags
  • tags-help

IMPORTANT: It is worth mentioning that some arguments, such as stop and color, do not apply when executing tests with more than one parallel process.

Also, more arguments could be supported; it is just a matter of extending the wrapper implementation to handle them.
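
For illustration, several of the supported arguments can be combined with the BehaveX-specific ones in a single command; the tag name and the user variable passed with -D (define) below are just placeholders:

behavex -t @TAG_1 -D environment=staging --logging-level=DEBUG --parallel-processes 4 --parallel-scheme scenario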

Specific arguments from BehaveX

  • output-folder (-o or --output-folder)
    • Specifies the output folder where execution reports will be generated (JUnit, HTML and JSON)
  • dry-run (-d or --dry-run)
    • Overwrites the existing Behave dry-run implementation
    • Performs a dry-run by listing the scenarios as part of the output reports
  • parallel-processes (--parallel-processes)
    • Specifies the number of parallel Behave processes
  • parallel-scheme (--parallel-scheme)
    • Performs the parallel test execution by [scenario|feature]

You can take a look at the provided examples (above in this documentation) to see how to use these arguments.

Parallel test executions

The implementation for running tests in parallel is based on concurrent executions of Behave instances in multiple processes.

As mentioned in the wrapper constraints, this approach implies that whatever you have in the Behave hooks in the environment.py module will be re-executed in every parallel process.

BehaveX is in charge of managing each parallel process and consolidating all the information into the execution reports.

Parallel test executions can be performed by feature or by scenario.

Examples:

behavex --parallel-processes 3

behavex -t @<TAG> --parallel-processes 3

behavex -t @<TAG> --parallel-processes 2 --parallel-scheme scenario

behavex -t @<TAG> --parallel-processes 5 --parallel-scheme feature

When the parallel-scheme is set by feature, all tests within each feature will be run sequentially.

Test execution reports

HTML report

This is a friendly test execution report that contains information related to test scenarios, execution status, execution evidence and metrics. A filter bar is also provided to filter scenarios by name, tag or status.

It should be available by default at the following path:

<output_folder>/report.html

JSON report

Contains information about test scenarios and execution status.

It should be available by default at the following path:

<output_folder>/report.json

This report simplifies integration with third-party tools by providing all test execution data in a format that can be easily parsed.
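
For instance, a small Python script can load the report and summarize the execution; note that the field names used below ("features", "scenarios", "name", "status") are assumptions about the report structure and may need to be adjusted to the actual report.json content:

import json

# Minimal sketch for post-processing the BehaveX JSON report.
# NOTE: the dictionary keys below are assumptions about the report schema.
with open("output/report.json", encoding="utf-8") as report_file:
    report = json.load(report_file)

for feature in report.get("features", []):
    for scenario in feature.get("scenarios", []):
        print(scenario.get("name"), "->", scenario.get("status"))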

JUnit report

The wrapper replaces the Behave JUnit reports in order to support parallel executions and muted test scenarios.

By default, there will be one JUnit file per feature, unless the parallel execution is performed by scenario, in which case there will be one JUnit file per scenario.

Reports are available by default at the following path:

<output_folder>/behave/*.xml

Attaching additional execution evidence to test report

It is considered a good practice to provide as much evidence as possible in test execution reports to properly identify the root cause of issues.

Any evidence file you generate when executing a test scenario can be stored in a folder path that the wrapper provides for each scenario.

The evidence folder path is automatically generated and stored in the "context.evidence_path" context variable. This variable is automatically updated by the wrapper before executing each scenario, and all files you copy into that path will be accessible from the HTML report, linked to the executed scenario.
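
For example, an after_scenario hook in environment.py can copy any generated file into that folder; the context.driver attribute and the "my_test_output.log" file name below are hypothetical and only illustrate the idea:

# environment.py -- minimal sketch for attaching evidence to the HTML report
import os
import shutil

def after_scenario(context, scenario):
    # context.evidence_path is set by the wrapper before each scenario.
    if hasattr(context, "driver"):  # hypothetical Selenium-like driver
        context.driver.save_screenshot(os.path.join(context.evidence_path, "final_state.png"))
    if os.path.exists("my_test_output.log"):  # hypothetical evidence file
        shutil.copy("my_test_output.log", context.evidence_path)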

Test logs per scenario

The HTML report provides test execution logs per scenario. Everything that is being logged using the logging library will be written into a test execution log file linked to the test scenario.
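
For example, a step implementation only needs to use the standard logging module for its messages to end up in the per-scenario log file; the step text below is illustrative:

# steps/sample_steps.py -- minimal sketch
import logging

from behave import when

@when('the user performs the sample action')
def step_perform_sample_action(context):
    # Messages logged here are compiled into the individual log report
    # for the scenario and linked from the HTML report.
    logging.info("Performing the sample action")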

Metrics

  • Automation Rate
  • Pass Rate
  • Steps execution counter and average execution time

All metrics are provided as part of the HTML report

Dry runs

The wrapper overrides the existing Behave dry-run implementation so that the outputs can be included in the wrapper reports.

The HTML report generated as part of the dry run can be used to share the scenarios specifications with any stakeholder.

Example:

behavex -t @TAG --dry-run

Muting test scenarios

Sometimes failing test scenarios need to keep being executed in all build server plans, but muted until the test or product fix is available.

Tests can be muted by adding the @MUTE tag to each test scenario. This will cause the wrapper to run the test, but the execution will not be reported in the JUnit reports. However, you will still see the execution information in the HTML report.
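
For example, a muted scenario just carries the tag (the scenario and step text below are illustrative):

@MUTE
Scenario: Checkout with an expired coupon
  Given the user has an expired coupon
  When the user applies it at checkout
  Then an error message is displayed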

What to do with failing scenarios?

@AUTORETRY tag

This tag can be used for flaky scenarios or when the testing infrastructure is unstable.

The @AUTORETRY tag can be applied to any scenario or feature, and it is used to automatically re-execute the test scenario when it fails.
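
For example, the tag can be placed on a single scenario (or on the whole feature); the scenario below is illustrative:

@AUTORETRY
Scenario: Place an order against an unstable backend
  Given the backend is reachable
  When an order is placed
  Then the order is confirmed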

Rerun all failed scenarios

Whenever you perform an automated test execution and there are failing scenarios, the failing_scenarios.txt file will be created in the execution output folder. This file allows you to run all failing scenarios again.

This can be done by executing the following command:

behavex -rf ./<OUTPUT_FOLDER>/failing_scenarios.txt

or

behavex --rerun-failures ./<OUTPUT_FOLDER>/failing_scenarios.txt

To prevent the re-execution from overwriting the previous test report, we suggest providing a different output folder, using the -o or --output-folder argument.

It is important to mention that this argument does not yet work with parallel test executions.
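
For example, a re-execution that keeps the original report intact could look like this (the output folder names are just placeholders):

behavex --rerun-failures ./output/failing_scenarios.txt -o output_retry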

Show Your Support

If you find this project helpful or interesting, we would appreciate it if you could give it a star (:star:). It's a simple way to show your support and let us know that you find value in our work.

By starring this repository, you help us gain visibility among other developers and contributors. It also serves as motivation for us to continue improving and maintaining this project.

Thank you in advance for your support! We truly appreciate it.

behavex's People

Contributors

anibalinn, balaji2711, hrcorval, ido-ran, remoyukoff


behavex's Issues

Sort features to optimize runtime

Is your feature request related to a problem? Please describe.
As a user with feature files that have very different durations, a naive distribution across parallel branches can result in sub-optimal total runtime. This can happen if larger features are left to the end of a run causing one of the workers to lag behind after all the others have completed.

By sorting features by rough estimate of size, the larger features can be executed first, leading to a more even distribution and a more efficient use of runtime; ideally as close to total duration / processes as possible.

Describe the solution you'd like
Based on a brief reading of the code, it seems like merely sorting the features list before starting execution would be enough to get this working. Ideally the mechanism for sorting the features would be easily pluggable by user code in their environment.py. This would allow users to apply whatever heuristic makes sense for their environment.

Describe alternatives you've considered
I have in the past managed this optimization by hand, external to the behave process. However, behavex seems perfectly situated to provide this value in a more cohesive way.

Additional context
It would be good to display the information about wall time and efficiency in the HTML report as well.

runner: Parses feature-files that are not provided

Describe the bug
The runner tries to run feature-files that were not provided as command-line args (but only reside in the same directory).

RELATED TO: #80

To Reproduce
Steps to reproduce the behavior:
0. Instrument utils.should_feature_be_run() by providing a print statement

def should_feature_be_run(path_feature):
    print("XXX should_feature_be_run: %s" % path_feature)
    ... # ORIGINAL CODE is here.
  1. Run behave features/runner.*.feature (from: behave repo)
  2. Observe the lines with XXX: ... like:
    GOOD_CASE: XXX: .../features/runner.<SOMETHING>.feature (OK)
    BAD_CASE: XXX: .../features/step_dialect.generic_steps.feature (not containing features/runner.)
    NOTE: All of the BAD_CASES were not passed as command-line args.

Expected behavior
Only try files/paths that were provided as command-line args.
Otherwise, you are wasting CPU unnecessarily.

Version Info:

  • behavex v2.0.2 (current HEAD of repository)

Additional context
Add any other context about the problem here.

How is this behavex being invoked in a Jenkins job?

I was able to run behavex in my Jenkins job, but the job results always show the "blue" icon. This is quite different from my previous runs: the "blue" icon used to mean the test cases 100% succeeded, but this time I guess it means the execution is 100% finished, and however the results look, they would be within the report summary.

Anyway, my extra impression on this tool is:

  1. It is much faster to have the test cases executed with behavex.
  2. I have seen extra test case failures during the run, which I don't understand, because the test cases I selected do not depend on each other. It seems like more frequent intermittent failures to me.

Thanks,

Chun

Behavex exits almost instantly when running parallel processes for tags

I'm trying to use Behavex to concurrently execute several different feature files. To identify the feature files I want to run, I've used tags. However, whenever I try to run Behavex it almost instantly exits (I don't even reach my before_all).

This is the command I am using: behavex -texample -ttest --parallel-processes 2 --prarallel-scheme feature
Where I have two separate feature files tagged with @example and @test respectively.
The problem only occurs when I specify more than 1 tag in the command.
E.g. behavex -texample --parallel-processes 2 --prarallel-scheme feature runs fine for me.

I'm using Windows 10 with Python 3.10.0, behave 1.2.6 and behavex 1.6.0

Not able to run the tests from root directory using behaveX which is possible using behave

Describe the bug
I have a directory that has different sub folders where one of them has features and tests related to Behave.

Using the Behave package, I can run my feature pointing to my sub folder with the command below:
behave behaveSample/features --tags=dropdownTest

But when I try to do the same using the behaveX wrapper, it does not find my features and throws an "unrecognized arguments" error.

I couldn't find anything in the documentation about how to set the path to the features folder via the behaveX CLI.
Please help me with this.

To Reproduce
Steps to reproduce the behavior:

  1. Create a directory
  2. Have your features inside a sub directory/folder
  3. Now try to run the test using the behaveX wrapper. You will find an error saying the features folder is not part of the directory
  4. Now try mentioning the sub directory after the behaveX command, before the tags (-t) flag; you will see an "unrecognized argument" error.

Expected behavior
BehaveX should be able to find the features directory located inside sub directories, or at least there should be a way to specify the directory path like in behave.

Screenshots
With behave, where features are found: (screenshot)

With behaveX, where features are not found: (screenshot)

Desktop (please complete the following information):

  • OS: Windows 10 pro
  • Browser : Chrome
  • Version : BehaveX 1.6.0

Use behave.ini

I have specified a path where my features are in behave.ini

Behavex ignores this. I'd like to give Behavex a directory or path where my features reside.

No matching distribution found for behavex

Can't install this.

pip install behavex results in

ERROR: Could not find a version that satisfies the requirement behavex (from versions: none)
ERROR: No matching distribution found for behavex

behave.context global variables are not working in parallel execution

In my application under test it is impossible to run two browser sessions with the same user (for security reasons). So, to run tests in parallel, I have to use a pool of users.

The usual solution is:

  1. set a pool of users in a global variable as a dict() (e.g.: context.users)
  2. pick one user from the pool of users
  3. remove this user from the global variable
  4. run the test
  5. add the user back to the common pool after test execution

However, this does not work with the behave context. Is there any solution for using shared variables across parallel test executions in behavex?

how to run scenario outline and scenario in parallel

Questions

  1. Can you tell me how to run a scenario outline in parallel using behavex?
  2. Can you also tell me how to run a scenario outline and a scenario in parallel?
    For example, a scenario outline contains 3 user IDs and passwords, and there is one scenario.

With parallel processes = 4, I want to run 4 browsers in parallel at a time.

Missing background steps in the scenarios

I use the command:
behavex -t @tag --parallel-processes 4 --parallel-scheme scenario -D market=DK --output-folder reports/DK
and in the HTML or JSON report, I can't find the background steps.

before_tag hook is not invoked

Describe the bug
I'm trying to use behavex to run tests in parallel, but I've found that the before_tag hook in my environment.py is not invoked.

To Reproduce
Steps to reproduce the behavior:

  1. Setup minimum behave test
  2. Add before_tag hook
  3. Set a value on context in that before_tag hook
  4. Try to access that value in a step implementation

Expected behavior
The before_tag is executed.

Desktop (please complete the following information):

  • OS: Linux

Additional context
The problem is that there is no invocation of before_tag in:

def run_hook(self, name, context, *args):

Maybe there could be a more generic invocation by always invoking the runner and adding logic to decide when the behavex hooks need to be invoked, by checking whether the hook is a before or after hook.

While running tests in parallel mode, it would be nice if there were a tag or hook that gets executed only once

Is your feature request related to a problem? Please describe.
Right now we are running our tests in parallel mode and we need to upload the installer once and share the installer ID with the other threads. I have kept the code to upload the installer in the before_all method, but the issue is that before_all gets triggered for every thread, so every thread tries to upload the installer, which sometimes results in an error and fails the run.

Describe the solution you'd like
I would love to have a method which gets executed only once before all the threads. I can use it to upload the installer and share it with all the threads, just like the before_suite we have in TestNG.

Describe alternatives you've considered
I tried handling it in code but it is not working. I tried the lock, synchronized method, and threading concepts, but it is still an issue.

Additional context
None

Features with single quote are not escaped correctly sometimes

Sometimes features containing a single quote are not escaped correctly when the report is generated:

var scatterDataSteps = {
            type: 'scatter',
            data: {
                datasets: [{
                    labels: ['we visit google','it should have a searchbox','it should have a "Google Search" button',
                   ** 'it should have a "I'm Feeling Lucky" button' **
,],
                    label: 'Passed',
                    data: [{ x: 1, y: 0.0 },{ x: 1, y: 0.0 },{ x: 1, y: 0.0 },{ x: 1, y: 0.0 },],
                    backgroundColor: green
                },{
                    labels: [],
                    label: 'Failed',
                    data: [],
                    backgroundColor: red
                }],
            },

This causes an invalid script, as there are unpaired quotes breaking the JS code; in the UI we cannot expand the feature table nor open the metrics.
The problem happens intermittently; a video is attached.

To Reproduce
Steps to reproduce the behavior:

  1. Have a step, and a feature using it, that contains only one single quote
    I was only able to reproduce it using one single quote inside two double quotes,
    e.g. 'it should have a "I'm Feeling Lucky" button'
  2. Run a feature using this step

The feature table and metrics are not working because of this

Report working correctly until sec 28
https://user-images.githubusercontent.com/18053028/163697624-4ec28521-209f-4126-9346-099e10d121dc.mp4

Is there a way to generate behavex report based on feature? If a feature has 20 scenarios then I should see one test case passed

behavex -t @abc --parallel-processes 3 --parallel-scheme feature - running this behavex command is not generating the reports at feature level.

The Behavex report is not showing the test case execution at feature level. Is there a way to generate total cases by feature?

My feature files have a lot of workflows. Each feature has 10 scenarios. Instead of generating a report entry for every scenario, I just want one test case reported as passed per feature.

Is there a way to do it using behavex?

Add supported Behave arguments to run Allure Report

Is your feature request related to a problem? Please describe.
Nowadays the Allure report is quite a standard for automated test reports (https://github.com/allure-framework).
To run E2E tests with behave and Allure report, I have to use the following command
behave --format allure --outfile allure-report

Describe the solution you'd like
Is it possible to add -f, --format and -o, --outfile arguments to be supported by behavex?

Describe alternatives you've considered
Currently the only alternative with behavex is to use JUnit reports, which are quite ugly and don't support attaching screenshots.

Taking screenshot and attaching to report is not working

Hello,

Could anyone help with taking screenshots and attaching them to the behavex auto-generated HTML report?

My scenario is: whenever a step fails, I need to take a screenshot and place it in the HTML report. Currently I can take a screenshot for each and every step, but I am facing a couple of issues:

  1. A screenshot should be taken only on failures [in Java Cucumber we have a method like step.isFailed(), then take the screenshot]
  2. The screenshot image should be attached to the report for the failed step

My environment.py is:

def after_step(context, step):
    print(step)
    print(f"----after step---")
    evidence_path = './output/outputs/logs/test.png'
    context.driver.save_screenshot(evidence_path)
    print("stored screenshot successfully")

My POC in Python was successful with this great plugin; I am just blocked by this issue alone. It would be very helpful if you could help with this.

[General Query] - What other evidence can be shown in the test report other than a screenshot?

Hello,

First of all, I would like to thank you for developing this wonderful wrapper :) It saves a lot of time. Kudos to your team.

With the help of context.evidence_path, I'm able to attach the screenshot in the report by using the code below:

def after_step(context, step):
    if data["testConfig"]["execute"] == "Ui":
        if step.status == "failed":
            context.driver.save_screenshot(context.evidence_path + './' + step.name + '.png')

Could any one of you please let me know what other evidence can be shown in the test report other than a screenshot? To add further, I'm very curious about the 3 icons (please refer to the attached screenshot).


Regards,
Balaji.

--include option not working

Describe the bug
When invoking behavex with the --include option followed by a feature file name, the following error occurred:
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

To Reproduce
Steps to reproduce the behavior:

  1. Install behavex
  2. Run the following command:
    behavex --include "some_feature_file.feature"
  3. The above error with a stack trace is displayed (see screenshot below)

Expected behavior
As when invoking the original behave command, when the --include option followed by a feature file name is given, behavex should run the given feature file successfully.

Screenshots

Desktop (please complete the following information):

  • OS: Windows
  • Browser: Chrome was used for the feature test, but the error occurred before the test could be run
  • Version: Windows 10 21H2

Smartphone (please complete the following information):
N/A

Additional context
N/A

runner: create_scenario_line_references() -- Scenario detection does not work for Non-English languages

Describe the bug
Scenario versus ScenarioOutline detection logic in behavex/runner.py is extremely brittle
because it relies on the English keyword "Scenario".
This solution will not work in any language other than English.
In addition, Gherkin by now supports multiple aliases for Scenario (even in English).
Therefore, it is better to check is-instance-of(Scenario) instead of checking the keyword, like:

# -- FILE: behavex/runner.py
from behave.model import ScenarioOutline   # ADDED
...
def create_scenario_line_references(features):
    ...
    for feature in features:
        ...
        for scenario in feature.scenarios:
            # ORIG: if scenario.keyword == u'Scenario':  # -- PROBLEM-POINT was here
            if not isinstance(scenario, ScenarioOutline):  # NEW_SOLUTION_HERE
                feature_lines[scenario.name] = scenario.line
            else:
                ...

To Reproduce
Steps to reproduce the behavior:

  1. Run behavex features against the behave/tools/test-features/french.feature file (using keyword: Scénario, with accent)
  File "/.../behavex/runner.py", line 2xx, in create_scenario_line_references
    for scenario_multiline in scenario.scenarios:
AttributeError: 'Scenario' object has no attribute 'scenarios'

Expected behavior
Scenario detection logic should be independent of keywords and should work for any language.

Version Info:

  • behavex v2.0.2 (current HEAD of repository)

context.evidence_path does not exist

Describe the bug
def after_scenario(context, scenario):
    context.browser.save_screenshot(context.evidence_path)

context doesn't have an evidence_path

AttributeError("'Context' object has no attribute 'evidence_path'")

behavex 1.5.3
Windows 10
python 3.10.2

UI tests do not run concurrently even when all required arguments are provided

My Behave framework uses Python 3.10 and behave 1.2.6

I have UI test in 1 feature file with 10 scenarios.

The command I run is:
behavex -t @{my_tag} --parallel-processes 10 --parallel-scheme scenario --no_capture -D browser=chrome -D vault_key={vault_key} -D project_path={project_path}

But still the scenarios are executed sequentially and not concurrently.
1 Chrome browser window opens at a time

Create evidence path based on the scenario description and feature description

Is your feature request related to a problem? Please describe.
Basically, the evidence path for every scenario is created based on the scenario description. But this is generating conflicts when 2 scenarios have the same name.

Describe the solution you'd like
We should append the feature description when calculating the evidence_path

Describe alternatives you've considered
N/A

Additional context
Update the create_log_path method calls to concatenate scenario and feature descriptions

pass a multiprocessing communication to behave

Firstly, thanks for this!

I am testing with checking port availability and I wanted to fetch new ports and somehow communicate that specific port is not in the pool anymore.

I am trying with a few mechanisms to control that better, what do you recommend?

Configurable path to features directory

Is your feature request related to a problem? Please describe.
I tried to run behavex outside the test directory and every time I got the message: "features" folder was not found in current path... The -ip flag does not help with finding the path. Even manually setting the environment variable FEATURES_PATH does not help, because in the init file this variable is overwritten due to: os.environ['FEATURES_PATH'] = os.path.join(os.getcwd(), 'features')

Describe the solution you'd like
It would be nice to set path to the feature directory in the command run, for example:
behavex directory_a/directory_b/features

Describe alternatives you've considered
Allow setting manually the environment variable FEATURES_PATH

Additional context
No additional context

Scenario outline with certain syntax in table leads to tests being skipped in some behavex configurations

Describe the bug
We found that tests will be skipped with certain behavex configuration parameters and where a scenario outline's Examples: contains additional text afterwards. E.g. Examples: blah. This led to some tests being skipped without us realizing for a while.

To Reproduce
Example scenario that will be skipped under certain conditions:

Feature: Test

  Scenario Outline: scenario 1
    Given <something>
    Examples: blah
      | something   |
      | something_a |

Scenario will be skipped if these conditions are true:

  • Number of parallel processes is more than 1
  • Parallel scheme is scenario

With behavex, these won't be skipped if:

  • Use parallel scheme of feature instead of scenario
  • Number of parallel processes is 1
  • Removing the extra text after Examples:

Expected behavior
It's unclear whether this is invalid syntax, however these tests are run successfully with behave.

As behavex is supposed to be a wrapper around behave then I think these tests should be allowed to run. Otherwise I would prefer the tests to fail rather than be skipped.

Desktop (please complete the following information):

  • OS: MacOS Ventura 13.1
  • Browser: Chrome
  • Version [e.g. 22]
  • BehaveX version: 1.6.0
  • Python version: 3.10.8

undefined status of a step is converted to failed status for a scenario

Describe the bug
If we have a step which is not defined yet, the status of that step should be "undefined" or "pending", and all steps following it should have the status "skipped", which works as expected. The status of that scenario should then be "undefined", "pending" or "skipped" (expected). However, the status is "failed" for such a scenario (actual).

To Reproduce
Steps to reproduce the behavior:

  1. Write a scenario and do not implement one of the steps
  2. run the scenario
  3. validate the status of scenario in report
  4. in report the status of this scenario should be "undefined" or "pending" or "skipped".

Expected behavior
In report the status of this scenario should be "undefined" or "pending" or "skipped".

Screenshots
XML Report: (screenshot)

HTML Report: (screenshot)

Desktop (please complete the following information):

  • OS: [Windows 10]
  • Browser [chrome]
  • Version [98.0.4758.82]

Not showing the feature file being executed on the console when running

Is your feature request related to a problem? Please describe.
Noticed that we do not see the feature file steps being executed on the console when we run behavex. Wondering if I am missing some config or parameter to be passed along with behavex -t @staging_tests

Describe the solution you'd like
When we run behavex -t @staging_tests1 or python -m behavex.runner -t @staging_tests1,
we should see the feature file and step being executed shown on the console for mac/ubuntu, similar to behave.

eg

%  behave -t @staging_tests 
@staging_tests
Feature: ABC
As a user
....
Scenario: Test
Given I perform x
When I do y
Then I see z

1 feature passed, 0 failed, 27 skipped
1 scenario passed, 0 failed, 115 skipped
10 steps passed, 0 failed, 611 skipped, 0 undefined
Took 0m1.402s

Currently we only see

+ python -m behavex.runner -t @staging_tests --parallel-processes 3 --logging-level=DEBUG
|--------------------| ------------------------------------------------------------|
|ENV. VARIABLE       | VALUE                                                       |
|--------------------| ------------------------------------------------------------|
|HOME                | /var/lib/jenkins                                            |
|CONFIG              | /var/lib/jenkins/.local/lib/python3.8/site-packages/behavex/conf_behavex.cfg|
|OUTPUT              | /var/lib/jenkins/workspace/API/tests/behave_tests/output|
|TAGS                | @staging_tests1                                             |
|PARALLEL_SCHEME     | scenario                                                    |
|PARALLEL_PROCESSES  | 3                                                           |
|TEMP                | /var/lib/jenkins/workspace/API/tests/behave_tests/output/temp|
|LOGS                | /var/lib/jenkins/workspace/API/tests/behave_tests/output/outputs/logs|
|LOGGING_LEVEL       | None                                                        |
|--------------------| ------------------------------------------------------------|
************************************************************
Running parallel scenarios

************************************************************

Total execution time:   0.8s

Describe alternatives you've considered
I have considered printing out the step definition on the console as INFO level. I did not see it displayed on Jenkins ubuntu. Need to check if i need to add some other parameters for it to be displayed there

Apologies if I have missed out anything or if this already exists. Thanks for implementing behavex and looking into this.

Picking wrong screenshot during parallel execution

Hi,

Good morning!! Hope you are doing well!!

I have created a framework where I need to execute my test cases in parallel. Here I am getting wrong screenshots in my output folder, i.e. under both test cases I am getting one single screenshot file attached for both the Login1 and Login2 features.

Login1.feature
Examples
| username | password |
| standard_user1 | secret_sauce |
| standard_user | secret_sauce |
Login2.feature
Examples
| username | password |
| standard_user2 | secret_sauce |
| standard_user | secret_sauce |

Note: the test cases below should fail and the respective screenshot should be published, but I am getting the same screenshot [standard_user1] in both files:
Login1.feature -->standard_user1
Login2.feature -->standard_user2

Output
I have shared my demo project, please let me know if you could help on this
https://github.com/prsnth89/PythonBehaveTest.git

Python3.9 does not work out-of-the-box

Describe the bug
Currently, you cannot install this package for Python 3.9.
The reason for this is the specification in setup.py where Python 3.9 is explicitly disabled.

To Reproduce
Steps to reproduce the behavior:

  1. Create a virtual environment for Python3.9 and activate it.
  2. Try to install behavex with pip, which will fail
# -- EXAMPLE FOR: UNIX platform with bash as shell
$ virtualenv -p python3.9 .venv_py39
$ source .venv_py39/bin/activate
$ pip install behavex
…
ERROR: Package 'behavex' requires a different Python: 3.9.x not in '!=3.9.x,>=3'

Expected behavior
Normally, I would expect that Python3.9 (and Python3.10, …) are supported.

Desktop (please complete the following information):

  • OS: macOS (but this is irrelevant; error is not platform related)

rerun failures argument is not properly parsing the failures.txt

failures.txt contains one line:

features/Create CHG/Different Tabs for different Change Categories.feature:14

Calling behavex -rf considers spaces to mean there are different tests.

C:\Source\Repos\jira-automated-testing>behavex -rf
|--------------------| ------------------------------------------------------------|
|ENV. VARIABLE | VALUE |
|--------------------| ------------------------------------------------------------|
|HOME | C:\Source\Repos\jira-automated-testing |
|CONFIG | C:\Program Files\Python310\Lib\site-packages\behavex\conf_behavex.cfg|
|OUTPUT | C:\Source\Repos\jira-automated-testing\output |
|TAGS | None |
|PARALLEL_SCHEME | scenario |
|PARALLEL_PROCESSES | 1 |
|TEMP | C:\Source\Repos\jira-automated-testing\output\temp |
|LOGS | C:\Source\Repos\jira-automated-testing\output\outputs\logs |
|LOGGING_LEVEL | INFO |
|--------------------| ------------------------------------------------------------|

There are no failing test scenarios to run.

C:\Source\Repos\jira-automated-testing>behavex -rf
|--------------------| ------------------------------------------------------------|
|ENV. VARIABLE | VALUE |
|--------------------| ------------------------------------------------------------|
|HOME | C:\Source\Repos\jira-automated-testing |
|CONFIG | C:\Program Files\Python310\Lib\site-packages\behavex\conf_behavex.cfg|
|OUTPUT | C:\Source\Repos\jira-automated-testing\output |
|TAGS | None |
|PARALLEL_SCHEME | scenario |
|PARALLEL_PROCESSES | 1 |
|TEMP | C:\Source\Repos\jira-automated-testing\output\temp |
|LOGS | C:\Source\Repos\jira-automated-testing\output\outputs\logs |
|LOGGING_LEVEL | INFO |
|--------------------| ------------------------------------------------------------|

The path "C:\Source\Repos\jira-automated-testing\features\Create" was not found.

'15:47:39 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing\features\Create" was not found.
'

The path "C:\Source\Repos\jira-automated-testing\CHG\Different" was not found.

'15:47:39 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing\CHG\Different" was not found.
'

The path "C:\Source\Repos\jira-automated-testing\Tabs" was not found.

'15:47:39 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing\Tabs" was not found.
'

The path "C:\Source\Repos\jira-automated-testing\for" was not found.

'15:47:39 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing\for" was not found.
'

The path "C:\Source\Repos\jira-automated-testing\different" was not found.

'15:47:39 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing\different" was not found.
'

The path "C:\Source\Repos\jira-automated-testing\Change" was not found.

'15:47:39 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing\Change" was not found.
'

The path "C:\Source\Repos\jira-automated-testing\Categories.feature" was not found.

'15:47:39 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing\Categories.feature" was not found.
'
InvalidFilenameError: C:/Source/Repos/jira-automated-testing/features/Create
Exit code: 0

C:\Source\Repos\jira-automated-testing>behavex -rf
|--------------------| ------------------------------------------------------------|
|ENV. VARIABLE | VALUE |
|--------------------| ------------------------------------------------------------|
|HOME | C:\Source\Repos\jira-automated-testing |
|CONFIG | C:\Program Files\Python310\Lib\site-packages\behavex\conf_behavex.cfg|
|OUTPUT | C:\Source\Repos\jira-automated-testing\output |
|TAGS | None |
|PARALLEL_SCHEME | scenario |
|PARALLEL_PROCESSES | 1 |
|TEMP | C:\Source\Repos\jira-automated-testing\output\temp |
|LOGS | C:\Source\Repos\jira-automated-testing\output\outputs\logs |
|LOGGING_LEVEL | INFO |
|--------------------| ------------------------------------------------------------|

The path "C:\Source\Repos\jira-automated-testing"features\Create" was not found.

'15:48:16 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing"features\Create" was not found.
'

The path "C:\Source\Repos\jira-automated-testing\CHG\Different" was not found.

'15:48:16 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing\CHG\Different" was not found.
'

The path "C:\Source\Repos\jira-automated-testing\Tabs" was not found.

'15:48:16 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing\Tabs" was not found.
'

The path "C:\Source\Repos\jira-automated-testing\for" was not found.

'15:48:16 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing\for" was not found.
'

The path "C:\Source\Repos\jira-automated-testing\different" was not found.

'15:48:16 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing\different" was not found.
'

The path "C:\Source\Repos\jira-automated-testing\Change" was not found.

'15:48:16 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing\Change" was not found.
'

The path "C:\Source\Repos\jira-automated-testing\Categories.feature" was not found.

'15:48:16 - bhx_parallel - INFO -
The path "C:\Source\Repos\jira-automated-testing\Categories.feature" was not found.
'
ConfigError: No steps directory in 'C:\Source\Repos\jira-automated-testing\"features\Create'
Exit code: 0

C:\Source\Repos\jira-automated-testing>

How to add arguments to main method while running python command?

I am running behave using Python instead of the command line. The Behave implementation has a main method that I imported, which accepts arguments such as:
behave.__main__.main(args=['-t', '@Tag1', '--no-skipped'])
Is there a way I can add arguments to behaveX main module so I can use it from python runner?

Also, I don't see the --no-skipped argument in the list of supported arguments.

"There might be more arguments that can be supported, it is just a matter of adapting the wrapper implementation to use these."

Is it possible to add this argument? If so, how?

Is there a possibility to organize our folder structure in our own way?

Currently my test case folder structure is as follows, and this is working fine:

features
  test1
    testcase1.feature
  test2
    testcase2.feature
  environment.py
pages
  loginpage.py
  homepage.py

Now I have rearranged it to the folder structure below, but this is not working. Please let me know if we have the option to rearrange the folder structure as the developer wishes:

test
  features
    test1
      testcase1.feature
    test2
      testcase2.feature
    environment.py
  pages
    loginpage.py
    homepage.py

Receiving Error: No module named behavex.__main__; 'behavex' is a package and cannot be directly executed

Describe the bug

After installing behavex I attempted to execute per the documented command (python -m behavex -t @example --parallel-processes 2 --parallel-scheme scenario) and received the error "No module named behavex.__main__; 'behavex' is a package and cannot be directly executed".

I attempted to run just the behavex command (python -m behavex) and received the same error. Please advise how I can correct this issue.

I verified the install of behavex was successful (see below):

Note: all personal/company data has been removed


Microsoft Windows [Version 10.0.19042.1706]
(c) Microsoft Corporation. All rights reserved.

C:\Program Files\Python380\Scripts>python -m pip install behavex
Defaulting to user installation because normal site-packages is not writeable
Looking in indexes: https://*****/artifactory/api/pypi/pypi-dev/simple
Collecting behavex
Downloading https://******/artifactory/api/pypi/pypi-dev/packages/packages/a7/eb/3b1655d5fcc45f68897f1ddf3b384f96217de4f9e7a344ee11cf9ba9b8cb/behavex-1.5.10.tar.gz (467 kB)

 |████████████████████████████████| 467 kB 3.3 MB/s

Preparing metadata (setup.py) ... done
Requirement already satisfied: behave==1.2.6 in c:\users*\appdata\roaming\python\python38\site-packages (from behavex) (1.2.6)
Requirement already satisfied: jinja2 in c:\users*
\appdata\roaming\python\python38\site-packages (from behavex) (3.0.3)
Collecting configobj
Downloading https://******/artifactory/api/pypi/pypi-dev/64/61/079eb60459c44929e684fa7d9e2fdca403f67d64dd9dbac27296be2e0fab/configobj-5.0.6.tar.gz (33 kB)
Preparing metadata (setup.py) ... done
Collecting htmlmin
Downloading https://******/artifactory/api/pypi/pypi-dev/b3/e7/fcd59e12169de19f0131ff2812077f964c6b960e7c09804d30a7bf2ab461/htmlmin-0.1.12.tar.gz (19 kB)
Preparing metadata (setup.py) ... done
Collecting csscompressor
Downloading https://*******/artifactory/api/pypi/pypi-dev/packages/packages/f1/2a/8c3ac3d8bc94e6de8d7ae270bb5bc437b210bb9d6d9e46630c98f4abd20c/csscompressor-0.9.5.tar.gz (237 kB)

 |████████████████████████████████| 237 kB 6.4 MB/s

Preparing metadata (setup.py) ... done

Requirement already satisfied: parse-type>=0.4.2 in c:\users*\appdata\roaming\python\python38\site-packages (from behave==1.2.6->behavex) (0.6.0)
Requirement already satisfied: six>=1.11 in c:\users*
\appdata\roaming\python\python38\site-packages (from behave==1.2.6->behavex) (1.16.0)
Requirement already satisfied: parse>=1.8.2 in c:\users*
\appdata\roaming\python\python38\site-packages (from behave==1.2.6->behavex) (1.19.0)
Requirement already satisfied: MarkupSafe>=2.0 in c:\users*
**\appdata\roaming\python\python38\site-packages (from jinja2->behavex) (2.1.0)
Using legacy 'setup.py install' for behavex, since package 'wheel' is not installed.
Using legacy 'setup.py install' for configobj, since package 'wheel' is not installed.
Using legacy 'setup.py install' for csscompressor, since package 'wheel' is not installed.
Using legacy 'setup.py install' for htmlmin, since package 'wheel' is not installed.
Installing collected packages: htmlmin, csscompressor, configobj, behavex
Running setup.py install for htmlmin ... done
Running setup.py install for csscompressor ... done
Running setup.py install for configobj ... done
Running setup.py install for behavex ... done
Successfully installed behavex-1.5.10 configobj-5.0.6 csscompressor-0.9.5 htmlmin-0.1.12


To Reproduce
Steps to reproduce the behavior:

  1. Install behavex 1.5.10 (python -m pip install behavex)
  2. Open python project folder in Pycharm
  3. Open terminal in Pycharm and execute command (python -m behavex -t @example --parallel-processes 2 --parallel-scheme scenario)
  4. See error

Expected behavior
Expected behave feature files and associated code to be executed

Desktop (please complete the following information):

  • OS: Windows Version 10.0.19042.1706

Behavex command not working as per documentation

@hrcorval @balaji2711 @Remox129 @anibalinn
Behavex command not working as per documentation... please advise.

tried all possibilities:

behavex -ip ./tests/wms/autotest/web/features -t InventorySyncVarianceConfiguration_ICN --color -o .
behavex -t InventorySyncVarianceConfiguration_ICN --color -o .
behavex -t @InventorySyncVarianceConfiguration_ICN --color -o .
behavex -t @InventorySyncVarianceConfiguration_ICN


Behavex dependency sometimes fails

Describe the bug
I have a docker image build that installs behavex and other dependencies using poetry. Occasionally behavex fails to install because the htmlmin dependency that behavex depends on fails to install. This is an intermittent problem and I'm not sure how to work around it.

I've put the poetry stacktrace at the end.

To Reproduce
Run poetry install when using Poetry to manage behavex as a dependency.

Expected behavior
Installation of behavex to succeed.

Screenshots
If applicable, add screenshots to help explain your problem.

Desktop (please complete the following information):

  • OS: docker.io/python:3.10.8-slim

Additional context

• Installing yarl (1.8.2)
  CalledProcessError
  Command '['/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/bin/python', '-m', 'pip', 'install', '--use-pep517', '--disable-pip-version-check', '--prefix', '/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10', '--no-deps', '/root/.cache/pypoetry/artifacts/45/0d/04/012fb55a6ef90f9e62582c231e7c4564e32686eb54811f499cfdccfce5/htmlmin-0.1.12.tar.gz']' returned non-zero exit status 2.
  at /usr/local/lib/python3.10/subprocess.py:526 in run
       522│             # We don't call process.wait() as .__exit__ does that for us.
       523│             raise
       524│         retcode = process.poll()
       525│         if check and retcode:
    →  526│             raise CalledProcessError(retcode, process.args,
       527│                                      output=stdout, stderr=stderr)
       528│     return CompletedProcess(process.args, retcode, stdout, stderr)
       529│ 
       530│ 
The following error occurred when trying to handle this error:
  EnvCommandError
  Command ['/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/bin/python', '-m', 'pip', 'install', '--use-pep517', '--disable-pip-version-check', '--prefix', '/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10', '--no-deps', '/root/.cache/pypoetry/artifacts/45/0d/04/012fb55a6ef90f9e62582c231e7c4564e32686eb54811f499cfdccfce5/htmlmin-0.1.12.tar.gz'] errored with the following return code 2, and output: 
  Processing /root/.cache/pypoetry/artifacts/45/0d/04/012fb55a6ef90f9e62582c231e7c4564e32686eb54811f499cfdccfce5/htmlmin-0.1.12.tar.gz
    Installing build dependencies: started
    Installing build dependencies: finished with status 'done'
    Getting requirements to build wheel: started
    Getting requirements to build wheel: finished with status 'done'
    Preparing metadata (pyproject.toml): started
    Preparing metadata (pyproject.toml): finished with status 'done'
  ERROR: Exception:
  Traceback (most recent call last):
    File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/cli/base_command.py", line 160, in exc_logging_wrapper
      status = run_func(*args)
    File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/cli/req_command.py", line 247, in wrapper
      return func(self, options, args)
    File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/commands/install.py", line 419, in run
      requirement_set = resolver.resolve(
    File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 73, in resolve
      collected = self.factory.collect_root_requirements(root_reqs)
    File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/factory.py", line 491, in collect_root_requirements
      req = self._make_requirement_from_install_req(
    File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/factory.py", line 453, in _make_requirement_from_install_req
      cand = self._make_candidate_from_link(
    File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/factory.py", line 206, in _make_candidate_from_link
      self._link_candidate_cache[link] = LinkCandidate(
    File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 297, in __init__
      super().__init__(
    File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 162, in __init__
      self.dist = self._prepare()
    File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 231, in _prepare
      dist = self._prepare_distribution()
    File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 308, in _prepare_distribution
      return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)
    File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/operations/prepare.py", line 491, in prepare_linked_requirement
      return self._prepare_linked_requirement(req, parallel_builds)
    File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/operations/prepare.py", line 577, in _prepare_linked_requirement
      dist = _get_prepared_distribution(
    File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/operations/prepare.py", line 69, in _get_prepared_distribution
      abstract_dist.prepare_distribution_metadata(
    File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/distributions/sdist.py", line 61, in prepare_distribution_metadata
      self.req.prepare_metadata()
    File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/req/req_install.py", line 539, in prepare_metadata
      self.metadata_directory = generate_metadata(
    File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/operations/build/metadata.py", line 35, in generate_metadata
      distinfo_dir = backend.prepare_metadata_for_build_wheel(metadata_dir)
    File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_internal/utils/misc.py", line 722, in prepare_metadata_for_build_wheel
      return super().prepare_metadata_for_build_wheel(
    File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_impl.py", line 186, in prepare_metadata_for_build_wheel
      return self._call_hook('prepare_metadata_for_build_wheel', {
    File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_impl.py", line 321, in _call_hook
      raise BackendUnavailable(data.get('traceback', ''))
  pip._vendor.pyproject_hooks._impl.BackendUnavailable: Traceback (most recent call last):
    File "/root/.cache/pypoetry/virtualenvs/my-project-FFP0Gh2h-py3.10/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 77, in _build_backend
      obj = import_module(mod_path)
    File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module
      return _bootstrap._gcd_import(name[level:], package, level)
    File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
    File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
    File "<frozen importlib._bootstrap>", line 992, in _find_and_load_unlocked
    File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
    File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
    File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
    File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
    File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
    File "<frozen importlib._bootstrap_external>", line 883, in exec_module
    File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
    File "/tmp/pip-build-env-69a8sgxm/overlay/lib/python3.10/site-packages/setuptools/__init__.py", line 8, in <module>
      import _distutils_hack.override  # noqa: F401
  ModuleNotFoundError: No module named '_distutils_hack.override'
  
  
  at /usr/local/lib/python3.10/site-packages/poetry/utils/env.py:1476 in _run
      1472│                 output = subprocess.check_output(
      1473│                     command, stderr=subprocess.STDOUT, env=env, **kwargs
      1474│                 )
      1475│         except CalledProcessError as e:
    → 1476│             raise EnvCommandError(e, input=input_)
      1477│ 
      1478│         return decode(output)
      1479│ 
      1480│     def execute(self, bin: str, *args: str, **kwargs: Any) -> int:
The following error occurred when trying to handle this error:
  PoetryException
  Failed to install /root/.cache/pypoetry/artifacts/45/0d/04/012fb55a6ef90f9e62582c231e7c4564e32686eb54811f499cfdccfce5/htmlmin-0.1.12.tar.gz
  at /usr/local/lib/python3.10/site-packages/poetry/utils/pip.py:51 in pip_install
       47│ 
       48│     try:
       49│         return environment.run_pip(*args)
       50│     except EnvCommandError as e:
    →  51│         raise PoetryException(f"Failed to install {path.as_posix()}") from e
       52│ 

Question - html report

Hi,
Is there any chance in the future to have the HTML report as a formatter to be used with basic behave? Or maybe I can use it already?

Thanks,
Luca
