
Reports JUnit test results as a GitHub pull request check

Home Page: https://blog.mikepenz.dev

License: Apache License 2.0


action-junit-report's Introduction

:octocat:

action-junit-report

... reports JUnit test results as a GitHub pull request check.



What's included 🚀 • Setup 🛠️ • Sample 🖥️ • Contribute 🧬 • License 📓


What's included 🚀

  • Flexible JUnit parser with wide support
  • Supports nested test suites
  • Blazingly fast execution
  • Lightweight
  • Rich build log output

This action processes JUnit XML test reports on pull requests and shows the result as a PR check with summary and annotations.

Based on the action for Surefire Reports by ScaCap

Setup

Configure the workflow

name: build
on:
  pull_request:

jobs:
  build:
    name: Build and Run Tests
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v4
      - name: Build and Run Tests
        run: # execute your tests generating test results
      - name: Publish Test Report
        uses: mikepenz/action-junit-report@v4
        if: success() || failure() # always run even if the previous step fails
        with:
          report_paths: '**/build/test-results/test/TEST-*.xml'

Inputs

Input Description
report_paths Optional. Glob expression for JUnit report paths. Defaults to: **/junit-reports/TEST-*.xml.
token Optional. GitHub token for creating a check run. Set to ${{ github.token }} by default.
test_files_prefix Optional. Prepends the provided prefix to test file paths within the report when annotating on GitHub.
exclude_sources Optional. Comma-separated list of folders to ignore for source lookup. Defaults to: /build/,/__pycache__/
check_name Optional. Check name to use when creating a check run. The default is JUnit Test Report.
suite_regex Optional. Regular expression for the named test suites. E.g. Test*
commit Optional. The commit SHA to update the status. This is useful when you run it with workflow_run.
fail_on_failure Optional. Fail the build in case of a test failure.
require_tests Optional. Fail if no tests are found.
require_passed_tests Optional. Fail if no passed tests are found. (This is stricter than require_tests, which accepts skipped tests.)
include_passed Optional. By default the action will skip passed items for the annotations. Enable this flag to include them.
check_retries Optional. If a testcase is retried, ignore the original failure.
check_title_template Optional. Template to configure the title format. Placeholders: {{FILE_NAME}}, {{SUITE_NAME}}, {{TEST_NAME}}.
summary Optional. Additional text appended to the summary output.
check_annotations Optional. Defines if the checks will include annotations. If disabled, skips all annotations for the check. (This does not affect annotate_only, which uses no checks.)
update_check Optional. Uses an alternative API to update checks, use for cases with more than 50 annotations. Default: false.
annotate_only Optional. Will only annotate the results on the files, won't create a check run. Defaults to false.
transformers Optional. Array of Transformers offering the ability to adjust the fileName. Defaults to: [{"searchValue":"::","replaceValue":"/"}]
job_summary Optional. Enables publishing of the job summary for the results. Defaults to true. May need to be disabled on GitHub Enterprise Server.
detailed_summary Optional. Include table with all test results in the summary. Defaults to false.
annotate_notice Optional. Annotate passed test results along with warning/failed ones. Defaults to false. (Changed in v3.5.0)
follow_symlink Optional. Follow symlinks when searching for test files via the globber. Defaults to false.
job_name Optional. Specify the name of a check to update.
annotations_limit Optional. Specify the limit for annotations. This will also stop parsing further test suites once the limit is reached. Defaults to: no limit.
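
A sketch combining several of these inputs (the values are illustrative, not defaults):

- name: Publish Test Report
  uses: mikepenz/action-junit-report@v4
  if: success() || failure() # always run even if the previous step fails
  with:
    report_paths: '**/build/test-results/test/TEST-*.xml'
    check_name: Unit Test Report
    fail_on_failure: true
    require_tests: true
    detailed_summary: true
    check_title_template: '{{FILE_NAME}}.{{TEST_NAME}}'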
Common report_paths

  • Surefire: **/target/surefire-reports/TEST-*.xml
  • sbt: **/target/test-reports/*.xml

Increase Node Heap Memory

If you encounter an out-of-memory error from Node, such as

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory

you can increase the memory allocation by setting an environment variable:

- name: Publish Test Report
  uses: mikepenz/action-junit-report@v4
  env:
    NODE_OPTIONS: "--max_old_space_size=4096"
  if: success() || failure() # always run even if the previous step fails
  with:
    report_paths: '**/build/test-results/test/TEST-*.xml'

Action outputs

After execution, the action returns the test counts as outputs.

# ${{ steps.{STEP_ID}.outputs.total }}

A full list of the possible outputs of this action:

Output Description
outputs.total The total number of test cases covered by this test-step.
outputs.passed The number of passed test cases.
outputs.skipped The number of skipped test cases.
outputs.failed The number of failed test cases.
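
A minimal sketch of consuming these outputs in a follow-up step, assuming a hypothetical step id of junit:

- name: Publish Test Report
  id: junit # hypothetical step id
  uses: mikepenz/action-junit-report@v4
  if: success() || failure()
  with:
    report_paths: '**/build/test-results/test/TEST-*.xml'
- name: Print test counts
  if: success() || failure()
  run: |
    echo "total=${{ steps.junit.outputs.total }} passed=${{ steps.junit.outputs.passed }}"
    echo "skipped=${{ steps.junit.outputs.skipped }} failed=${{ steps.junit.outputs.failed }}"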

PR run permissions

The action requires write permission on checks. If the GitHub Actions token is read-only (this is a repository configuration), please enable write permission via:

permissions:
  checks: write

Additionally, for security reasons, the GitHub token used for pull_request workflows is marked as read-only. If you want to post checks to a PR from an external repository, you will need to use a separate workflow which has a read/write token, or use a PAT with elevated permissions.

Example

name: build
on:
  pull_request:

jobs:
  build:
    name: Build and Run Tests
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v3
      - name: Build and Run Tests
        run: # execute your tests generating test results
      - name: Upload Test Report
        uses: actions/upload-artifact@v3
        if: always() # always run even if the previous step fails
        with:
          name: junit-test-results
          path: '**/build/test-results/test/TEST-*.xml'
          retention-days: 1

---
name: report
on:
  workflow_run:
    workflows: [build]
    types: [completed]
    
permissions:
  checks: write

jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - name: Download Test Report
        uses: dawidd6/action-download-artifact@v2
        with:
          name: junit-test-results
          workflow: ${{ github.event.workflow.id }}
          run_id: ${{ github.event.workflow_run.id }}
      - name: Publish Test Report
        uses: mikepenz/action-junit-report@v3
        with:
          commit: ${{github.event.workflow_run.head_sha}}
          report_paths: '**/build/test-results/test/TEST-*.xml'

This will securely post the check results from the privileged workflow onto the PR's checks report.

Sample 🖥️

[sample screenshots omitted]

Contribute 🧬

# Install the dependencies  
$ npm install

# Verify lint is happy
$ npm run lint -- --fix

# Build the typescript and package it for distribution
$ npm run build && npm run package

# Run the tests, use to debug, and test it out
$ npm test

Credits

Original idea and GitHub Actions by: https://github.com/ScaCap/action-surefire-report


License

Copyright (C) 2023 Mike Penz

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


action-junit-report's Issues

New Feature: Published Report Includes Passed Tests

First off, wanted to say that I love this action, and I really appreciate your work!

I noticed in my builds that the action was reading a 'secret' input called include_passed as false, so I decided to set it to 'true' and see what happened.

My Test Report looked like this:

[screenshots of the test report omitted]

I think it's awesome that it lists all the tests! The only problem is that it includes the successful tests in the number of failed tests at the top (aka include_passed).

I was wondering if a new feature could be created that would alter this input's effect on the test report, removing them from the failed count and maybe changing their natural grey check notice button to be a green checkmark? That way if the include_passed input was set to false, it would only show the failed tests, and if set to true it would show the passed, skipped, and failed ones? This would make the report much more comprehensive (and become my favorite thing ever.)

Bad regex causing failure

Thanks Mike Penz for fast response on my last ticket.

It seems that some new bug got introduced, which I believe comes from the fork?

https://github.com/mikepenz/action-junit-report/blob/main/dist/index.js#L160

As you can see here, this regex takes the string for the name of the workflow, and I believe the + sign in c++ interferes with the regex. I'll try changing to cxx as a temporary workaround. You can see an example of this failure here:

https://github.com/csound/csound/actions/runs/492358430

ADDED: The test case containing "c++" isn't failing anymore and the action runs fine now, so it's only a problem for failing test cases.

`annotate_only` not working

I was trying to use the annotate_only input, which was released recently in v3.
But, I am getting the following error.

[screenshot of the error omitted]

fail_on_failure = false marks the step as failed when a test fails

I have a job that runs unit tests. Some of these tests fail. I have the following step in my job:

- name: Publish Test Report
  uses: mikepenz/action-junit-report@v3
  if: always()
  with:
    report_paths: "**/unit-tests.xml"
    fail_on_failure: false

This step fails when any test in unit-tests.xml failed, although fail_on_failure is set to false.

Can the Junit report separate the stack trace onto multiple lines?

I have a lot of repos in my org using this action junit report tool and they all print the stack trace on one line, in the format of an array of strings. I thought updating to v2 would solve this, but I see the same output. Is there any way yall could make a version or option to separate the stack traces onto multiple lines, or does this functionality exist already? We were using v1, and I have tried using v2 in one of our repos with no change.

Display result summary when using `workflow_dispatch` or `schedule`

Hi,

Is there a way to get results displayed in the summary when tests are triggered using workflow_dispatch or schedule?

When the trigger is a change in code, the results are shown in the run summary.

Using workflow_dispatch or schedule, the results are still visible in the action logs, but it would be super helpful if they could be brought up to the summary.

  โ„น๏ธ 137 tests run, 2 skipped, 3 failed.
  โ„น๏ธ Posting with conclusion 'failure' to refs/heads/master (sha: 31719493de9c531f39891431db477a3f8393989

Thanks,

Feature request: Support nested suites

Issue

JUnit 5 supports nested tests, so it can result in nested <testsuite> tags in the XML output:

<testsuites>
    <testsuite tests="4" failures="0" time="150.0" name="All tests">
        <testsuite tests="2" failures="0" time="30.0" name="TestA">
            <properties></properties>
            <testcase classname="" name="A" time="10.0" cluster_instance="packet-1">
                <failure message="failure"></failure>
            </testcase>
            <testcase classname="" name="B" time="20.0" cluster_instance="packet-1"></testcase>
        </testsuite>
        <testsuite tests="2" failures="1" time="70.0" name="TestB">
            <properties></properties>
            <testcase classname="" name="A" time="30.0" cluster_instance="packet-1"></testcase>
            <testcase classname="" name="B" time="40.0" cluster_instance="packet-1">
                <failure message="failure"></failure>
            </testcase>
        </testsuite>
        <testcase classname="" name="A" time="50.0" cluster_instance="packet-1">
            <failure message="failure"></failure>
        </testcase>
    </testsuite>
</testsuites>

Currently action-junit-report doesn't support nested suites and only looks for first-level <testcase> tags in the root test suites.

Solution

  1. Add recursive parsing for the nested test suites.
  2. Add a suite_regex parameter, so nested test cases would be reported not as A, B, but as TestA/A, TestA/B (see the sketch below).
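
The inputs table above lists a suite_regex input; a sketch of wiring up the proposed parameter (the Test* value mirrors the table's example):

- name: Publish Test Report
  uses: mikepenz/action-junit-report@v4
  with:
    report_paths: '**/build/test-results/test/TEST-*.xml'
    suite_regex: 'Test*'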

Different line number representation

Hi @mikepenz,

We are using your Action to attach test results to our repositories e.g., https://github.com/ARM-software/VHT-GetStarted/blob/main/.github/workflows/basic.yml.

It looks like there are different ways to encode file and line number information. In the above example, our test execution produces the following result:

<?xml version="1.0" ?>
<testsuites disabled="0" errors="0" failures="1" tests="4" time="0.0">
	<testsuite disabled="0" errors="0" failures="1" name="Cloud-CI basic tests" skipped="0" tests="4" time="0">
		<testcase name="test_my_sum_pos" file="main.c" line="44"/>
		<testcase name="test_my_sum_neg" file="main.c" line="45"/>
		<testcase name="test_my_sum_fail" file="main.c" line="38">
			<failure type="failure" message="Expected 2 Was 0"/>
		</testcase>
		<testcase name="test_my_sum_zero" file="main.c" line="47"/>
	</testsuite>
</testsuites>

Each test case has dedicated attributes for file and line. But the failure is annotated at line 1 of main.c:
[screenshot omitted]

Inspecting your implementation leads me to think you are expecting the file/line information in a different form, i.e., file="main.c:38". Is that understanding correct? It looks like the JUnit XML format is interpreted differently. Is it possible to support both formats?

Thanks,
Jonatan

Not getting line numbers from mocha tests via web-test-runner

Hello! Thank you for providing this action.

I wrote a JUnit reporter for web-test-runner which outputs XML like this:

<?xml version="1.0" encoding="UTF-8"?>
<testsuites>
  <testsuite name="Chromium_playwright_/elements/pfe-accordion/test/pfe-accordion.spec.ts" id="0" tests="12" skipped="0" errors="2" failures="2" time="0.006">
    <properties>
      <property name="test.fileName" value="/Users/bennyp/Developer/patternfly-elements/elements/pfe-accordion/test/pfe-accordion.spec.ts"/>
      <property name="browser.name" value="Chromium"/>
      <property name="browser.launcher" value="playwright"/>
    </properties>
    <testcase name="expands the middle panel" time="0.001" classname="&lt;pfe-accordion&gt; keyboard accessibility when focus is on the middle header Space"/>
    <testcase name="expands the middle panel" time="0" classname="&lt;pfe-accordion&gt; keyboard accessibility when focus is on the middle header Enter"/>
    <testcase name="moves focus to the link in middle panel" time="0.003" classname="&lt;pfe-accordion&gt; keyboard accessibility when focus is on the middle header Tab">
      <failure message="expected &lt;pfe-accordion-header aria-controls=&quot;&quot; pfelement=&quot;&quot; class=&quot;PFElement&quot; on=&quot;null&quot; id=&quot;pfe-accordion-8vcn1owsc&quot; has_body=&quot;&quot; has-body=&quot;&quot; heading-text=&quot;Incididunt in Lorem voluptate eiusmod dolor?&quot; heading-tag=&quot;h3&quot; disclosure=&quot;false&quot;&gt;
      &lt;h3&gt;Incididunt in Lorem voluptate eiusmod dolor?&lt;/h3&gt;
    &lt;/pfe-accordion-header&gt; to equal &lt;a href=&quot;#&quot;&gt;Lorem ipsum dolor sit amet&lt;/a&gt;" type=""><![CDATA[  at o.<anonymous> (elements/pfe-accordion/test/pfe-accordion.spec.ts:663:44)]]></failure>
    </testcase>
    <testcase name="moves focus to the link in first panel" time="0" classname="&lt;pfe-accordion&gt; keyboard accessibility when focus is on the middle header Shift+Tab">
      <failure message="expected &lt;pfe-accordion-header id=&quot;header1&quot; aria-controls=&quot;panel1&quot; pfelement=&quot;&quot; class=&quot;PFElement&quot; on=&quot;null&quot; has_body=&quot;&quot; has-body=&quot;&quot; heading-text=&quot;Consetetur sadipscing elitr?&quot; heading-tag=&quot;h3&quot; disclosure=&quot;false&quot;&gt;
      &lt;h3&gt;Consetetur sadipscing elitr?&lt;/h3&gt;
    &lt;/pfe-accordion-header&gt; to equal &lt;a href=&quot;#&quot;&gt;Lorem ipsum dolor sit amet&lt;/a&gt;" type=""><![CDATA[  at o.<anonymous> (elements/pfe-accordion/test/pfe-accordion.spec.ts:674:44)]]></failure>
    </testcase>
    <testcase name="moves focus to the last header" time="0" classname="&lt;pfe-accordion&gt; keyboard accessibility when focus is on the middle header ArrowDown"/>
    <testcase name="moves focus to the first header" time="0" classname="&lt;pfe-accordion&gt; keyboard accessibility when focus is on the middle header ArrowUp"/>
    <testcase name="moves focus to the first header" time="0" classname="&lt;pfe-accordion&gt; keyboard accessibility when focus is on the middle header Home"/>
    <testcase name="moves focus to the last header" time="0" classname="&lt;pfe-accordion&gt; keyboard accessibility when focus is on the middle header End"/>
    <testcase name="collapses the second panel" time="0.001" classname="&lt;pfe-accordion&gt; keyboard accessibility when focus is on the middle header and the second panel is expanded Space"/>
    <testcase name="collapses the second panel" time="0" classname="&lt;pfe-accordion&gt; keyboard accessibility when focus is on the middle header and the second panel is expanded Enter"/>
    <testcase name="moves focus to the link in the second panel" time="0" classname="&lt;pfe-accordion&gt; keyboard accessibility when focus is on the middle header and the second panel is expanded Tab"/>
    <testcase name="moves focus to the first header" time="0.001" classname="&lt;pfe-accordion&gt; keyboard accessibility when focus is on the middle header and the second panel is expanded Shift+Tab"/>
  </testsuite>
</testsuites>

This XML contains line and file data in the CDATA comment of the failure element. However, although the GitHub action does create annotations (see https://github.com/patternfly/patternfly-elements/runs/4599306749), it does not add file and line data to them.

Is this a problem with the XML I'm writing with @web/test-runner-junit-reporter, or can we improve the action's parser?

Thanks

Regression after 3.3.3?

Hi there,

I am using this action with reports produced by container-structure-test. I have a workflow running daily which stopped working around the time 3.4.0 was released (early September):

[screenshot of the failing run omitted]

Here is the relevant part of my workflow:

      - name: Test image
        run: |
          curl -sSLO https://storage.googleapis.com/container-structure-test/v1.11.0/container-structure-test-linux-${{ matrix.architecture.docker }}
          mv container-structure-test-linux-${{ matrix.architecture.docker }} container-structure-test
          chmod +x container-structure-test

          ./container-structure-test test \
            --image ${{ github.repository }}:test \
            --config tests/container-structure-test.yaml \
            --output junit \
            --test-report tests/container-structure-test.xml

      - name: Publish test report
        uses: mikepenz/action-junit-report@v3
        if: always()
        with:
          check_name: Container structure test report (${{ matrix.architecture.docker }})
          report_paths: tests/container-structure-test.xml

Pinning to version 3.3.3, it works again; therefore I suspect a regression.

I can produce the report locally, here it is:

<testsuites failures="0" tests="8" time="1967986371">
    <testsuite>
        <testcase name="Command Test: apt-get upgrade" time="999638317"></testcase>
        <testcase name="Command Test: Commands in $PATH" time="968348054"></testcase>
        <testcase name="File Existence Test: /home/app/investapp" time="0"></testcase>
        <testcase name="File Existence Test: /home/app/investapp/db" time="0"></testcase>
        <testcase name="File Existence Test: /home/app/investapp/log" time="0"></testcase>
        <testcase name="File Existence Test: /home/app/investapp/tmp" time="0"></testcase>
        <testcase name="File Existence Test: /home/app/investapp/permanent_directory" time="0"></testcase>
        <testcase name="Metadata Test" time="0"></testcase>
    </testsuite>
</testsuites>

Check <testsuite> for `file` attribute

The popular "mocha-junit-reporter" project attaches the file attribute to the <testsuite> tag, because Circle CI looks for it there. Test suites are not split over files, so it's a waste of bytes to repeat the file attribute for every <testcase> within it.

https://github.com/michaelleeallen/mocha-junit-reporter/blob/master/index.js#L292

If there is a file attribute on the current <testcase>, use that; next, try the file attribute of the parent <testsuite>; finally, fall back to parsing classname into a file.

Here's the PR where mocha-junit-reporter proposed adding it five years ago:
michaelleeallen/mocha-junit-reporter#41

Not sure about how this action works

Hello,

I set up this action, and it "seems" to work:

Run mikepenz/action-junit-report@v2
  with:
    report_paths: ./coverage/junit.xml
    require_tests: true
    token: ***
    check_name: JUnit Test Report
    fail_on_failure: false
📘 Reading input values
📦 Process test results
  test UnifiAuth fail test for junit:1 | Error: expect(received).toBeFalsy()
  test UnifiAuth fail test for junit:1 | Error: expect(received).toBeFalsy()
  ℹ️ 446 tests run, 0 skipped, 2 failed.
  ℹ️ Posting status 'completed' with conclusion 'failure' to https://github.com/thib3113/unifi-client/pull/141 (sha: 91110da18e541c228c15873c5e43438c877cacc0)
🚀 Publish results


But... I don't know what this action does?

When checking the related PR:

  • no comments on files
  • no comments in PR

(I only noticed a new check in GitHub Actions)

Merge reports of multiple checks

Running many checks with different check_name values will result in the sidebar containing all of those entries.

It would be great if we could merge the reports together so only one entry is shown.

Suggested by @iBotPeaches in this ticket:
#194

How to handle multiple jobs with reports?

I like running our suite on multiple PHP versions and various modules, but from what I can tell, the JUnit Test Report check only keeps the last submitted run, squashing previous runs of the same job.

Maybe intended, but do you have any ideas on a path for this?


(php72 - module a)

  โ„น๏ธ 894 tests run, 28 skipped, 1 failed.
  โ„น๏ธ Posting status 'completed' with conclusion 'failure' to refs/heads/foo (sha: fa38e6da7e82f4e19e2f0c446bae36703592d653)

(php72 - module b)

  โ„น๏ธ 1678 tests run, 51 skipped, 0 failed.
  โ„น๏ธ Posting status 'completed' with conclusion 'success' to refs/heads/foo (sha: fa38e6da7e82f4e19e2f0c446bae36703592d653)

The JUnit Test Report that shows up in the sidebar is green (despite 1 job failing), and the total count - 1678 tests run, 51 skipped, 0 failed - matches the last run.
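
One possible workaround, using the check_name input documented in the README above, is to give each matrix job its own check name so runs no longer overwrite each other (the matrix values here are hypothetical):

- name: Publish Test Report
  uses: mikepenz/action-junit-report@v3
  if: always()
  with:
    report_paths: '**/build/test-results/test/TEST-*.xml'
    check_name: JUnit Test Report (${{ matrix.php }} - ${{ matrix.module }})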

Always include passed tests in summary

I don't want all the passed tests as annotation noise, and it only adds one more column, so could the new summary report please always include passed tests?

Pattern error when processing test results

I'm using RSpec with RSpec JUnit Formatter and KnapsackPro to parallelize my tests. This works on our Node frontend tests, but it breaks with the backend RSpec tests when there are any test failures (the parallel containers that succeed create and display the report as expected), with the following message in the "Process test results" step:

Error: Invalid pattern '**/./spec/models/group_spec.rb.*'. Relative pathing '.' and '..' is not allowed.

I have no special configuration or logic, definitely none that involves pathing like that, and as noted the step that actually runs RSpec runs as intended both when there are failures and when there aren't.

test report displays red check when using include_passed

Hello Again!

Love the update where my passed tests are not counted as failures! I also know that the annotations on the passed ones need to be grey.

I was wondering if it's possible to change the button of the test report. Upon a successful job, the button should be green. Ideally, a test report that has no failures should get a green button too.

Currently, even if all the tests pass, the button still turns red:
[screenshot omitted]

Could that button be upgraded to be green upon the lack of failed tests?

Thanks for all the work you do!

workflow_run event does not show the annotated test results

I see that you have a commit input which we should point to a SHA when using workflow_run.
The workflow_run event by default runs on the last default-branch commit.

Why is that needed? How can I make it work?

PS: The action works fine on PR and push events.

Feature Request: Fail job on failed tests

It would be nice if it were possible to let multiple test runners continue on error and place the responsibility for failing the workflow entirely on the test reports. For example, here https://github.com/csound/csound/runs/1712733912?check_suite_focus=true there are many errors, but the action still ends with success.

This seems to be already an optional parameter in action-surefire-report https://github.com/ScaCap/action-surefire-report/blob/f42e5117805a19999df3935c939721a8dc25c502/action.js#L54-L57
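
For illustration, a minimal sketch of that pattern using the fail_on_failure input documented in the README above; the test command is hypothetical:

- name: Run Tests
  run: ./run-tests.sh # hypothetical test command producing JUnit XML
  continue-on-error: true
- name: Publish Test Report
  uses: mikepenz/action-junit-report@v3
  if: always()
  with:
    report_paths: '**/test-results/TEST-*.xml'
    fail_on_failure: true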


@hlolli
original: mikepenz/action-junit-report-legacy#42

GitHub log console shows "error" even though it did not fail

Hello Mike 👋, thank you for all your contributions! 🙌

I just integrated the GitHub action; it's working fine when tests are failing, however when all the tests pass, the GitHub log console shows "error":
[screenshot omitted]

Also, it is not possible to "expand" the generated info:
[screenshot omitted]

When the build is failing I can expand the information correctly.

This bug doesn't have any impact on the success of the build:
[screenshot omitted]

Testsuites are not displayed

Hi :)

I'm looking for a plugin for parsing JUnit results on GitHub Actions, and I noticed the testsuites and testsuite elements are not displayed.

junit.xml ->

<testsuites errors="0" disabled="0" failures="1" tests="1" time="" name="">
	<testsuite name="apps/v1//Deployment/example" disabled="0" errors="0" failures="1" hostname="" id="1" skipped="" time="" timestamp="0001-01-01 00:00:00 +0000 UTC">
		<properties>
			<property name="ID" value="apps/v1//Deployment/example">
			</property>
		</properties>
		<testcase classname="C-0061" status="failed" name="Pods in default namespace" time="">
			<failure message="metadata.namespace=YOUR_VALUE" type="">
				More deatiles: https://hub.armo.cloud/docs/c-0061
			</failure>
		</testcase>
		<testcase classname="C-0044" status="passed" name="Container hostPort" time="">
		</testcase>
	</testsuite>
</testsuites>

GitHub action:

# This is a basic workflow to help you get started with Actions

name: CI
on:
  push:
    branches: [ main ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
     
      - name: JUnit Report Action
        # uses: mikepenz/action-junit-report@127c778ac944abc0f48a5103964304bab7eb208b
        uses: mikepenz/[email protected]
        with:
          report_paths: "junit.xml"

Result:

[screenshot of the check result omitted]

The testsuite information is totally missing; is this behavior intended, or is it a bug?

Action is slow - with `include_passed: true`

I'm using this action to publish a JUnit report generated by ctest.
The report has 176 tests, 84 of them skipped, and processing takes about 35 minutes, mostly in the "Process test results" step.
Is there a way to improve action performance?
Thanks, Vitaly

JUnit report sent to wrong build

Dear maintainer,

Thanks so much for creating this excellent plugin and sharing it for others to use!

I have multiple workflows defined, and it seems that the check is added to the first workflow, rather than the one that actually ran the test suite. Is there some way to associate the report with the correct workflow definition? Ideally I'd like the report to be below the test suite (e.g. job that runs tests, publishes report, should have the report appended as a check below it or nearby)

Warning during publish step

warning during publish step: "(node:678) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead."

I don't find a call to Buffer() in this code base, so I presume it's coming from a dependency.

Feature request: Transform class name before resolving class name into file name

Dear maintainer,

This is a feature request. I'm a Perl engineer using this action along with https://metacpan.org/pod/TAP::Harness::JUnit.

In short, my test runner generates a strange report XML file (because Perl is not a JVM language):

  • Test file t/Foo/Bar/Qux.t is represented as class t.Foo.Bar.Qux_t in the JUnit XML
  • action-junit-report tries to find t/Foo/Bar/Qux_t to annotate the error message
  • the action cannot find the test file, so we cannot annotate the error message in the correct position

Thus, I suggest the following feature to resolve test files accurately:

  • A class name transformer, expressed as a regex (or something similar), that intercepts the mapping from class name to file name (see the sketch below)
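
For illustration, a sketch of how the transformers input documented in the README above might express such a mapping, assuming its searchValue patterns are applied as regular expressions (the README only shows the literal :: default, so that is an assumption):

- name: Publish Test Report
  uses: mikepenz/action-junit-report@v3
  with:
    report_paths: '**/junit_output.xml' # hypothetical report location
    transformers: |
      [{"searchValue":"\\.","replaceValue":"/"},{"searchValue":"_t$","replaceValue":".t"}]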

Thanks for a great action; it works perfectly for the other languages I'm developing in.

check title name {{ FILE_NAME }} is not working

Thanks a lot for this plugin; we, the Csound developers, have been using it for almost two years now 🙏

We are now extending our tests and are experiencing an issue.
The file name macro appears empty here: https://github.com/csound/csound/actions/runs/3169515183
but I did include it as listed in the manual: https://github.com/csound/csound/blob/ead287fdedb7334c3e16c1b2bd9e028b9792295e/.github/workflows/csound_wasm.yml#L113-L114
Perhaps you can see if I've configured it incorrectly? The files are picked up by the filepath glob, and I'd assume the filenames are known.

Thanks again!

require_tests=false sending a failure when no tests are found

Hello, require_tests is not working for me. It's still reporting a failure when I don't provide a test file (scanner.xml), which is the case when no errors are found.

Here's my config:

            - name: Print Security Scanner Results
              uses: mikepenz/action-junit-report@v2
              with:
                  report_paths: '**/scanner.xml'
                  github_token: ${{ secrets.GITHUB_TOKEN }}
                  check_name: PMD Security Report Scanner
                  fail_on_failure: true
                  require_tests: false

Action output:
📦 Process test results
ℹ️ No test results found!
ℹ️ Posting status 'completed' with conclusion 'failure' to ...

Test report publishing fails following v3.3.0 release

Hi, I have been using your test report publishing scripts in several projects. Today you released a new version and since then the publishing fails.

[screenshot of the failure omitted]

It seems to have something to do with the summary input, but this input is described as optional in the documentation.

Annotation title is empty when check_title_template is not passed

I found that in the annotation array, title is an empty string when check_title_template is not passed as an input.
[screenshot omitted]

After some digging, I believe the action input is of type string:
https://github.com/actions/toolkit/blob/daf8bb00606d37ee2431d9b1596b88513dcf9c59/packages/core/src/core.ts#L119-L140

And checkTitleTemplate should be '' instead of undefined

let title = ''
if (checkTitleTemplate !== undefined) {
  // ensure to not duplicate the test_name if file_name is equal
  const fileName =
    pos.fileName !== testcase._attributes.name ? pos.fileName : ''
  title = checkTitleTemplate
    .replace('${{FILE_NAME}}', fileName)
    .replace('${{SUITE_NAME}}', suiteName ?? '')
    .replace('${{TEST_NAME}}', testcase._attributes.name)
} else if (pos.fileName !== testcase._attributes.name) {
  title = suiteName
    ? `${pos.fileName}.${suiteName}/${testcase._attributes.name}`
    : `${pos.fileName}.${testcase._attributes.name}`
} else {
  title = suiteName
    ? `${suiteName}/${testcase._attributes.name}`
    : `${testcase._attributes.name}`
}

Listing succeeded tests

Right now it seems that this action only lists tests with errors. However, as a build user, I want to be able to review which tests were effectively analyzed, so it would be great if tests that ended with success were also printed, but with a green indicator ;) It would be nice to have an input parameter that controls this. Still, the current behavior could remain the default.

I've got only:

[screenshot omitted]

Cannot capture test reports from a folder in the root?

Overview

I have written a GitHub action for running unit tests with Kscript, and I was trying to get the reports to show up using this action. There are some limitations to running kscript unit tests in CI, which essentially require generating a temporary project in the root folder under a hidden folder .kscript/. So when I tried to use the following glob, **/.kscript/*/build/test-results/test/TEST-*.xml, it doesn't seem to capture the tests, even though the glob looked correct when I tested it on an online glob tool. Below is a screenshot of where the files would live under the build folder, and here is the actual run of the workflow with the path file:///root/.kscript/kscript_tmp_project__star-wars-char-enum.kts_1621146863830/build/...

[screenshot omitted]

I was curious: is this test report not being found because of how the glob tool is used in the JUnit report action, or is there just a limitation on capturing files with a glob that live in the root folder?

Documentation: GITHUB_TOKEN

It would be awesome if you could describe in the README what permissions the GITHUB_TOKEN input needs. E.g. does it need full repo access?
<3

Failing to report results when output contains non-ASCII characters

Hello there. Thanks for this action, it's great.

I noticed that my tests failed to be published when their report contained emojis. The error I got was the following:

  Error: โŒ Failed to create checks using the provided token. (HttpError: Validation Failed: {"resource":"CheckRun","code":"invalid","field":"annotations"})
  Warning: โš ๏ธ This usually indicates insufficient permissions. More details: https://github.com/mikepenz/action-junit-report/issues/23

I'm not sure whether it is a problem with this action or on GitHub's side, though.

Thanks.

Action cannot report test results due to 'Bad Credentials'

Hi,

we would like to use your action in our GH Enterprise instance. Unfortunately, we get errors for the action:

##[error]Failed to create checks using the provided token. (HttpError: Bad credentials)
##[warning]This usually indicates insufficient permissions. More details: #32

[screenshot omitted]

I am aware of this discussion in the archived repo: mikepenz/action-junit-report-legacy#32
Do we have the same issue here with a read-only token because we are building a PR? Using pull_request_target is not very attractive to us...

How is the action then intended to be used? I have to admit that I am a bit confused. 🤔
How do I have to set up the repo and permissions so that it works?

Thank you for your help and the action. :)

Test report fails to load if there are annotations

Hello,
The JUnit Test Report fails to load if there are annotations on the test results.

The annotations appear on the job summary page ok, but when I try to open the test report, I get an "Oops" page from GitHub with a 500 response code.
Annotations on passed and failed tests cause this error. If there are no annotations, meaning all tests pass and annotate_notice is set to false, I can open the test report.

I believe this was working last week.

Is there a way to disable annotations on failed tests to unblock us so that we can view the test report?

Test report publishing step errors on 3.3.0

The crash mostly appears in the logs after the "Publish results" step is skipped, but it seems like it happens asynchronously (e.g. on https://github.com/kolmafia/kolmafia/runs/7930092463 it appears after "Retrieved 1 reports to process.").

HttpError: Resource not accessible by integration
    at /home/runner/work/_actions/mikepenz/action-junit-report/v3/webpack:/action-junit-report/node_modules/@octokit/request/dist-node/index.js:86:1
    at processTicksAndRejections (node:internal/process/task_queues:96:5)

Here is a failing run: https://github.com/kolmafia/kolmafia/runs/7930011115

It worked on 3.3.0, at least some of the time: https://github.com/kolmafia/kolmafia/actions/runs/2885651235. It would sometimes not create the report on 3.2.0, but it wouldn't crash.

Failed tests show the test name twice

Instead of <suite-name>.<test-name> the UI shows <test-name>.<test-name>.
See test run here:
https://github.com/cppfw/tst/runs/2314883648

The test report is:

<?xml version="1.0" encoding="UTF-8"?>
<testsuites>
	<testsuite name='factorial' tests='22' disabled='10' failures='4' errors='2'>
		<testcase name='disabled_fixture_test' status='disabled'/>
		<testcase name='disabled_param_fixture_test[0]' status='disabled'/>
		<testcase name='disabled_param_fixture_test[1]' status='disabled'/>
		<testcase name='disabled_param_fixture_test[2]' status='disabled'/>
		<testcase name='disabled_param_fixture_test[3]' status='disabled'/>
		<testcase name='disabled_param_test[0]' status='disabled'/>
		<testcase name='disabled_param_test[1]' status='disabled'/>
		<testcase name='disabled_param_test[2]' status='disabled'/>
		<testcase name='disabled_param_test[3]' status='disabled'/>
		<testcase name='disabled_test' status='disabled'/>
		<testcase name='factorial_of_value_from_fixture' status='failed'>
			<failure message='/home/ivan/prj/tst/tests/failed/main.cpp:58: error: check_eq(3628800, 3628801)'/>
		</testcase>
		<testcase name='factorial_of_value_from_fixture[0]' status='failed'>
			<failure message='/home/ivan/prj/tst/tests/failed/main.cpp:97: error: condition was false'/>
		</testcase>
		<testcase name='factorial_of_value_from_fixture[1]' status='passed'/>
		<testcase name='factorial_of_value_from_fixture[2]' status='passed'/>
		<testcase name='factorial_of_value_from_fixture[3]' status='passed'/>
		<testcase name='positive_arguments_must_produce_expected_result' status='errored'>
			<error message='uncaught std::exception: thrown by test'/>
		</testcase>
		<testcase name='positive_arguments_must_produce_expected_result[0]' status='passed'/>
		<testcase name='positive_arguments_must_produce_expected_result[1]' status='passed'/>
		<testcase name='positive_arguments_must_produce_expected_result[2]' status='failed'>
			<failure message='/home/ivan/prj/tst/tests/failed/main.cpp:73: error: condition was false'/>
		</testcase>
		<testcase name='positive_arguments_must_produce_expected_result[3]' status='passed'/>
		<testcase name='test_which_fails_check_eq_with_custom_message' status='failed'>
			<failure message='/home/ivan/prj/tst/tests/failed/main.cpp:49: error: check_eq(6, 7): hello world!'/>
		</testcase>
		<testcase name='test_which_throws_unknown_exception' status='errored'>
			<error message='uncaught unknown exception'/>
		</testcase>
	</testsuite>
</testsuites>

Source file autodiscovery and Python bytecode

Hi, I was checking out this action and decided to give it a try with a simple Python package, but I noticed that the annotations are being made on the wrong file. More specifically, it's picking *.pyc files instead of the original source file.

Is there any way to ensure it doesn't look inside the __pycache__ directories or something similar?

I guess one workaround would be to clean all the bytecode before running the action, but I wonder if it's possible to handle this case within the action itself.

Here's an example run for reference: https://github.com/mgsalas/python-junit-test/runs/4567244112?check_suite_focus=true
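
For reference, the exclude_sources input documented in the README above defaults to skipping /build/ and /__pycache__/ during source lookup; a sketch of setting it explicitly (the report path is hypothetical):

- name: Publish Test Report
  uses: mikepenz/action-junit-report@v2
  with:
    report_paths: '**/junit.xml' # hypothetical report location
    exclude_sources: '/build/,/__pycache__/'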
