
A Karma JS Framework to support sharding tests to run in parallel across multiple browsers

License: MIT License


karma-parallel's Introduction

karma-parallel


A Karma JS plugin to support sharding tests to run in parallel across multiple browsers. Now supporting code coverage!

Overview

This is intended to speed up the time it takes to run unit tests by taking advantage of multiple cores. From a single karma server, multiple instances of a browser are spun up. Each browser downloads all of the spec files, but when a describe block is encountered, the browsers deterministically decide if that block should run in the given browser.

This leads to a way to split up unit tests across multiple browsers without changing any build processes.
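The deterministic split described above can be pictured with a small sketch. This is illustrative only; `makeShardFilter` and the variable names are our own, not part of karma-parallel's internals:

```javascript
// Illustrative sketch of a round-robin shard decision. Each browser knows
// its own shard index and the total executor count; because every browser
// sees the same describe blocks in the same order, a simple counter is
// enough to split the suites deterministically with no coordination.
// (makeShardFilter is a hypothetical name, not part of karma-parallel.)
function makeShardFilter(shardIndex, executors) {
  let describeCount = 0;
  return function shouldRunDescribe() {
    return (describeCount++ % executors) === shardIndex;
  };
}

// Executor 1 of 3 runs only its own third of the top-level suites:
const shouldRun = makeShardFilter(1, 3);
console.log(['a', 'b', 'c', 'd', 'e', 'f'].map(() => shouldRun()));
// → [ false, true, false, false, true, false ]
```

Because each executor applies the same deterministic rule, no suite is run twice and none is dropped.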

Installation

The easiest way is to install karma-parallel as a devDependency.

Using NPM

npm i karma-parallel --save-dev

Using Yarn

yarn add karma-parallel --dev

Examples

Basic Installation

// karma.conf.js
module.exports = function(config) {
  config.set({
    // NOTE: 'parallel' must be the first framework in the list
    frameworks: ['parallel', 'mocha' /* or 'jasmine' */],
  });
};

Additional Configuration

// karma.conf.js
module.exports = function(config) {
  config.set({
    // NOTE: 'parallel' must be the first framework in the list
    frameworks: ['parallel', 'mocha' /* or 'jasmine' */],
    plugins: [
      // Add karma-parallel to the plugins if you encounter an error like
      // "No provider for framework:parallel"
      require('karma-parallel')
      // ...your other plugins
    ],
    parallelOptions: {
      executors: 4, // Defaults to cpu-count - 1
      shardStrategy: 'round-robin'
      // shardStrategy: 'description-length'
      // shardStrategy: 'custom'
      // customShardStrategy: function(config) {
      //   // config.executors:   number, the executor count set above
      //   // config.shardIndex:  number, the 0-based index of the shard currently running
      //   // config.description: string, the name of the top-level describe block;
      //   //                     useful for determining how to shard the current specs
      //
      //   // Re-implement a round-robin strategy
      //   window.parallelDescribeCount = window.parallelDescribeCount || 0;
      //   window.parallelDescribeCount++;
      //   return window.parallelDescribeCount % config.executors === config.shardIndex;
      // }
    }
  });
};

Options

parallelOptions [object]: Options for this plugin

parallelOptions.executors [int=cpu_cores-1]: The number of browser instances to use for testing. If you test on multiple types of browsers, this spins up that number of executors for each browser type.

parallelOptions.shardStrategy [string='round-robin']: This plugin works by overriding the test suite describe() function. When it encounters a describe block, it must decide whether or not to skip the tests inside it.

  • The round-robin strategy assigns top-level suites to executors in turn: each executor runs every Nth suite (where N is the number of executors) and skips the ones in between.
  • The description-length strategy deterministically picks a shard from the length of each suite's description, modulo the number of executors.
  • The custom strategy lets you supply your own function to decide whether a describe block should run in the current executor. The function is called for every top-level describe block and should return true if the block should run in the current executor. It is called with an object containing 3 properties: executors, the total number of executors; shardIndex, the 0-based index of the current executor; and description, the string passed to the describe block (useful for gaining context about the suite). This lets you create complex, custom behaviors for grouping specs together based on your needs. Note: the function is serialized and re-created in each executor, so multiple instances of it will exist; you cannot rely on shared global state inside it, and you cannot use closure variables. You must reference only the function's parameters (or globals that you may have set up outside of karma-parallel in your spec files).
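As a concrete illustration, here is a hypothetical custom strategy that shards by a hash of the suite description instead of arrival order, so a given suite always lands on the same executor even when other suites are added or removed. The hash function is our own invention; only the executors/shardIndex/description properties follow the plugin's documented contract:

```javascript
// Hypothetical customShardStrategy: pick a shard from a hash of the
// describe description. Stateless and closure-free, as required, since
// the function is serialized and re-created in each executor.
function customShardStrategy(config) {
  let hash = 0;
  for (let i = 0; i < config.description.length; i++) {
    // 31 is an arbitrary small prime; >>> 0 keeps the hash an unsigned 32-bit int
    hash = (hash * 31 + config.description.charCodeAt(i)) >>> 0;
  }
  return hash % config.executors === config.shardIndex;
}

// Exactly one of the 3 executors claims any given suite:
const claims = [0, 1, 2].filter((shardIndex) =>
  customShardStrategy({ executors: 3, shardIndex, description: 'MySuite' })
);
console.log(claims.length); // → 1
```

The trade-off versus round-robin is stability over balance: renaming unrelated suites no longer reshuffles the shards, but the hash may distribute suites unevenly.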

parallelOptions.aggregatedReporterTest [(reporter)=>boolean|regex=/coverage|istanbul|junit/i]: An optional regex or function used to determine whether a reporter should receive only aggregated events from the browser shards. It ensures coverage reporting is accurate across all the shards of a browser. It is also useful for programmatic reporters, such as junit reporters, that need to operate on a single set of test outputs rather than once per shard. Set to null to disable aggregated reporting.

Important Notes

Why are there extra tests in my output?

If this plugin discovers that you have focused some tests (fit, it.only, etc...) in other browser instances, it will add an extra focused test in the current browser instance to limit which tests run in the given browser. Similarly, when dividing up the tests, if there are not enough tests for a given browser, it will add an extra test to prevent Karma from failing due to no tests running.

Run Some Tests in Every Browser

If you want to run some tests in every browser instance, add the string [always] at the beginning of the top-level describe block that contains those tests. Example:

describe('[always] A suite that runs on every shard', function() {
    // Define it() blocks here
});

Code Coverage

Code coverage support is achieved by aggregating the code coverage reports from each browser into a single coverage report. We accomplish this by wrapping the coverage reporters with an aggregate reporter. By default, we only wrap reporters that pass the test parallelOptions.aggregatedReporterTest. It should all just work.


For more information on Karma see the homepage.

See Also

karma-sharding

This similar project works by splitting up the actual spec files across the browser instances.

Pros:

  • Reduces memory by only loading some of the spec files in each browser instance

Cons:

  • Requires the spec files to reside in separate files, meaning it is not compatible with bundlers such as karma-webpack or karma-browserify as used by most front-end CLI projects (e.g. @angular/cli)

karma-parallel's People

Contributors

ben8p, chrisguttandin, dependabot[bot], dwilson6, joeljeske, macjohnny, nvladimirovi, segrey


karma-parallel's Issues

callThrough is not a function

  • I'm submitting a bug report
  • What is the current behavior?
    When I use karma-parallel (in an Angular 5 project) with code coverage, I get the error callThrough is not a function when running karma.
    This is because lodash version 3.1 is used in my project and _.rest returns an array there:
    https://lodash.com/docs/3.10.1#rest
    _.rest is used in reporter.js: const callThrough = _.rest((fnName, args) => { ...

So I guess one has to define a lodash version >4 in karma-parallel's package.json to ensure the correct lodash version is used?

Is there a way to choose which files run in which threads/browser instances?

Hi,

I have 4 modules (module1, module2, module3, module4) and I want them to run in parallel.

Is there a way, if I just give the 4 modules at the same time, to run all the tests inside them in parallel?

Now I want to specify which module runs in which browser instance or thread. Is that possible?

Today my test.ts finds spec files and runs them through the context, but I want to give 4 paths/modules to my parallel executor and let each one execute separately in parallel.

Please help

Angular 5 compatibility?

  • I'm submitting a bug report

  • What is the current behavior?
    Occasionally getting:

Cannot configure the test module when the test module has already been instantiated. Make sure you are not using `inject` before `TestBed.configureTestingModule`
  • Please tell us about your environment:
  • version: 0.2.5
  • Browser: Chrome
  • Language: TypeScript

I was able to get the project going with karma-sharding.
That being said, considering the pro of being able to move to Webpack, I wish I could use karma-parallel instead.
I cannot recreate this in stackblitz; the CLI seems to work fine... Any other information I can provide that might help narrow down the root cause?

Tests are actually running in all browser instances

  • I'm submitting a bug report

  • What is the current behavior?
    I'm getting all tests running on every browser instance.

  • If the current behavior is a bug, please provide the steps to reproduce and, if possible, a minimal demo of the problem:

    • set karma config like:
    {
        plugins: ['karma-chrome-launcher', 'karma-coverage', 'karma-junit-reporter', 'karma-mocha', 'karma-parallel'
        ],
        basePath: './',
        frameworks: ['parallel', 'mocha'],
        parallelOptions: {
            shardStrategy: 'round-robin',
            aggregatedReporterTest: /coverage|istanbul|junit/i
        },
        files: [
// stripped for brevity
            '../test/**/*.spec.js'
        ],
        preprocessors: {
            'scripts/**/*.js': 'coverage'
        },
        proxies: {
            '/apps': 'http://localhost:9000/apps',
            '/images': 'http://localhost:8080/images',
            '/resources': 'http://localhost:8080/resources',
            '/foo': 'http://localhost:8080/foo'
        },
        exclude: [],
        reporters: ['progress', 'coverage', 'junit'],
        junitReporter: {
            outputFile: 'test-results.xml',
            suite: ''
        },
        coverageReporter: {
            type: 'cobertura',
            dir: 'coverage/'
        },
        port: 9876,
        runnerPort: 9100,
        colors: true,
        logLevel: config.LOG_INFO,
        autoWatch: true,
        browsers: ['HeadlessChrome'],
        customLaunchers: {
            'HeadlessChrome': {
                base: 'ChromeHeadless',
                flags: [
                    '--no-sandbox'
                ]
            }
        },
        browserNoActivityTimeout: 120000,
        captureTimeout: 120000,
        singleRun: false
    }
  • Set up a describe.only()
  • Within the only, create several it()s that throw "whatever1" ... to make them throw when the tests run
  • Run the tests

When you look at the junit report, it has 3 (cpus-1) instances in it:

  <testcase name="Utils | #isDateValid() | should return FALSE when the date is before today or it is invalid |" time="0" classname="HeadlessChrome_65_0_3325_(Mac_OS_X_10_13_4).Utils |">
    <failure type="">Error: the string "crap1" was thrown, throw an Error :)
</failure>
  </testcase>
  <testcase name="Utils | #isDateValid() | should return FALSE when the date is before today or it is invalid |" time="0.001" classname="HeadlessChrome_65_0_3325_(Mac_OS_X_10_13_4).Utils |">
    <failure type="">Error: the string "crap1" was thrown, throw an Error :)
</failure>
  </testcase>
  <testcase name="Utils | #isDateValid() | should return FALSE when the date is before today or it is invalid |" time="0" classname="HeadlessChrome_65_0_3325_(Mac_OS_X_10_13_4).Utils |">
    <failure type="">Error: the string "crap1" was thrown, throw an Error :)
</failure>
  </testcase>

Also, the console shows 3:

            HeadlessChrome 65.0.3325 (Mac OS X 10.13.4) Utils | #isDateValid() | should return FALSE when the date is before today or it is invalid | FAILED
     [exec]     Error: the string "crap1" was thrown, throw an Error :)
            HeadlessChrome 65.0.3325 (Mac OS X 10.13.4) Utils | #isDateValid() | should return FALSE when the date is before today or it is invalid | FAILED
     [exec]     Error: the string "crap1" was thrown, throw an Error :)
            HeadlessChrome 65.0.3325 (Mac OS X 10.13.4) Utils | #isDateValid() | should return FALSE when the date is before today or it is invalid | FAILED
     [exec]     Error: the string "crap1" was thrown, throw an Error :)
  • What is the expected behavior?
    I'm actually not sure how this would look when it works, but I would assume that a test would get skipped in "other" browsers, and I don't see any tests getting skipped.

  • What is the motivation / use case for changing the behavior?
    The current behavior makes the plugin worthless. (I hope there is something weird about my settings or versions being used, as I can't imagine this is how it works for everyone else?)

  • Please tell us about your environment:

    • version: 0.2.4
    • Browser: ChromeHeadless 65.0.3325
    • Language: all
    • package.json contains these dependencies:
      "chai": "1.10.0",
      "gulp": "3.9.1",
      "gulp-util": "3.0.8",
      "istanbul": "0.4.5",
      "karma": "2.0.0",
      "karma-chai": "0.1.0",
      "karma-chrome-launcher": "2.2.0",
      "karma-coverage": "1.1.1",
      "karma-junit-reporter": "1.2.0",
      "karma-mocha": "1.3.0",
      "karma-parallel": "0.2.4",
      "mocha": "2.3.2",
      "sinon": "1.10.2",
      "sinon-chai": "3.0.0"
  • Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

If karma-junit-reporter is used, the xml report contains tests from one browser instance only

Note: for support questions, please use stackoverflow. This repository's issues are reserved for feature requests and bug reports.

  • I'm submitting a feature request

  • What is the current behavior?
    If karma-junit-reporter is used, the xml report contains test results from one browser instance only. The tests that run in other browser instances are not included in the resulting xml report. It looks like the same issue will occur with other reporters as well.

  • What is the expected behavior?
    It would be nice to combine all tests into one xml report, or to provide one xml report per browser instance. In that case we would have the possibility to import these results for all browser instances.

  • What is the motivation / use case for changing the behavior?
    We are using the karma-parallel package on a CI build server and we need the actual test results embedded into the CI build. For now we have only the tests from one browser instance, but it is required to have all run tests embedded into the CI build.

  • Please tell us about your environment:

    "dependencies": {
        "@angular/animations": "5.2.3",
        "@angular/common": "5.2.3",
        "@angular/compiler": "5.2.3",
        "@angular/core": "5.2.3",
        "@angular/forms": "5.2.3",
        "@angular/http": "5.2.3",
        "@angular/platform-browser": "5.2.3",
        "@angular/platform-browser-dynamic": "5.2.3",
        "@angular/platform-server": "5.2.3",
        "@angular/router": "5.2.3",
        "@nguniversal/aspnetcore-engine": "5.0.0-beta.5",
        "@nguniversal/common": "5.0.0-beta.5",
        "@ngx-translate/core": "9.1.1",
        "core-js": "2.5.3",
        "moment": "2.18.1",
        "moment-timezone": "0.5.13",
        "preboot": "6.0.0-beta.1",
        "rxjs": "5.5.6",
        "zone.js": "0.8.20"
    },
    "devDependencies": {
        "@angular/cli": "1.6.7",
        "@angular/compiler-cli": "5.2.3",
        "@angular/language-service": "5.2.3",
        "@angularclass/hmr": "~2.1.3",
        "@types/jasmine": "~2.8.3",
        "@types/jasminewd2": "~2.0.2",
        "@types/node": "~9.4.1",
        "codelyzer": "~4.1.0",
        "copy-webpack-plugin": "~4.3.1",
        "jasmine-core": "~2.8.0",
        "jasmine-reporters": "~2.3.0",
        "jasmine-spec-reporter": "~4.2.1",
        "karma": "~2.0.0",
        "karma-chrome-launcher": "~2.2.0",
        "karma-cli": "~1.0.1",
        "karma-coverage-istanbul-reporter": "~1.4.1",
        "karma-jasmine": "~1.1.0",
        "karma-jasmine-html-reporter": "~0.2.2",
        "karma-junit-reporter": "~1.2.0",
        "karma-parallel": "~0.2.1",
        "protractor": "~5.3.0",
        "protractor-jasmine2-screenshot-reporter": "~0.5.0",
        "rxjs-tslint-rules": "~3.8.0",
        "ts-node": "~4.1.0",
        "tslint": "~5.9.1",
        "tslint-consistent-codestyle": "~1.11.1",
        "tslint-jasmine-rules": "~1.3.0",
        "typescript": "~2.5.3"
    }

karma config file is the following:

// Karma configuration file, see link for more information
// https://karma-runner.github.io/1.0/config/configuration-file.html

module.exports = function (config) {
    config.set({
        basePath: '',
        frameworks: ['parallel', 'jasmine', '@angular/cli'],
        parallelOptions: {
            // Defaults to cpu-count - 1
            executors: 3,
            shardStrategy: 'round-robin'
            // shardStrategy: 'description-length'  
        },
        plugins: [
            require('karma-jasmine'),
            require('karma-chrome-launcher'),
            require('karma-jasmine-html-reporter'),
            require('karma-coverage-istanbul-reporter'),
            require('karma-junit-reporter'),
            require('@angular/cli/plugins/karma'),
            require('karma-parallel')
        ],
        client: {
            // leave Jasmine Spec Runner output visible in browser
            clearContext: false
        },
        coverageIstanbulReporter: {
            reports: ['html', 'lcovonly', 'cobertura'],
            fixWebpackSourcePaths: true,
             thresholds: {
                // thresholds for all files
                global: {
                    statements: 97,
                    lines: 97,
                    branches: 74,
                    functions: 95
                },
             }
        },
        angularCli: {
            environment: 'dev'
        },
        files: [
            'https://betclicstage.com/r2/svcshared/st02/WebResources/build/WebApp/Sport/mobile.css'
        ],
        reporters: ['progress', 'kjhtml'],
        port: 35000,
        colors: true,
        logLevel: config.LOG_INFO,
        autoWatch: false,
        browsers: ['ChromeHeadless'],
        singleRun: true,
        concurrency: Infinity,
        customLaunchers: {
            ChromeHeadless: {
                base: 'Chrome',
                flags: [
                    '--headless',
                    '--disable-gpu',
                    // Without a remote debugging port, Google Chrome exits immediately.
                    `--remote-debugging-port=35001`,
                    '--no-sandbox'
                    // '--max_old_space_size=4096'
                ],
                debug: false
            }
        },
        // How long does Karma wait for a browser to reconnect (in ms).
        // With a flaky connection, it is pretty common that the browser disconnects, but the actual test execution is still running without any problems.
        // Karma does not treat a disconnection as an immediate failure and will wait for browserDisconnectTimeout (ms).If the browser reconnects during that time, everything is fine.
        // Default: 2000
        browserDisconnectTimeout: 6000,
        // The number of disconnections tolerated.
        // The disconnectTolerance value represents the maximum number of tries a browser will attempt in the case of a disconnection.
        // Usually, any disconnection is considered a failure, but this option allows you to define a tolerance level when there is a flaky network link between the Karma server and the browsers.
        // Default: 0
        browserDisconnectTolerance: 3,
        // How long will Karma wait for a message from a browser before disconnecting from it (in ms).
        // If, during test execution, Karma does not receive any message from a browser within browserNoActivityTimeout(ms), it will disconnect from the browser.
        // Default: 10000
        browserNoActivityTimeout: 60000,
        // Timeout for capturing a browser (in ms).
        // The captureTimeout value represents the maximum boot-up time allowed for a browser to start and connect to Karma.
        // If any browser does not get captured within the timeout, Karma will kill it and try to launch it again and, after three attempts to capture it, Karma will give up.
        // Default: 60000
        captureTimeout: 120000,
        // Karma will report all the tests that are slower than given time limit (in ms).This is disabled by default (since the default value is 0).
        reportSlowerThan: 1000
        // More info about disconnected issue is here:
        // https://github.com/karma-runner/karma-chrome-launcher/issues/154
        // https://github.com/karma-runner/karma/issues/2652
    });
};

I run tests via the following command:
ng test --watch=false --reporters=progress+junit+coverage-istanbul --single-run=true --config=karma.headless.parallel.conf.js

  • Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

Terminal/console doesn't return control when tests are finished since 0.2.0 package version


  • I'm submitting a bug report
  • What is the current behavior?
    Console/terminal doesn't return control when tests are finished. As a result, the package can currently be considered for manual runs only; it can't be used in any automated build process that expects to run subsequent steps after tests finish successfully.

  • If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
    if I run a command ng test --watch=false --code-coverage=true --single-run=true --progress=true --colors=true --log-level=debug --reporters=progress,junit,coverage-istanbul --config=karma.conf.headless.js the console/terminal doesn't return control when tests are finished.

Here is my karma.conf.headless.js file:

// Karma configuration file, see link for more information
// https://karma-runner.github.io/1.0/config/configuration-file.html

module.exports = function (config) {
    config.set({
        basePath: '',
        frameworks: ['parallel', 'jasmine', '@angular/cli'],
        parallelOptions: {
            executors: 4, // Defaults to cpu-count - 1
            shardStrategy: 'round-robin'
            // shardStrategy: 'description-length'  
        },
        plugins: [
            require('karma-jasmine'),
            require('karma-chrome-launcher'),
            require('karma-jasmine-html-reporter'),
            require('karma-coverage-istanbul-reporter'),
            require('karma-junit-reporter'),
            require('@angular/cli/plugins/karma'),
            require('karma-parallel')
        ],
        client: {
            clearContext: false // leave Jasmine Spec Runner output visible in browser
        },
        coverageIstanbulReporter: {
            reports: ['html', 'lcovonly', 'cobertura'],
            fixWebpackSourcePaths: true,
            // Most reporters accept additional config options. You can pass these through the `report-config` option
            'report-config': {
                // all options available at: https://github.com/istanbuljs/istanbuljs/blob/aae256fb8b9a3d19414dcf069c592e88712c32c6/packages/istanbul-reports/lib/html/index.js#L135-L137
                html: {
                    // outputs the report in ./coverage/html
                    subdir: 'html'
                }
            },
            thresholds: {
                // thresholds for all files
                global: {
                    statements: 95,
                    lines: 95,
                    branches: 73,
                    functions: 95
                },
            }
        },
        angularCli: {
            environment: 'dev'
        },
        files: [
        ],
        reporters: ['progress', 'kjhtml'],
        port: Math.floor((Math.random() * 500) + 9500),
        colors: true,
        logLevel: config.LOG_INFO,
        autoWatch: false,
        browsers: ['ChromeHeadless'],
        singleRun: true,
        concurrency: Infinity,
        customLaunchers: {
            ChromeHeadless: {
                base: 'Chrome',
                flags: [
                    '--headless',
                    '--disable-gpu',
                    // Without a remote debugging port, Google Chrome exits immediately.
                    `--remote-debugging-port=${Math.floor((Math.random() * 500) + 9000)}`,
                    '--no-sandbox'
                    // '--max_old_space_size=4096'
                ],
                debug: false
            }
        },
        // How long does Karma wait for a browser to reconnect (in ms).
        // With a flaky connection, it is pretty common that the browser disconnects, but the actual test execution is still running without any problems.
        // Karma does not treat a disconnection as an immediate failure and will wait for browserDisconnectTimeout (ms).If the browser reconnects during that time, everything is fine.
        // Default: 2000
        browserDisconnectTimeout: 6000,
        // The number of disconnections tolerated.
        // The disconnectTolerance value represents the maximum number of tries a browser will attempt in the case of a disconnection.
        // Usually, any disconnection is considered a failure, but this option allows you to define a tolerance level when there is a flaky network link between the Karma server and the browsers.
        // Default: 0
        browserDisconnectTolerance: 3,
        // How long will Karma wait for a message from a browser before disconnecting from it (in ms).
        // If, during test execution, Karma does not receive any message from a browser within browserNoActivityTimeout(ms), it will disconnect from the browser.
        // Default: 10000
        browserNoActivityTimeout: 60000,
        // Timeout for capturing a browser (in ms).
        // The captureTimeout value represents the maximum boot-up time allowed for a browser to start and connect to Karma.
        // If any browser does not get captured within the timeout, Karma will kill it and try to launch it again and, after three attempts to capture it, Karma will give up.
        // Default: 60000
        captureTimeout: 120000,
        // Karma will report all the tests that are slower than given time limit (in ms).This is disabled by default (since the default value is 0).
        reportSlowerThan: 1000
        // More info about disconnected issue is here:
        // https://github.com/karma-runner/karma-chrome-launcher/issues/154
        // https://github.com/karma-runner/karma/issues/2652
    });
};
  • What is the expected behavior?
    Control is returned as soon as tests finish, with the appropriate exit code depending on the test results

  • What is the motivation / use case for changing the behavior?
    To be able to use this great package in an automated build process that doesn't require a manual run (a build check-in policy, for example)

  • Please tell us about your environment:

  • package version: 0.2.0
  • Angular CLI 1.4.9
  • Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

Default Specs per thread when no specs found - confusing!

First, thanks for writing this library, it's a great effort.

Sometimes, for a number of reasons (browser disconnects, using xit or xdescribe), no tests are included in the Karma test suite run; when you run the suite using this library, it creates a default spec for each thread. This seems a little unorthodox and makes you think all tests have run successfully, unless you look at the number or the description of the tests.

The lines of code are here:

if (!hasSpecs) {
  ctx.describe('[karma-parallel] Add single test to prevent failure', function() {
    (ctx.it || ctx.specify).call(ctx, 'should prevent failing by having sucessful tests', function(){});
  });
}

See https://github.com/joeljeske/karma-parallel/blob/master/lib/karma-parallelizer.js#L151

Can I ask that we remove this behaviour if no specs are found?

This is changing the basic implementation of the test runner.

How to configure any particular test suites to be distributed to different shards in the most efficient way?

  • I'm submitting a feature request
    It would be nice to add a way to configure particular test suites to be distributed across different test shards. For example, we have 4-5 test suites which take quite long to execute. It would be nice if we could configure these test suites to be distributed to different shards. If we have 3 shards and 5 long-running test suites, we would like to be sure that one shard executes 2 long-running test suites, the second shard 2, and the third shard the remaining one.

  • What is the current behavior?
    For now we can configure it via test suite names/description lengths, but that doesn't guarantee the expected behavior, because the naming of other tests can change as well and that affects these long-running test suites.

  • If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem

  • What is the expected behavior?
    We have 4-5 test suites which take quite a long time to execute. Quite often they are executed by the same shard if we don't use any tricks with naming. We would like to be sure that particular test suites are distributed appropriately between all shards. For example, it could be done via some specific naming in the test suite description, like it was done for the [always] prefix previously. Let's imagine that if a test suite description starts with '[Id=[unique-id]] Test suite description', then all test suites with the same unique-id would be distributed across different shards in the most efficient way.

  • Please tell us about your environment:

  • version: 2.0.0-beta.X
  • Browser: [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ]
  • Language: [all | TypeScript X.X | ES6/7 | ES5 | Dart]
  • Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)
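A possible shape for this request, sketched against the plugin's existing `shardStrategy: 'custom'` hook (the `shardStrategy`/`customShardStrategy` options are visible in the debug logs elsewhere in this tracker). The argument object `({ executors, shardIndex, description })` and the suite names are assumptions for illustration, not verified plugin API:

```javascript
// Sketch: pin known long-running suites to distinct shards deterministically,
// and fall back to round-robin for everything else. Every browser evaluates
// the same describes in the same order, so the per-browser counters stay in
// sync. The hook signature is assumed, not taken from the plugin source.
function makeStrategy(longSuites) {
  let counter = 0;
  return function customShardStrategy({ executors, shardIndex, description }) {
    const i = longSuites.indexOf(description);
    if (i !== -1) {
      // Known long-running suite: spread them across shards by position.
      return i % executors === shardIndex;
    }
    // Round-robin fallback for all other suites.
    return counter++ % executors === shardIndex;
  };
}

// karma.conf.js usage (sketch):
// parallelOptions: {
//   shardStrategy: 'custom',
//   customShardStrategy: makeStrategy(['Long suite A', 'Long suite B'])
// }
```

With 3 shards, suites listed in `longSuites` land on shards 0, 1, 2, 0, ... regardless of how the remaining suites round-robin, which is the guarantee the request asks for.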

Slower than just running in sequence

Running "karma:source" (karma) task
18 01 2018 23:41:26.613:INFO [framework:karma-parallel]: sharding specs across 4 browsers
18 01 2018 23:41:27.591:INFO [karma]: Karma v0.13.22 server started at https://localhost:9876/
18 01 2018 23:41:27.597:INFO [launcher]: Starting browser Chrome
18 01 2018 23:41:27.618:INFO [launcher]: Starting browser Chrome
18 01 2018 23:41:27.624:INFO [launcher]: Starting browser Chrome
18 01 2018 23:41:27.632:INFO [launcher]: Starting browser Chrome
18 01 2018 23:41:37.012:INFO [Chrome 63.0.3239 (Mac OS X 10.12.6)]: Connected on socket uiaNpBrlnoz7lB4bAAAA with id 56325298
18 01 2018 23:41:37.172:INFO [Chrome 63.0.3239 (Mac OS X 10.12.6)]: Connected on socket NTZxRMD7CKRMXZy5AAAC with id 99647674
18 01 2018 23:41:37.277:INFO [Chrome 63.0.3239 (Mac OS X 10.12.6)]: Connected on socket SoejE-KmEwBQqQKfAAAD with id 88877803
18 01 2018 23:41:37.287:INFO [Chrome 63.0.3239 (Mac OS X 10.12.6)]: Connected on socket 9u2Jwcy1NAahSKL1AAAB with id 81113574
Chrome 63.0.3239 (Mac OS X 10.12.6): Executed 560 of 1075 SUCCESS (0 secs / 52.713 secs)
Chrome 63.0.3239 (Mac OS X 10.12.6): Executed 567 of 1075 SUCCESS (0 secs / 52.832 secs)
Chrome 63.0.3239 (Mac OS X 10.12.6): Executed 567 of 1075 SUCCESS (0 secs / 52.832 secs)
Chrome 63.0.3239 (Mac OS X 10.12.6): Executed 602 of 1075 SUCCESS (0 secs / 53.641 secs)
Chrome 63.0.3239 (Mac OS X 10.12.6): Executed 612 of 1075 SUCCESS (0 secs / 53.907 secs)
Chrome 63.0.3239 (Mac OS X 10.12.6): Executed 627 of 1075 SUCCESS (0 secs / 54.26 secs)
Chrome 63.0.3239 (Mac OS X 10.12.6): Executed 713 of 1075 SUCCESS (0 secs / 56.603 secs)
Chrome 63.0.3239 (Mac OS X 10.12.6): Executed 792 of 1075 SUCCESS (0 secs / 57.37 secs)
Chrome 63.0.3239 (Mac OS X 10.12.6): Executed 844 of 1075 SUCCESS (0 secs / 58.078 secs)
Chrome 63.0.3239 (Mac OS X 10.12.6): Executed 850 of 1075 SUCCESS (0 secs / 58.153 secs)
Chrome 63.0.3239 (Mac OS X 10.12.6): Executed 1075 of 1075 SUCCESS (1 min 2.914 secs / 1 min 2.092 secs)
Chrome 63.0.3239 (Mac OS X 10.12.6): Executed 1075 of 1075 SUCCESS (1 min 4.343 secs / 1 min 3.21 secs)
Chrome 63.0.3239 (Mac OS X 10.12.6): Executed 1075 of 1075 SUCCESS (59.78 secs / 59.114 secs)
Chrome 63.0.3239 (Mac OS X 10.12.6): Executed 1075 of 1075 SUCCESS (1 min 2.33 secs / 1 min 1.078 secs)
TOTAL: 4300 SUCCESS

I also would have expected the 1,075 tests (123 matches of describe() across 122 files) to have been divided by 4 and then executed across the browsers.

I wonder if there's a cool/fast way to help speed up the loading of 4 browsers. Maybe one browser, then 4 tabs within it?

When browser disconnects, remaining tests are skipped

I'm submitting a ...

  • [ X ] bug report

What is the current behavior?
When a test shard disconnects, that shard is restarted, but it only runs 1 more test in that set of tests, giving it a false SUCCESS.

More info
We had tests that had been failing on our CI, but we didn't know it because the build was being reported as successful. Checking in new tests caused the shards to change, revealing failures for tests that were previously being skipped.

The logs below are at the time when one of the browsers disconnects and it's restarted with a single test.

[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 445 of 462 SUCCESS (13 mins 20.993 secs / 3 mins 2.351 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 488 of 496 (skipped 8) SUCCESS (12 mins 31.592 secs / 3 mins 7.926 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 374 of 375 (skipped 1) SUCCESS (9 mins 18.06 secs / 1 min 30.408 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 433 of 455 (skipped 11) SUCCESS (17 mins 5.845 secs / 4 mins 10.347 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 361 of 439 (skipped 23) SUCCESS (20 mins 55.986 secs / 3 mins 59.233 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 277 of 543 (skipped 9) SUCCESS (0 secs / 1 min 55.531 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 321 of 345 (skipped 9) SUCCESS (0 secs / 1 min 36.614 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 277 of 543 (skipped 9) SUCCESS (0 secs / 1 min 55.531 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 445 of 462 SUCCESS (13 mins 20.993 secs / 3 mins 2.351 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 488 of 496 (skipped 8) SUCCESS (12 mins 31.592 secs / 3 mins 7.926 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 374 of 375 (skipped 1) SUCCESS (9 mins 18.06 secs / 1 min 30.408 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 433 of 455 (skipped 11) SUCCESS (17 mins 5.845 secs / 4 mins 10.347 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 361 of 439 (skipped 23) SUCCESS (20 mins 55.986 secs / 3 mins 59.233 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 277 of 543 (skipped 9) SUCCESS (0 secs / 1 min 55.531 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 321 of 345 (skipped 9) SUCCESS (15 mins 51.026 secs / 1 min 36.614 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 277 of 543 (skipped 9) SUCCESS (0 secs / 1 min 55.531 secs)
[INFO] 10 08 2019 10:58:57.585:DEBUG [launcher]: Process ChromeHeadless exited with code 0
[INFO] 10 08 2019 10:58:57.586:DEBUG [temp-dir]: Cleaning temp dir /build/mts/release/sb-26750517/tmp/karma-85401258
[INFO] 10 08 2019 10:58:57.646:INFO [launcher]: Starting browser ChromeHeadless
[INFO] 10 08 2019 10:58:57.646:DEBUG [temp-dir]: Creating temp dir at /build/mts/release/sb-26750517/tmp/karma-57661985
[INFO] 10 08 2019 10:58:57.647:DEBUG [launcher]: /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/puppeteer/.local-chromium/mac-499413/chrome-mac/Chromium.app/Contents/MacOS/Chromium --user-data-dir=/build/mts/release/sb-26750517/tmp/karma-57661985 --no-default-browser-check --no-first-run --disable-default-apps --disable-popup-blocking --disable-translate --disable-background-timer-throttling --disable-renderer-backgrounding --disable-device-discovery-notifications http://localhost:9876/?id=57661985 --headless --disable-gpu --remote-debugging-port=9222
[INFO] 10 08 2019 10:58:58.991:DEBUG [web-server]: serving (cached): /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/karma/static/client.html
[INFO] 10 08 2019 10:58:59.098:DEBUG [web-server]: serving (cached): /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/karma/static/karma.js
[INFO] 10 08 2019 10:58:59.727:DEBUG [karma]: A browser has connected on socket iPW9xSfxDhqUoxXYAAAR
[INFO] 10 08 2019 10:58:59.776:DEBUG [web-server]: upgrade /socket.io/?EIO=3&transport=websocket&sid=iPW9xSfxDhqUoxXYAAAR
[INFO] 10 08 2019 10:58:59.823:INFO [HeadlessChrome 63.0.3205 (Mac OS X 10.9.5)]: Connected on socket iPW9xSfxDhqUoxXYAAAR with id 57661985
[INFO] 10 08 2019 10:58:59.824:DEBUG [framework:karma-parallel]: registering browser id 57661985 with aggregated browser id 372422729345 at shard index 8
[INFO] 10 08 2019 10:58:59.824:DEBUG [launcher]: ChromeHeadless (id 57661985) captured in 3679.386 secs
[INFO] 10 08 2019 10:58:59.928:DEBUG [middleware:karma]: custom files null null null
[INFO] 10 08 2019 10:58:59.929:DEBUG [middleware:karma]: Serving static request /context.html
[INFO] 10 08 2019 10:58:59.929:DEBUG [web-server]: serving (cached): /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/karma/static/context.html
...
[INFO] �[36m10 08 2019 10:58:59.961:DEBUG [middleware:parallel]: �[39minterpolating parallel shard data map in script. Data: {"1394027":{"shouldShard":true,"shardIndex":4,"executors":8,"shardStrategy":"round-robin","customShardStrategy":"ZnVuY3Rpb24gKCkgewogICAgICB0aHJvdyBuZXcgRXJyb3IoCiAgICAgICAgJ1Nob3VsZCBzcGVjaWZ5IGEgImN1c3RvbVNoYXJkU3RyYXRlZ3kiIGZ1bmN0aW9uIHdoZW4gdXNpbmcgc2hhcmRTdHJhdGVneTogImN1c3RvbSInCiAgICAgICk7CiAgICB9"},"18315375":{"shouldShard":true,"shardIndex":2,"executors":8,"shardStrategy":"round-robin","customShardStrategy":"ZnVuY3Rpb24gKCkgewogICAgICB0aHJvdyBuZXcgRXJyb3IoCiAgICAgICAgJ1Nob3VsZCBzcGVjaWZ5IGEgImN1c3RvbVNoYXJkU3RyYXRlZ3kiIGZ1bmN0aW9uIHdoZW4gdXNpbmcgc2hhcmRTdHJhdGVneTogImN1c3RvbSInCiAgICAgICk7CiAgICB9"},"21084253":{"shouldShard":true,"shardIndex":1,"executors":8,"shardStrategy":"round-robin","customShardStrategy":"ZnVuY3Rpb24gKCkgewogICAgICB0aHJvdyBuZXcgRXJyb3IoCiAgICAgICAgJ1Nob3VsZCBzcGVjaWZ5IGEgImN1c3RvbVNoYXJkU3RyYXRlZ3kiIGZ1bmN0aW9uIHdoZW4gdXNpbmcgc2hhcmRTdHJhdGVneTogImN1c3RvbSInCiAgICAgICk7CiAgICB9"},"22596072":{"shouldShard":true,"shardIndex":7,"executors":8,"shardStrategy":"round-robin","customShardStrategy":"ZnVuY3Rpb24gKCkgewogICAgICB0aHJvdyBuZXcgRXJyb3IoCiAgICAgICAgJ1Nob3VsZCBzcGVjaWZ5IGEgImN1c3RvbVNoYXJkU3RyYXRlZ3kiIGZ1bmN0aW9uIHdoZW4gdXNpbmcgc2hhcmRTdHJhdGVneTogImN1c3RvbSInCiAgICAgICk7CiAgICB9"},"57661985":{"shouldShard":true,"shardIndex":8,"executors":8,"shardStrategy":"round-robin","customShardStrategy":"ZnVuY3Rpb24gKCkgewogICAgICB0aHJvdyBuZXcgRXJyb3IoCiAgICAgICAgJ1Nob3VsZCBzcGVjaWZ5IGEgImN1c3RvbVNoYXJkU3RyYXRlZ3kiIGZ1bmN0aW9uIHdoZW4gdXNpbmcgc2hhcmRTdHJhdGVneTogImN1c3RvbSInCiAgICAgICk7CiAgICB9"},"74312172":{"shouldShard":true,"shardIndex":3,"executors":8,"shardStrategy":"round-robin","customShardStrategy":"ZnVuY3Rpb24gKCkgewogICAgICB0aHJvdyBuZXcgRXJyb3IoCiAgICAgICAgJ1Nob3VsZCBzcGVjaWZ5IGEgImN1c3RvbVNoYXJkU3RyYXRlZ3kiIGZ1bmN0aW9uIHdoZW4gdXNpbmcgc2hhcmRTdHJhdGVneTogImN1c3RvbSInCiAgICAgICk7CiAgICB9"},"85401258":{"shouldShard":t
rue,"shardIndex":6,"executors":8,"shardStrategy":"round-robin","customShardStrategy":"ZnVuY3Rpb24gKCkgewogICAgICB0aHJvdyBuZXcgRXJyb3IoCiAgICAgICAgJ1Nob3VsZCBzcGVjaWZ5IGEgImN1c3RvbVNoYXJkU3RyYXRlZ3kiIGZ1bmN0aW9uIHdoZW4gdXNpbmcgc2hhcmRTdHJhdGVneTogImN1c3RvbSInCiAgICAgICk7CiAgICB9"},"96928968":{"shouldShard":true,"shardIndex":0,"executors":8,"shardStrategy":"round-robin","customShardStrategy":"ZnVuY3Rpb24gKCkgewogICAgICB0aHJvdyBuZXcgRXJyb3IoCiAgICAgICAgJ1Nob3VsZCBzcGVjaWZ5IGEgImN1c3RvbVNoYXJkU3RyYXRlZ3kiIGZ1bmN0aW9uIHdoZW4gdXNpbmcgc2hhcmRTdHJhdGVneTogImN1c3RvbSInCiAgICAgICk7CiAgICB9"}}
[INFO] 10 08 2019 10:58:59.963:DEBUG [middleware:source-files]: Requesting /base/node_modules/@clr/ui/clr-ui.css?3db758150e273cb5e4fb6f6ca7d2ba31be036245 /
[INFO] 10 08 2019 10:58:59.964:DEBUG [middleware:source-files]: Fetching /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/@clr/ui/clr-ui.css
...[INFO] 10 08 2019 10:59:00.059:DEBUG [middleware:source-files]: Fetching /build/mts/release/sb-26750517/vcd-ui/content/core/src/main/bootstrap.test.bundle.js

[INFO] 10 08 2019 10:59:00.060:DEBUG [web-server]: serving (cached): /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/web-animations-js/web-animations.min.js
[INFO] 10 08 2019 10:59:00.060:DEBUG [web-server]: serving (cached): /build/mts/release/sb-26750517/vcd-ui/content/core/src/main/bootstrap.test.bundle.js
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 445 of 462 SUCCESS (13 mins 20.993 secs / 3 mins 2.351 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 488 of 496 (skipped 8) SUCCESS (12 mins 31.592 secs / 3 mins 7.926 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 374 of 375 (skipped 1) SUCCESS (9 mins 18.06 secs / 1 min 30.408 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 433 of 455 (skipped 11) SUCCESS (17 mins 5.845 secs / 4 mins 10.347 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 361 of 439 (skipped 23) SUCCESS (20 mins 55.986 secs / 3 mins 59.233 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 278 of 543 (skipped 9) SUCCESS (0 secs / 1 min 56.317 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 321 of 345 (skipped 9) SUCCESS (15 mins 51.026 secs / 1 min 36.614 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 278 of 543 (skipped 9) SUCCESS (0 secs / 1 min 56.317 secs)
[INFO]
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 445 of 462 SUCCESS (13 mins 20.993 secs / 3 mins 2.351 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 488 of 496 (skipped 8) SUCCESS (12 mins 31.592 secs / 3 mins 7.926 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 374 of 375 (skipped 1) SUCCESS (9 mins 18.06 secs / 1 min 30.408 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 433 of 455 (skipped 11) SUCCESS (17 mins 5.845 secs / 4 mins 10.347 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 361 of 439 (skipped 23) SUCCESS (20 mins 55.986 secs / 3 mins 59.233 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 278 of 543 (skipped 9) SUCCESS (0 secs / 1 min 56.317 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 321 of 345 (skipped 9) SUCCESS (15 mins 51.026 secs / 1 min 36.614 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 278 of 543 (skipped 9) SUCCESS (0 secs / 1 min 56.317 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 0 of 1 SUCCESS (0 secs / 0 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 445 of 462 SUCCESS (13 mins 20.993 secs / 3 mins 2.351 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 488 of 496 (skipped 8) SUCCESS (12 mins 31.592 secs / 3 mins 7.926 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 374 of 375 (skipped 1) SUCCESS (9 mins 18.06 secs / 1 min 30.408 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 433 of 455 (skipped 11) SUCCESS (17 mins 5.845 secs / 4 mins 10.347 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 361 of 439 (skipped 23) SUCCESS (20 mins 55.986 secs / 3 mins 59.233 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 278 of 543 (skipped 9) SUCCESS (0 secs / 1 min 56.317 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 321 of 345 (skipped 9) SUCCESS (15 mins 51.026 secs / 1 min 36.614 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 278 of 543 (skipped 9) SUCCESS (0 secs / 1 min 56.317 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 1 of 1 SUCCESS (0 secs / 0.021 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 445 of 462 SUCCESS (13 mins 20.993 secs / 3 mins

This is the end of a not-so-successful run:

[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 445 of 462 SUCCESS (13 mins 20.993 secs / 3 mins 2.351 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 488 of 496 (skipped 8) SUCCESS (12 mins 31.592 secs / 3 mins 7.926 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 374 of 375 (skipped 1) SUCCESS (9 mins 18.06 secs / 1 min 30.408 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 433 of 455 (skipped 11) SUCCESS (17 mins 5.845 secs / 4 mins 10.347 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 361 of 439 (skipped 23) SUCCESS (20 mins 55.986 secs / 3 mins 59.233 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 508 of 543 (skipped 10) SUCCESS (17 mins 39.158 secs / 4 mins 51.437 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 321 of 345 (skipped 9) SUCCESS (15 mins 51.026 secs / 1 min 36.614 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 508 of 543 (skipped 10) SUCCESS (17 mins 39.158 secs / 4 mins 51.437 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 1 of 1 SUCCESS (0.082 secs / 0.021 secs)
[INFO] TOTAL: 2931 SUCCESS
[INFO] 10 08 2019 11:05:37.736:DEBUG [karma]: Run complete, exiting.
[INFO] 10 08 2019 11:05:37.737:DEBUG [launcher]: Disconnecting all browsers
[INFO] 10 08 2019 11:05:39.813:DEBUG [launcher]: Process ChromeHeadless exited with code 0
[INFO] 10 08 2019 11:05:39.814:DEBUG [temp-dir]: Cleaning temp dir /build/mts/release/sb-26750517/tmp/karma-22596072
[INFO] 10 08 2019 11:05:39.821:DEBUG [launcher]: Finished all browsers

One shard that seems to be absent is the following (with 490 tests), which had been logged earlier:

2019-08-10 10:37:26 gobuilds.Compile : [INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 270 of 490 (skipped 2) SUCCESS (0 secs / 53.286 secs)

That shard had not connected back to the karma server for over 7 minutes. Here's the log for that disconnect:

[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 488 of 496 (skipped 8) SUCCESS (12 mins 31.592 secs / 3 mins 7.926 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 374 of 375 (skipped 1) SUCCESS (9 mins 18.06 secs / 1 min 30.408 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 433 of 455 (skipped 11) SUCCESS (17 mins 5.845 secs / 4 mins 10.347 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 361 of 439 (skipped 23) SUCCESS (20 mins 55.986 secs / 3 mins 59.233 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 359 of 490 (skipped 2) SUCCESS (0 secs / 1 min 48.384 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 67 of 345 (skipped 4) SUCCESS (0 secs / 15.553 secs)
[INFO] 10 08 2019 10:46:59.205:DEBUG [HeadlessChrome 63.0.3205 (Mac OS X 10.9.5)]: Disconnected during run, waiting 300000ms for reconnecting.
[INFO] 10 08 2019 10:47:17.884:WARN [HeadlessChrome 63.0.3205 (Mac OS X 10.9.5)]: Disconnected (1 times)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5) ERROR
[INFO]   Disconnectedundefined
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 445 of 462 SUCCESS (13 mins 20.993 secs / 3 mins 2.351 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 488 of 496 (skipped 8) SUCCESS (12 mins 31.592 secs / 3 mins 7.926 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 374 of 375 (skipped 1) SUCCESS (9 mins 18.06 secs / 1 min 30.408 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 433 of 455 (skipped 11) SUCCESS (17 mins 5.845 secs / 4 mins 10.347 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 361 of 439 (skipped 23) SUCCESS (20 mins 55.986 secs / 3 mins 59.233 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 359 of 490 (skipped 2) DISCONNECTED (16 mins 12.71 secs / 1 min 48.384 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 67 of 345 (skipped 4) SUCCESS (0 secs / 15.553 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 445 of 462 SUCCESS (13 mins 20.993 secs / 3 mins 2.351 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 488 of 496 (skipped 8) SUCCESS (12 mins 31.592 secs / 3 mins 7.926 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 374 of 375 (skipped 1) SUCCESS (9 mins 18.06 secs / 1 min 30.408 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 433 of 455 (skipped 11) SUCCESS (17 mins 5.845 secs / 4 mins 10.347 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 361 of 439 (skipped 23) SUCCESS (20 mins 55.986 secs / 3 mins 59.233 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 359 of 490 (skipped 2) DISCONNECTED (16 mins 12.71 secs / 1 min 48.384 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 67 of 345 (skipped 4) SUCCESS (0 secs / 15.553 secs)
[INFO] [karma]: Restarting HeadlessChrome 63.0.3205 (Mac OS X 10.9.5) (1 of 2 attempts)
[INFO] [launcher]: Process ChromeHeadless exited with code 0
[INFO] [temp-dir]: Cleaning temp dir /build/mts/release/sb-26750517/tmp/karma-22596072
[INFO] [launcher]: Restarting ChromeHeadless
[INFO] [temp-dir]: Creating temp dir at /build/mts/release/sb-26750517/tmp/karma-22596072
[INFO] [launcher]: /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/puppeteer/.local-chromium/mac-499413/chrome-mac/Chromium.app/Contents/MacOS/Chromium --user-data-dir=/build/mts/release/sb-26750517/tmp/karma-22596072 --no-default-browser-check --no-first-run --disable-default-apps --disable-popup-blocking --disable-translate --disable-background-timer-throttling --disable-renderer-backgrounding --disable-device-discovery-notifications http://localhost:9876/?id=22596072 --headless --disable-gpu --remote-debugging-port=9222
[INFO] [web-server]: serving (cached): /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/karma/static/client.html
[INFO] [web-server]: serving (cached): /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/karma/static/karma.js
[INFO] [karma]: A browser has connected on socket m6X7yWpx9pIADeQhAAAO
[INFO] [web-server]: upgrade /socket.io/?EIO=3&transport=websocket&sid=m6X7yWpx9pIADeQhAAAO
[INFO] [HeadlessChrome 63.0.3205 (Mac OS X 10.9.5)]: Connected on socket m6X7yWpx9pIADeQhAAAO with id 22596072
[INFO] [framework:karma-parallel]: registering browser id 22596072 with aggregated browser id 372422729345 at shard index 7
[INFO] [launcher]: ChromeHeadless (id 22596072) captured in 2980.016 secs
[INFO] [middleware:karma]: custom files null null null
[INFO] [middleware:karma]: Serving static request /context.html
[INFO] [web-server]: serving (cached): /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/karma/static/context.html
[INFO] [web-server]: serving (cached): /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/karma/static/context.js
[INFO] [middleware:source-files]: Requesting /base/node_modules/jasmine-core/lib/jasmine-core/jasmine.js?916005cc407925f4764668d61d04888d59258f5d /
[INFO] [middleware:source-files]: Fetching /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/jasmine-core/lib/jasmine-core/jasmine.js
[INFO] [web-server]: serving (cached): /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/jasmine-core/lib/jasmine-core/jasmine.js
[INFO] � [middleware:parallel]: �[39minterpolating parallel shard data map in script. Data: {"1394027":{"shouldShard":true,"shardIndex":4,"executors":8,"shardStrategy":"round-robin","customShardStrategy":"ZnVuY3Rpb24gKCkgewogICAgICB0aHJvdyBuZXcgRXJyb3IoCiAgICAgICAgJ1Nob3VsZCBzcGVjaWZ5IGEgImN1c3RvbVNoYXJkU3RyYXRlZ3kiIGZ1bmN0aW9uIHdoZW4gdXNpbmcgc2hhcmRTdHJhdGVneTogImN1c3RvbSInCiAgICAgICk7CiAgICB9"},"18315375":{"shouldShard":true,"shardIndex":2,"executors":8,"shardStrategy":"round-robin","customShardStrategy":"ZnVuY3Rpb24gKCkgewogICAgICB0aHJvdyBuZXcgRXJyb3IoCiAgICAgICAgJ1Nob3VsZCBzcGVjaWZ5IGEgImN1c3RvbVNoYXJkU3RyYXRlZ3kiIGZ1bmN0aW9uIHdoZW4gdXNpbmcgc2hhcmRTdHJhdGVneTogImN1c3RvbSInCiAgICAgICk7CiAgICB9"},"21084253":{"shouldShard":true,"shardIndex":1,"executors":8,"shardStrategy":"round-robin","customShardStrategy":"ZnVuY3Rpb24gKCkgewogICAgICB0aHJvdyBuZXcgRXJyb3IoCiAgICAgICAgJ1Nob3VsZCBzcGVjaWZ5IGEgImN1c3RvbVNoYXJkU3RyYXRlZ3kiIGZ1bmN0aW9uIHdoZW4gdXNpbmcgc2hhcmRTdHJhdGVneTogImN1c3RvbSInCiAgICAgICk7CiAgICB9"},"22596072":{"shouldShard":true,"shardIndex":7,"executors":8,"shardStrategy":"round-robin","customShardStrategy":"ZnVuY3Rpb24gKCkgewogICAgICB0aHJvdyBuZXcgRXJyb3IoCiAgICAgICAgJ1Nob3VsZCBzcGVjaWZ5IGEgImN1c3RvbVNoYXJkU3RyYXRlZ3kiIGZ1bmN0aW9uIHdoZW4gdXNpbmcgc2hhcmRTdHJhdGVneTogImN1c3RvbSInCiAgICAgICk7CiAgICB9"},"74312172":{"shouldShard":true,"shardIndex":3,"executors":8,"shardStrategy":"round-robin","customShardStrategy":"ZnVuY3Rpb24gKCkgewogICAgICB0aHJvdyBuZXcgRXJyb3IoCiAgICAgICAgJ1Nob3VsZCBzcGVjaWZ5IGEgImN1c3RvbVNoYXJkU3RyYXRlZ3kiIGZ1bmN0aW9uIHdoZW4gdXNpbmcgc2hhcmRTdHJhdGVneTogImN1c3RvbSInCiAgICAgICk7CiAgICB9"},"85401258":{"shouldShard":true,"shardIndex":6,"executors":8,"shardStrategy":"round-robin","customShardStrategy":"ZnVuY3Rpb24gKCkgewogICAgICB0aHJvdyBuZXcgRXJyb3IoCiAgICAgICAgJ1Nob3VsZCBzcGVjaWZ5IGEgImN1c3RvbVNoYXJkU3RyYXRlZ3kiIGZ1bmN0aW9uIHdoZW4gdXNpbmcgc2hhcmRTdHJhdGVneTogImN1c3RvbSInCiAgICAgICk7CiAgICB9"},"96928968":{"shouldShard":true,"shardIndex":0,"executors":8,
"shardStrategy":"round-robin","customShardStrategy":"ZnVuY3Rpb24gKCkgewogICAgICB0aHJvdyBuZXcgRXJyb3IoCiAgICAgICAgJ1Nob3VsZCBzcGVjaWZ5IGEgImN1c3RvbVNoYXJkU3RyYXRlZ3kiIGZ1bmN0aW9uIHdoZW4gdXNpbmcgc2hhcmRTdHJhdGVneTogImN1c3RvbSInCiAgICAgICk7CiAgICB9"}}
[INFO] [middleware:source-files]: Requesting /base/node_modules/jasmine-ajax/lib/mock-ajax.js?df88a27d939f0dd46c1dc30c36edfe6d7e053d60 /
[INFO] [middleware:source-files]: Fetching /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/jasmine-ajax/lib/mock-ajax.js
[INFO] [middleware:source-files]: Requesting /base/node_modules/karma-jasmine/lib/boot.js?945a38bf4e45ad2770eb94868231905a04a0bd3e /
[INFO] [middleware:source-files]: Fetching /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/karma-jasmine/lib/boot.js
[INFO] [middleware:source-files]: Requesting /base/node_modules/karma-jasmine/lib/adapter.js?7a813cc290d592e664331c573a1a796192cdd1ad /
[INFO] [middleware:source-files]: Fetching /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/karma-jasmine/lib/adapter.js
[INFO] [middleware:source-files]: Requesting /base/node_modules/@clr/ui/clr-ui.css?3db758150e273cb5e4fb6f6ca7d2ba31be036245 /
[INFO] [middleware:source-files]: Fetching /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/@clr/ui/clr-ui.css
[INFO] [middleware:source-files]: Requesting /base/dist/styles/themes/theme-default.css?ac498c850ea4ef089e799d506cdbc2c4ec0e8de2 /
[INFO] [middleware:source-files]: Fetching /build/mts/release/sb-26750517/vcd-ui/content/core/dist/styles/themes/theme-default.css
[INFO] [web-server]: serving (cached): /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/jasmine-ajax/lib/mock-ajax.js
[INFO] [web-server]: serving (cached): /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/karma-jasmine/lib/boot.js
[INFO] [web-server]: serving (cached): /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/karma-jasmine/lib/adapter.js
[INFO] [web-server]: serving (cached): /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/@clr/ui/clr-ui.css
[INFO] [web-server]: serving (cached): /build/mts/release/sb-26750517/vcd-ui/content/core/dist/styles/themes/theme-default.css
[INFO] [middleware:source-files]: Requesting /base/node_modules/systemjs/dist/system.js?9b219a5749355e16d09b90e6e93e3db047c3b8be /
[INFO] [middleware:source-files]: Fetching /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/systemjs/dist/system.js
[INFO] [middleware:source-files]: Requesting /base/node_modules/core-js/client/shim.min.js?17dee087101c3d1b07880df4b15dbcfa9b208b30 /
[INFO] [middleware:source-files]: Fetching /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/core-js/client/shim.min.js
[INFO] [middleware:source-files]: Requesting /base/node_modules/mutationobserver-shim/dist/mutationobserver.min.js?1ad83990b11a736e06d4114f86941190cd115a6d /
[INFO] [middleware:source-files]: Fetching /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/mutationobserver-shim/dist/mutationobserver.min.js
[INFO] [middleware:source-files]: Requesting /base/node_modules/@webcomponents/custom-elements/custom-elements.min.js?736577d836e7526cb79c7b4eae056bd9d868948f /
[INFO] [middleware:source-files]: Fetching /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/@webcomponents/custom-elements/custom-elements.min.js
[INFO] [web-server]: serving (cached): /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/systemjs/dist/system.js
[INFO] [web-server]: serving (cached): /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/core-js/client/shim.min.js
[INFO] [web-server]: serving (cached): /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/mutationobserver-shim/dist/mutationobserver.min.js
[INFO] [web-server]: serving (cached): /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/@webcomponents/custom-elements/custom-elements.min.js
[INFO] [middleware:source-files]: Requesting /base/src/main/bootstrap.test.bundle.js?11ce64288816aff4f2c74bcde464782c0484ae5e /
[INFO] [middleware:source-files]: Fetching /build/mts/release/sb-26750517/vcd-ui/content/core/src/main/bootstrap.test.bundle.js
[INFO] [middleware:source-files]: Requesting /base/node_modules/web-animations-js/web-animations.min.js?83042534752a6d9b0a4a086517e19230416feba4 /
[INFO] [middleware:source-files]: Fetching /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/web-animations-js/web-animations.min.js
[INFO] [web-server]: serving (cached): /build/mts/release/sb-26750517/vcd-ui/content/core/src/main/bootstrap.test.bundle.js
[INFO] [web-server]: serving (cached): /build/mts/release/sb-26750517/vcd-ui/content/core/node_modules/web-animations-js/web-animations.min.js
[INFO] 
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 445 of 462 SUCCESS (13 mins 20.993 secs / 3 mins 2.351 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 488 of 496 (skipped 8) SUCCESS (12 mins 31.592 secs / 3 mins 7.926 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 374 of 375 (skipped 1) SUCCESS (9 mins 18.06 secs / 1 min 30.408 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 433 of 455 (skipped 11) SUCCESS (17 mins 5.845 secs / 4 mins 10.347 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 361 of 439 (skipped 23) SUCCESS (20 mins 55.986 secs / 3 mins 59.233 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 0 of 543 SUCCESS (0 secs / 0 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 67 of 345 (skipped 4) SUCCESS (0 secs / 15.553 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 0 of 543 SUCCESS (0 secs / 0 secs)
[INFO] 10 08 2019 10:48:19.841:DEBUG [karma]: A browser has connected on socket iUO7OrOOlA2q-QboAAAP
[INFO] 10 08 2019 10:48:19.999:DEBUG [web-server]: upgrade /socket.io/?EIO=3&transport=websocket&sid=iUO7OrOOlA2q-QboAAAP
[INFO] 10 08 2019 10:48:20.272:DEBUG [HeadlessChrome 63.0.3205 (Mac OS X 10.9.5)]: Reconnected on iUO7OrOOlA2q-QboAAAP.
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 445 of 462 SUCCESS (13 mins 20.993 secs / 3 mins 2.351 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 488 of 496 (skipped 8) SUCCESS (12 mins 31.592 secs / 3 mins 7.926 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 374 of 375 (skipped 1) SUCCESS (9 mins 18.06 secs / 1 min 30.408 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 433 of 455 (skipped 11) SUCCESS (17 mins 5.845 secs / 4 mins 10.347 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 361 of 439 (skipped 23) SUCCESS (20 mins 55.986 secs / 3 mins 59.233 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 0 of 543 SUCCESS (0 secs / 0 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 68 of 345 (skipped 4) SUCCESS (0 secs / 16.488 secs)
[INFO] HeadlessChrome 63.0.3205 (Mac OS X 10.9.5): Executed 0 of 543 SUCCESS (0 secs / 0 secs)


And it looks like it was replaced with a duplicate of the shard with 543 tests (the above logs show the duplicate entry for that shard).

  • If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem

We have not been able to create a minimal demo

  • What is the expected behavior?

When a shard disconnects, it should either re-run the entire shard or restart from the test that had a problem.

  • Please tell us about your environment:
  • version: 0.3.1
  • Browser: [Headless Chrome ]
  • Language: [TypeScript]
  • Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

I can provide a full log of different runs if it helps. I can also try any suggestions. I understand this is not the best bug submission but it does look like there's a problem when restarting a shard after the browser disconnects.

Should fail the test cases split to a browser that is disconnected

  • **I'm submitting a ... **

    • bug report
    • feature request
  • Do you want to request a feature or report a bug?

bug

  • What is the current behavior?

Today my CI/CD pipeline failed unexpectedly after merging a branch into master. It shouldn't have, as the code run by karma in that branch is identical before and after merging to master.

I looked into the stdout, and found this error:

11 04 2019 23:34:59.265:WARN [HeadlessChrome 70.0.3538 (Linux 0.0.0)]: Disconnected (0 times)reconnect failed before timeout of 60000ms (ping timeout)
HeadlessChrome 70.0.3538 (Linux 0.0.0) ERROR

To my surprise, karma-parallel didn't count it as a failure.

And after that, my TOTAL test count was altered mysteriously.

TOTAL: 282 SUCCESS

HeadlessChrome 70.0.3538 (Linux 0.0.0): Executed 140 of 140 SUCCESS (3 mins 16.315 secs / 3 mins 18.967 secs)
HeadlessChrome 70.0.3538 (Linux 0.0.0): Executed 1 of 1 SUCCESS (0.002 secs / 0.002 secs)
HeadlessChrome 70.0.3538 (Linux 0.0.0): Executed 141 of 141 SUCCESS (5 mins 7.221 secs / 5 mins 9.639 secs)
TOTAL: 282 SUCCESS

I should have more than 450 test cases in total.

  • If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem

    • Try to create a test case that stops a browser instance with a debugger keyword, and open that browser instance's Chrome DevTools so that the browser is effectively disconnected and unresponsive; or stop it with a test case that has a very long running time (use the third argument to Jasmine's it function)
  • What is the expected behavior?

    • Test cases should fail
    • Total test cases shouldn't change
  • Please tell us about your environment:

    • version: 2.0.0-beta.X
    • Browser: [ChromeHeadless 70.0.3538 in Docker with --no-sandbox]
    • Language: [TypeScript 2.6]
    • karma config:

    browserDisconnectTolerance: 5,
    frameworks: ['parallel', 'jasmine', '@angular/cli'],
    plugins: [
      require('karma-parallel'),
      require('karma-jasmine'),
      require('karma-chrome-launcher'),
      require('karma-remap-istanbul'),
      require('karma-coverage-istanbul-reporter'),
      require('@angular/cli/plugins/karma'),
      require('karma-spec-reporter'),
    ],
    parallelOptions: {
      executors: 3, // Defaults to cpu-count - 1
      shardStrategy: 'round-robin'
    },
    customLaunchers: {
      ChromeCustom: {
        base: 'ChromeHeadless',
        flags: ['--no-sandbox']
      }
    },
    browsers: ['ChromeCustom'],
    singleRun: true,
    sourcemaps: true,
    logLevel: config.LOG_INFO,
    reporters: ['spec']
  • Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

Sharding running same test multiple times in a single run

  • **I'm submitting a ... **

    • bug report
    • feature request
  • Do you want to request a feature or report a bug?
    Feature.

  • What is the current behavior?
    I ❤️ karma-parallel; though when I run a focused spec (with an fdescribe or the like), it takes longer than if it were run on just a single executor, because it runs tests more than once.

  • If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
    I ran an fdescribe in jasmine that ultimately had 4 specs. Rather than running just the 4 (I think I have 7 executors based on my cores) it ran 10 specs. Perhaps a rounding issue in the shard strategy?

HeadlessChrome 0.0.0 (Mac OS X 10.13.6): Executed 1 of 239 (skipped 238) SUCCESS (0.536 secs / 0.012 secs)
HeadlessChrome 0.0.0 (Mac OS X 10.13.6): Executed 1 of 555 (skipped 554) SUCCESS (1.154 secs / 0.013 secs)
HeadlessChrome 0.0.0 (Mac OS X 10.13.6): Executed 1 of 393 (skipped 392) SUCCESS (0.747 secs / 0.013 secs)
HeadlessChrome 0.0.0 (Mac OS X 10.13.6): Executed 1 of 372 (skipped 371) SUCCESS (0.804 secs / 0.012 secs)
HeadlessChrome 0.0.0 (Mac OS X 10.13.6): Executed 4 of 350 (skipped 346) SUCCESS (0.871 secs / 0.207 secs)
HeadlessChrome 0.0.0 (Mac OS X 10.13.6): Executed 1 of 387 (skipped 386) SUCCESS (0.785 secs / 0.014 secs)
HeadlessChrome 0.0.0 (Mac OS X 10.13.6): Executed 1 of 683 (skipped 682) SUCCESS (1.396 secs / 0.009 secs)
TOTAL: 10 SUCCESS
  • What is the expected behavior?
    One of two things: either the other 6 executors shouldn't run the superfluous spec, or I'd expect it to spread the 4 specs over 4 executors and leave the remaining 3 idle.

Either that or it'd be nice to set a threshold in the configuration where it wouldn't parallelize the specs at all. For example:

parallelOptions: {
  shardThreshold: 50 /* anything with 50 specs or fewer wouldn't parallelize at all */
}
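As context for the rounding question, here is a minimal sketch of deterministic round-robin sharding (hypothetical names, not karma-parallel's actual implementation): every executor numbers the describe blocks in the order they are encountered and runs only the ones whose index maps to its own id.

```javascript
// Hypothetical sketch of round-robin sharding, not karma-parallel's real code.
// Each executor sees the same describe blocks in the same order, so the
// decision is deterministic without any cross-browser coordination.
function makeShardDecider(executorId, executorCount) {
  let nextIndex = 0;
  return function shouldRun() {
    const owner = nextIndex % executorCount; // block i belongs to executor i % N
    nextIndex += 1;
    return owner === executorId;
  };
}

// Each of 3 executors would instantiate its own decider with its id:
const deciders = [0, 1, 2].map((id) => makeShardDecider(id, 3));
```

Under a scheme like this, a focused (fdescribe) block still runs wherever it happens to be "owned", so several executors can each execute their own focused blocks; whether that explains the count of 10 above is speculation.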
  • Please tell us about your environment:
  • version: 0.2.9
  • Browser: all
  • Language: all
  • Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

TypeError when running karma in singleRun:false and aggregating reporters

Note: for support questions, please use stackoverflow. This repository's issues are reserved for feature requests and bug reports.

  • **I'm submitting a ... **

    • bug report
  • What is the current behavior?

If I run with singleRun: false while aggregating a reporter, like junit or istanbul, then on subsequent runs I receive a TypeError.

21 02 2018 10:24:57.816:ERROR [karma]: TypeError: Cannot read property 'HeadlessChrome 0.0.0 (Mac OS X 10.13.3)' of null
    at AggregatedCoverageReporter.onBrowserStart (../node_modules/karma-parallel/lib/reporter.js:103:29)
    at Server.<anonymous> (../node_modules/karma/lib/events.js:13:22)
    at emitTwo (events.js:131:20)
    at Server.emit (events.js:214:7)
    at Browser.onStart (../node_modules/karma/lib/browser.js:133:13)
    at Socket.<anonymous> (../node_modules/karma/lib/events.js:13:22)
    at emitOne (events.js:116:13)
    at Socket.emit (events.js:211:7)
    at ../node_modules/karma/node_modules/socket.io/lib/socket.js:513:12
    at _combinedTickCallback (internal/process/next_tick.js:131:7)
    at process._tickCallback (internal/process/next_tick.js:180:9)
  • What is the expected behavior?

It should not have this error. It should be able to run multiple times without failing.

  • What is the motivation / use case for changing the behavior?

To allow tdd while aggregating reporters.

  • Please tell us about your environment:
  • version: 0.2.4

Support Karma 4

  • **I'm submitting a ... **

    • bug report
    • [x ] feature request
  • Do you want to request a feature or report a bug?
    Feature

  • What is the current behavior?
    Karma 4 throws errors around toHaveFound...
    TypeError: expect(...).toHaveFound is not a function
    TypeError: expect(...).toHaveFoundOne is not a function
    Every test with these calls is failing

  • If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem

  • What is the expected behavior?
    Karma 4 support

  • Please tell us about your environment:

  • version: 2.0.0-beta.X
  • Browser: [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ]
  • Language: [all | TypeScript X.X | ES6/7 | ES5 | Dart]
  • Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

afterAll() is executed in all parallel executors instead of being executed only in the executor running the spec

**I'm submitting a ... **

  • bug report
  • feature request
  • support request => Please do not submit support request here, see note at the top of this template.

What is the current behavior?
When config.parallelOptions.executors is > 1, the afterAll() functions of tests are called in all executors.
Adding a console.log in the afterAll confirms this behavior.

What is the expected behavior?
afterAll() function should be called only for one of the executors in which the whole spec is executed.

What is the motivation / use case for changing the behavior?
We have a few tests that do cleanup in afterAll(), but when run in parallel they started failing with a TypeError:

"An error was thrown in afterAll\nTypeError: Cannot read property 'destroy' of undefined\n"

After debugging the issue, it turns out that the beforeAll for those tests was not called in any executor but one.

It is also worth mentioning that this issue remained hidden for some time because older versions of Jasmine did not fail the tests if an exception was thrown in afterAll().
We recently bumped Jasmine to 2.8.0 and then this issue manifested itself.

Please tell us about your environment:

  • version: 0.2.4
  • Jasmine 2.8.0, Karma 2.0.0
  • Browser: [Chrome 64.0.3282.186]
  • Language: [TypeScript 2.0]
  • Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

It seems this was a copy-paste issue because in:

ctx.before = wrap(ctx.before, false, false, false);
ctx.beforeAll = wrap(ctx.beforeAll, false, false, false);
ctx.beforeEach = wrap(ctx.beforeEach, false, false, false);
ctx.beforeAll = wrap(ctx.beforeAll, false, false, false);
ctx.after = wrap(ctx.after, false, false, false);
ctx.afterEach = wrap(ctx.afterEach, false, false, false);

beforeAll is duplicated and afterAll is missing. I tried the fix locally and can confirm that adding the afterAll wrapping fixes the issue.
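The fix can be demonstrated with a self-contained stand-in (wrap and patchContext here are illustrative stubs, not the library's real helpers): once afterAll is wrapped like the other hooks, it becomes a no-op in executors that skip the suite.

```javascript
// Stand-in for the library's wrap(): turns a hook into a no-op when this
// executor is not running the current suite (shouldRun is an assumption here).
function wrap(originalHook, shouldRun) {
  return function (fn) {
    if (shouldRun()) {
      originalHook(fn);
    }
  };
}

// ctx mimics the global test context; the fix is wrapping afterAll
// instead of wrapping beforeAll twice.
function patchContext(ctx, shouldRun) {
  ctx.beforeAll = wrap(ctx.beforeAll, shouldRun);
  ctx.beforeEach = wrap(ctx.beforeEach, shouldRun);
  ctx.afterEach = wrap(ctx.afterEach, shouldRun);
  ctx.afterAll = wrap(ctx.afterAll, shouldRun); // the previously missing line
  return ctx;
}
```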

Random test failures during CI process

  • **I'm submitting a ... **

    • bug report
    • feature request
  • Do you want to request a feature or report a bug?
    Not sure this is a bug, just need some information.

  • What is the current behavior?
    When running a Jasmine test suite in Amazon CodeBuild we get random failures. There are no specific test failures, but the HeadlessChrome instance disconnects midway through the test suite. This does not happen when running the tests from a developer machine.

HeadlessChrome 77.0.3865 (Linux 0.0.0) ERROR

623 | Disconnected, because no message in 30000 ms.
624 | HeadlessChrome 77.0.3865 (Linux 0.0.0) ERROR
625 | Disconnected, because no message in 30000 ms.
626 | HeadlessChrome 77.0.3865 (Linux 0.0.0): Executed 95 of 95 DISCONNECTED (2 mins 16.668 secs / 1 min 46.294 secs)
627 | HeadlessChrome 77.0.3865 (Linux 0.0.0): Executed 117 of 117 SUCCESS (1 min 38.204 secs / 1 min 37.86 secs)
628 | HeadlessChrome 77.0.3865 (Linux 0.0.0): Executed 100 of 147 SUCCESS (0 secs / 2 mins 12.715 secs)
629 | HeadlessChrome 77.0.3865 (Linux 0.0.0) ERROR

  • If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem

  • What is the expected behavior?

  • Please tell us about your environment:

  • version: 2.0.0-beta.X
  • Browser: [Chrome 80 ]
  • Language: [TypeScript 3.2.4 ]
  • Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

Karma Conf:
module.exports = function(config) {
  config.set({
    basePath: '',
    frameworks: ['parallel', 'jasmine', '@angular-devkit/build-angular'],
    plugins: [
      require('karma-jasmine'),
      require('karma-chrome-launcher'),
      require('karma-jasmine-html-reporter'),
      require('karma-coverage-istanbul-reporter'),
      require('@angular-devkit/build-angular/plugins/karma'),
      require('karma-junit-reporter'),
      require('karma-spec-reporter'),
      require('karma-parallel')
    ],
    parallelOptions: {
      executors: 3, // Defaults to cpu-count - 1
      shardStrategy: 'round-robin'
    },
    client: {
      clearContext: false // leave Jasmine Spec Runner output visible in browser
    },
    coverageIstanbulReporter: {
      dir: require('path').join(__dirname, '../coverage'),
      reports: ['html', 'lcovonly'],
      fixWebpackSourcePaths: true
    },
    angularCli: {
      environment: 'dev'
    },
    reporters: ['junit', 'coverage-istanbul', 'spec', 'progress', 'kjhtml'],
    specReporter: {
      suppressSkipped: true
    },
    junitReporter: {
      outputDir: './junitReport',
      suite: '',
      useBrowserName: false
    },
    port: 9876,
    colors: true,
    logLevel: config.LOG_INFO,
    autoWatch: true,
    browsers: ['Chrome', 'HeadlessChrome'],
    customLaunchers: {
      HeadlessChrome: {
        base: 'Chrome',
        flags: ['--headless', '--disable-gpu', '--remote-debugging-port=9222', '--no-sandbox']
      }
    },
    singleRun: false,
    failOnSkippedTests: true
  });
};

Jasmine test timeout is ignored

  • **I'm submitting a ... **

    • bug report
    • feature request
  • Do you want to request a feature or report a bug?

Report a bug

  • What is the current behavior?

When a Jasmine test has a test timeout like this:

it("should fail quickly", () => {
  return new Promise((res, rej) => {});
}, 1);

The resulting failure uses the default Jasmine timeout of 5000 ms instead of the specified timeout of 1 ms.

  • If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem

The problem is that the wrapping function ignores any parameter after the test definition:

https://github.com/joeljeske/karma-parallel/blob/master/lib/karma-parallelizer.js#L111

I can try to prepare a repro if needed.

  • What is the expected behavior?

The custom timeout specified for the test should be used.

  • Please tell us about your environment:
  • version: 0.3.1
  • Browser: all
  • Language: all
  • Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

The suggested fix is to pass along all arguments (extracted via arguments perhaps) to the wrapped function.
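A sketch of that suggestion (wrapIt and shouldRunSpec are hypothetical names, not the library's actual code): spread the remaining arguments through so Jasmine still receives the per-spec timeout.

```javascript
// Hypothetical sketch of the suggested fix: forward every argument after
// the spec body (e.g. Jasmine's per-spec timeout) to the wrapped `it`.
function wrapIt(realIt, shouldRunSpec) {
  return function (description, fn, ...rest) {
    // Specs this executor doesn't own become no-ops but are still registered.
    const body = shouldRunSpec(description) ? fn : function () {};
    return realIt(description, body, ...rest); // ...rest carries the timeout
  };
}
```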

Developing Code Coverage Support.

  • **I'm submitting a ... **

    • feature request
  • Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

Right now it is documented that code coverage support currently isn't working and that a PR can be opened. I'd be highly interested in getting it functioning but have never looked much at karma plugin development before.

Before I begin working on this do you have any ideas why it might not be functioning and could you provide some insight as to how I might fix it?

No calculation of all passed tests on Kubernetes job

  • **I'm submitting a ... **

    • bug report
    • feature request
  • Do you want to request a feature or report a bug?
    I expect that all passed tests are shown as a number; I have around 2k tests

  • What is the current behavior?
    I get a disconnection during the run; the tests seem to be green, but there is no total calculation of passed tests


26 05 2020 14:35:37.039:DEBUG [HeadlessChrome 77.0.3865 (Linux 0.0.0)]: Disconnected during run, waiting 100000ms for reconnecting.
26 05 2020 14:35:37.039:DEBUG [HeadlessChrome 77.0.3865 (Linux 0.0.0)]: EXECUTING -> EXECUTING_DISCONNECTED
26 05 2020 14:37:12.015:WARN [HeadlessChrome 77.0.3865 (Linux 0.0.0)]: Disconnected (0 times)reconnect failed before timeout of 100000ms (transport close)
26 05 2020 14:37:12.015:DEBUG [HeadlessChrome 77.0.3865 (Linux 0.0.0)]: EXECUTING_DISCONNECTED -> DISCONNECTED
HeadlessChrome 77.0.3865 (Linux 0.0.0) ERROR
Disconnectedreconnect failed before timeout of 100000ms (transport close)
26 05 2020 14:37:12.016:INFO [karma-server]: Restarting HeadlessChrome 77.0.3865 (Linux 0.0.0) (1 of 3 attempts)
26 05 2020 14:37:12.017:DEBUG [launcher]: CAPTURED -> RESTARTING
26 05 2020 14:37:12.022:DEBUG [launcher]: Process ChromeHeadless exited with code null and signal SIGTERM
26 05 2020 14:37:12.023:DEBUG [temp-dir]: Cleaning temp dir /tmp/karma-10228557
26 05 2020 14:37:12.201:DEBUG [launcher]: RESTARTING -> FINISHED
26 05 2020 14:37:12.202:DEBUG [launcher]: Restarting ChromeHeadless
26 05 2020 14:37:12.202:DEBUG [launcher]: FINISHED -> BEING_CAPTURED
26 05 2020 14:37:12.202:DEBUG [temp-dir]: Creating temp dir at /tmp/karma-10228557

HeadlessChrome 77.0.3865 (Linux 0.0.0): Executed 1 of 1 SUCCESS (0.009 secs / 0.002 secs)
HeadlessChrome 77.0.3865 (Linux 0.0.0): Executed 1 of 1 SUCCESS (0.059 secs / 0.017 secs)
HeadlessChrome 77.0.3865 (Linux 0.0.0): Executed 1 of 1 SUCCESS (0.019 secs / 0.002 secs)
TOTAL: 3 SUCCESS

  • Please tell us about your environment:
  • version: latest
  • Browser: [Chrome headless 77.x]
  • Language: [ES6/7]

karma.conf
'use strict';

// use default configuration as a base
var shared = require('./karma.conf.js');

// override base configuration
module.exports = function (config) {

  shared(config);

  config.set({

    plugins: [
      ...config.plugins,
    ],

    // web server port
    port: 32080,

    browsers: ['ChromeHeadlessNoSandbox'],
    customLaunchers: {
      ChromeHeadlessNoSandbox: {
        base: 'ChromeHeadless',
        flags: [
          '--no-sandbox',
          '--disable-dev-shm-usage'
        ]
      }
    },
    captureTimeout: 210000,
    browserDisconnectTimeout: 100000,
    browserNoActivityTimeout: 400000,
    browserDisconnectTolerance: 3,
    singleRun: true,
    sourcemap: false,
    colors: true,
    exclude: [],
    preprocessors: [],
    frameworks: ['parallel', 'jasmine', 'jasmine-matchers']
  });

};

No support for Circle CI

  • **I'm submitting a ... **

    • feature request
  • Do you want to request a feature or report a bug? feature

  • What is the current behavior?

The karma-parallel package fails to execute on the Circle CI 2.0

  • If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem

I am using an Angular 7 project. Update your karma.conf.js file as per the karma-parallel documentation, then raise a PR and build your project using Circle CI. The parallelOptions.executors value should be > 1 (I used 2 as an example)

  • What is the expected behavior?

Should run the tests in parallel and reduce the build time

Looking for fixes/support for Circle CI

Script Error while executing karma tests

While executing karma tasks, I get a Script Error.

Please see the below.

Running "karma:test.accounting" (karma) task
08 06 2020 20:39:55.104:INFO [framework:karma-parallel]: sharding specs across 2 browsers
08 06 2020 20:39:56.098:INFO [karma]: Karma v0.13.22 server started at http://localhost:9876/
08 06 2020 20:39:56.102:INFO [launcher]: Starting browser Chrome
08 06 2020 20:39:56.126:INFO [launcher]: Starting browser Chrome
08 06 2020 20:39:57.042:INFO [HeadlessChrome 83.0.4103 (Windows 10.0.0)]: Connected on socket 1QBL34ljfhxVapqsAAAA with id 88182158
HeadlessChrome 83.0.4103 (Windows 10.0.0) ERROR
Script error.

08 06 2020 20:39:58.876:INFO [HeadlessChrome 83.0.4103 (Windows 10.0.0)]: Connected on socket yNmXYId4QGkY8bGGAAAB with id 18694400
HeadlessChrome 83.0.4103 (Windows 10.0.0) ERROR
Script error.

Warning: Task "karma:test.accounting" failed. Use --force to continue.

Below is my karma config file.

frameworks: [ 'parallel', 'jasmine', 'jasmine-matchers' ],

parallelOptions: { executors: 2, shardStrategy: 'round-robin' },

Need help to get code coverage

  • **I'm submitting a ... **

    • bug report
    • feature request
  • Do you want to request a feature or report a bug?
    Need your help to share the sample karma.conf.js to get aggregated code coverage while using karma-parallel.

  • What is the current behavior?
    I tried the suggested parameters in karma.conf.js, but I am getting code coverage for the last thread only. The aggregated option is not working. Please suggest.

  • Please tell us about your environment:

  • version: Used Latest Version of karma-parallel
  • Browser: Chrome
  • Language: Angular 7
  • Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

Hi

  • **I'm submitting a ... **

    • bug report
    • feature request
  • Do you want to request a feature or report a bug?

  • What is the current behavior?

  • If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem

  • What is the expected behavior?

  • Please tell us about your environment:

  • version: 2.0.0-beta.X
  • Browser: [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ]
  • Language: [all | TypeScript X.X | ES6/7 | ES5 | Dart]
  • Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

Disconnect timeouts occurring for multiple executors - OK for single executor

  • bug report
  • feature request
  • support request => Please do not submit support request here, see note at the top of this template.
  • What is the current behavior?

I'm seeing some odd behavior that I can't explain.

My tests pass fine with a single executor.

With multiple executors (and a test suite of 3000+ mocha tests), I'm noticing that sometimes the browsers hang and tests fail due to the Karma no-activity timeout.

I also noticed that when multiple chrome instances are running, the console output stalls until I focus one of them, then it begins spewing output again.

Is this a known issue? Or is there a Karma config that can prevent this?

  • What is the expected behavior?

Tests should pass in a sharded environment the same way as non-sharded.

  • Please tell us about your environment:
  • version: 0.2.5
  • Browser: [Chrome 57-65]
  • Language: [all]

And the relevant portion of the karma config:

  frameworks: ['parallel', 'mocha-debug', 'mocha'],

    parallelOptions: {
      executors: 2, // Defaults to cpu-count - 1
      shardStrategy: 'round-robin',
    },

Getting an error after Basic setup about unexpected token % in JSON at position 0

  • **I'm submitting a ... **

    • bug report
    • feature request
  • Do you want to request a feature or report a bug?
    I would like to report a bug.

  • What is the current behavior?
    Currently, after installing the package as the dev dependency and setting up everything in karma.config file as suggested in the docs, and after running the tests, I get the following error message:
    HeadlessChrome 67.0.3396 (Mac OS X 10.13.3) ERROR
    Uncaught SyntaxError: Unexpected token % in JSON at position 0
    at http://localhost:9876/context.html:1

  • If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
    After basic setup - running the karma tests I get the error 'Unexpected token % in JSON at position 0'

  • What is the expected behavior?
    No errors

  • Please tell us about your environment:

  • version: 5.2.9
  • Browser: Chrome Headless
  • Language: TypeScript 2.3.4
  • Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)
    Thank you for looking into this. Please do ask if I can give any additional information that can help to resolve this issue.

Aggregated Log Output

I'm submitting a ...

  • bug report
  • feature request

What is the current behavior?
When using the brief reporter, none of the output is aggregated. It showed the output for 1 of the 6 executors, not all.

I tried the option: aggregatedReporterTest: [(reporter)=>regex=/coverage|brief|istanbul|junit/i]

And other permutations of the reporter name

If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem

What is the expected behavior?
All output should be aggregated, so that I can see the end result for ALL instances

Please tell us about your environment:

  • version:
    "karma-parallel": "^0.3.1",
    "karma-brief-reporter": "0.0.7",

  • Browser: [ Chrome XX ]

  • Language: [TypeScript X.X | ES6/7 ]
    Node v10.15.3
    NPM v6.4.1

Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

Karma Config I was using

module.exports = function (config) {
    config.set({
        basePath: "",
        frameworks: ["parallel", "jasmine", "@angular-devkit/build-angular"],
        plugins: [
            require("karma-jasmine"),
            require("karma-chrome-launcher"),
            require("karma-brief-reporter"),
            require("@angular-devkit/build-angular/plugins/karma"),
            require("karma-parallel")
        ],
        reporters: ["brief"],
        port: 1234,
        colors: true,
        logLevel: config.LOG_WARN,
        autoWatch: true,
        browsers: ["ChromeHeadless"],
        singleRun: false,

        parallelOptions: {
            executors: (Math.ceil(require('os').cpus().length / 2)),
            shardStrategy: 'round-robin',
            aggregatedReporterTest: [(reporter)=>regex=/coverage|brief|istanbul|junit/i]
        },

        briefReporter: {
            suppressBrowserLogs: true
        }
    });
};
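For reference, the karma-parallel README shows aggregatedReporterTest as a regular expression tested against the reporter, not an array of arrow functions; a corrected fragment might look like this (executor count kept from the config above):

```javascript
parallelOptions: {
  executors: (Math.ceil(require('os').cpus().length / 2)),
  shardStrategy: 'round-robin',
  // Regex form, matched against reporter names to decide which
  // reporters get aggregated output across all executors.
  aggregatedReporterTest: /coverage|brief|istanbul|junit/i
}
```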

Only supporting Jasmine ?

  • **I'm submitting a ... **

    • [ x] feature request
  • Do you want to request a feature or report a bug?
    feature

  • What is the current behavior?
    Does not support QUnit, only Jasmine

  • What is the expected behavior?
    Support other unit test frameworks.

this.timeout fails inside describe block

Opening new issue as requested:

  • **I'm submitting a ... **

    • bug report
  • What is the current behavior?
    Context is not enforced inside describe blocks
    this is incorrect

  • What is the expected behavior?

  • What is the motivation / use case for changing the behavior?
    this.timeout(<ms>) does not work

  • Please tell us about your environment:

  • version: 2.0.0-beta.X
  • Browser: [Chrome 57]
  • Language: [all]

We commonly call this.timeout inside the beginning of the root describe in a test file:

describe('some_module/some_component', function() {
  this.timeout(10000);

  // Setup and run some tests
  ...
});

Here is an example output:

➜  myapp git:(karma_parallel) grunt test:myapp:unit
Running "test:myapp:unit" (test) task

Running "karma:myapp:unit" (karma) task

START:
Hash: e0bdc8cc97632b01d813
Version: webpack 2.4.1
Time: 44ms
webpack: Compiled successfully.
webpack: Compiling...
Hash: e0fab60a728ee52c6d84
Version: webpack 2.4.1
Time: 21798ms
                     Asset     Size  Chunks                    Chunk Names
               0.bundle.js   273 kB       0  [emitted]  [big]  extras_bundle
               1.bundle.js   110 kB       1  [emitted]         stub_data_bundle
             firstparty.js  6.97 MB       2  [emitted]  [big]  firstparty
src/js/tests/myapp/main.js  1.79 MB       3  [emitted]  [big]  src/js/tests/myapp/main.js
                 vendor.js  8.22 MB       4  [emitted]  [big]  vendor

webpack: Compiled with warnings.
Chrome 63.0.3239 (Mac OS X 10.12.6) ERROR
  Uncaught TypeError: this.timeout is not a function
  at src/js/tests/myapp/main.js:287
Chrome 63.0.3239 (Mac OS X 10.12.6) ERROR
  Uncaught TypeError: this.timeout is not a function
  at src/js/tests/myapp/main.js:356

Finished in 2.413 secs / 0 secs

SUMMARY:
✔ 0 tests completed
Warning: Task "karma:myapp:unit" failed. Use --force to continue.

Aborted due to warnings.

And the relevant portion of the karma config:

  frameworks: ['parallel', 'mocha-debug', 'mocha'],

    parallelOptions: {
      executors: 2, // Defaults to cpu-count - 1
      shardStrategy: 'round-robin',
    },
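For reference, a self-contained sketch of the underlying requirement (runSuiteBody is a hypothetical stand-in, not karma-parallel code): the wrapped describe callback has to be invoked with the suite object as this, otherwise this.timeout is not a function inside the block.

```javascript
// Mocha-like sketch: a wrapped describe must call the suite body with a
// suite object as `this`, so that this.timeout(ms) works inside the block.
function runSuiteBody(body) {
  const suite = {
    _timeout: 2000, // Mocha-style default
    timeout(ms) { this._timeout = ms; }
  };
  body.call(suite); // body.call(suite), not body() — preserves `this`
  return suite._timeout;
}
```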

How to enable running tests in parallel on CI

How to enable running tests in parallel on CI? I am running tests with 2 executors, but got one...
This is my karma config:

module.exports = function (config) {
  config.set({
    basePath: '',
    frameworks: ['parallel', 'jasmine', '@angular-devkit/build-angular'],
    plugins: [
      require('karma-jasmine'),
      require('karma-chrome-launcher'),
      require('karma-jasmine-html-reporter'),
      require('karma-coverage-istanbul-reporter'),
      require('@angular-devkit/build-angular/plugins/karma'),
      require('karma-junit-reporter'),
      require('karma-parallel'),
    ],
    parallelOptions: {
      executors: 2,
      shardStrategy: 'round-robin'
    },
    client: {
      clearContext: false, // leave Jasmine Spec Runner output visible in browser
    },
    captureTimeout: 60000,
    browserDisconnectTolerance: 1,
    browserDisconnectTimeout: 60000,
    browserNoActivityTimeout: 60000,
    mime: {
      'text/x-typescript': ['ts', 'tsx']
    },
    coverageIstanbulReporter: {
      dir: require('path').join(__dirname, 'testOutput/coverage'),
      reports: ['html', 'lcovonly', 'cobertura'],
      fixWebpackSourcePaths: true
    },
    angularCli: {
      environment: 'dev',
      codeCoverage: true
    },
    reporters: config.angularCli && config.angularCli.codeCoverage ?
      ['dots', 'junit', 'progress', 'coverage-istanbul'] :
      ['dots', 'junit', 'progress', 'kjhtml'],
    junitReporter: {
      outputFile: require('path').join(__dirname, 'testOutput/test-results.xml'),
    },
    port: 9876,
    colors: true,
    logLevel: config.LOG_INFO,
    autoWatch: true,
    browsers: ['ChromeHeadless'],
    singleRun: false
  });
};

Support for Chrome Headless

I'm submitting a ...

  • bug report
  • feature request

What is the current behavior?

I'm able to use the plugin with Chrome as a browser, but when I want to execute a test run for my production environment, I'm getting errors and I suspect it might be related to ChromeHeadless:

ng test --watch=false --code-coverage

15 08 2018 22:05:09.113:INFO [framework:karma-parallel]: sharding specs across 3 browsers
 10% building modules 1/1 modules 0 active(node:3056) DeprecationWarning: Tapable.plugin is deprecated. Use new API on `.hooks` instead
15 08 2018 22:05:27.111:INFO [karma]: Karma v2.0.5 server started at http://0.0.0.0:9876/
15 08 2018 22:05:27.112:INFO [launcher]: Launching browsers ChromeHeadless, ChromeHeadless, ChromeHeadless with unlimited concurrency
15 08 2018 22:05:27.124:INFO [launcher]: Starting browser ChromeHeadless
15 08 2018 22:05:27.147:INFO [launcher]: Starting browser ChromeHeadless
15 08 2018 22:05:27.168:INFO [launcher]: Starting browser ChromeHeadless
15 08 2018 22:06:03.731:INFO [HeadlessChrome 0.0.0 (Windows 10 0.0.0)]: Connected on socket jGL78oXECkDIKtDSAAAA with id 19104534
15 08 2018 22:06:03.742:INFO [HeadlessChrome 0.0.0 (Windows 10 0.0.0)]: Connected on socket hcmeF4wVyyZHpFZVAAAB with id 36435148
15 08 2018 22:06:04.103:INFO [HeadlessChrome 0.0.0 (Windows 10 0.0.0)]: Connected on socket QLGOZuBQT0iz5YnTAAAC with id 64640218
15 08 2018 22:06:33.736:WARN [HeadlessChrome 0.0.0 (Windows 10 0.0.0)]: Disconnected (1 times), because no message in 30000 ms.
HeadlessChrome 0.0.0 (Windows 10 0.0.0) ERROR
  Disconnected, because no message in 30000 ms.
HeadlessChrome 0.0.0 (Windows 10 0.0.0) ERROR
  Disconnected, because no message in 30000 ms.

HeadlessChrome 0.0.0 (Windows 10 0.0.0) ERROR
  Disconnected, because no message in 30000 ms.

15 08 2018 22:06:33.743:WARN [HeadlessChrome 0.0.0 (Windows 10 0.0.0)]: Disconnected (1 times), because no message in 30000 ms.
HeadlessChrome 0.0.0 (Windows 10 0.0.0) ERROR
  Disconnected, because no message in 30000 ms.
HeadlessChrome 0.0.0 (Windows 10 0.0.0) ERROR
  Disconnected, because no message in 30000 ms.

HeadlessChrome 0.0.0 (Windows 10 0.0.0) ERROR
  Disconnected, because no message in 30000 ms.

15 08 2018 22:06:34.156:WARN [HeadlessChrome 0.0.0 (Windows 10 0.0.0)]: Disconnected (1 times), because no message in 30000 ms.
HeadlessChrome 0.0.0 (Windows 10 0.0.0) ERROR
  Disconnected, because no message in 30000 ms.
HeadlessChrome 0.0.0 (Windows 10 0.0.0) ERROR
  Disconnected, because no message in 30000 ms.

HeadlessChrome 0.0.0 (Windows 10 0.0.0) ERROR
  Disconnected, because no message in 30000 ms.

15 08 2018 22:06:35.744:WARN [launcher]: ChromeHeadless was not killed in 2000 ms, sending SIGKILL.
15 08 2018 22:06:35.751:WARN [launcher]: ChromeHeadless was not killed in 2000 ms, sending SIGKILL.
15 08 2018 22:06:36.214:WARN [launcher]: ChromeHeadless was not killed in 2000 ms, sending SIGKILL.
15 08 2018 22:06:37.784:WARN [launcher]: ChromeHeadless was not killed by SIGKILL in 2000 ms, continuing.
15 08 2018 22:06:37.875:WARN [launcher]: ChromeHeadless was not killed by SIGKILL in 2000 ms, continuing.

What is the expected behavior?

It should be able to run tests in any headless environment since headless browsers have been supported for a while in Karma's ecosystem.

Please tell us about your environment:

  • version: 0.2.9
  • Browser: Chrome, Chrome headless
  • Language: TypeScript 2.9.2

Other information

Need to run the same piece of code from a JS file on two emulators with different configurations


`Parallel` throws away global beforeEach in multi-browser mode

  • **I'm submitting a ... **

    • bug report
    • feature request
  • What is the current behavior?
    If I have global beforeEach or afterEach hooks defined outside of any describe block, then in multi-browser mode some browser instances will skip them, just as they would if the hooks were wrapped in a describe block that was sharded to another browser. But when a beforeEach or afterEach is not wrapped in any describe block, I expect it to run for every test in every browser.

  • What is the expected behavior?
    A global beforeEach or afterEach (not wrapped in any describe block) runs for every test in every browser instance.

  • Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

My workaround:

Karma config:


module.exports = function (config) {
  config.set({
    basePath: '',
    frameworks: ['parallel', 'jasmine', '@angular-devkit/build-angular'],
    plugins: [
      require('karma-parallel'),
      require('karma-jasmine'),
      require('karma-chrome-launcher'),
      require('karma-jasmine-html-reporter'),
      require('karma-coverage-istanbul-reporter'),
      require('@angular-devkit/build-angular/plugins/karma'),
    ],
    parallelOptions: {
      shardStrategy: 'custom',
      customShardStrategy: function (config) {
        window.parallelDescribeCount = window.parallelDescribeCount || 0;
        window.parallelDescribeCount++;

        if (
          config.description === 'important beforeEach or afterEach go next!' // <===== there
        ) {
          return true;
        }

        return (
          window.parallelDescribeCount % config.executors === config.shardIndex
        );
      },
    },
  });
};

some-test-helper-with-global-beforeEach:

describe('important beforeEach or afterEach go next!', () => {
  beforeAll(() => {
    console.log('important describe');
  });
});

// The global beforeEach and afterEach go after the 'important beforeEach or afterEach go next!' describe.
// If I wrap them in a describe block, karma will skip this beforeEach and afterEach for every other describe block
beforeEach(() => {
.....
});
afterEach(() => {
....
});

Executors and test maximums Question

  • **I'm submitting a ... **

    • bug report
    • feature request
    • question
  • Do you want to request a feature or report a bug?

  • What is the current behavior?
    We currently have 1600+ mocha tests. I noticed that when running with the default executor count (cores - 1) and the Chrome headless browser, not all tests get run. Only when I bump the executor count to 6 does the run count exceed 1600, but then I also notice this plugin seems to add an additional 15-20 tests to the run count, which I can only assume comes from it having to keep the browsers alive or something.

My question is: is there a relationship between the number of tests a browser can handle and the executor count needed? Would it warrant a new executor strategy that uses the total test count to batch tests automatically? I don't love the thought of having to track test counts and update the executor count to ensure they all run. That approach will eventually stop working anyway: 3000+ tests would mean roughly 10+ executors, and I'm not even sure what server requirements would be needed for a test count that high.

This plugin was adopted because karma-mocha already seems to hit a limit on the number of tests run; without this plugin we get capped around 950 tests.

Any insight or guidance would be great..
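For context, the round-robin sharding decision the README describes ("browsers deterministically decide if that block should run") can be sketched as follows. This is a minimal illustration, not the plugin's actual source; the function names are assumptions. It shows why the executor count changes only *where* tests run, not how many a shard can handle:

```javascript
// Minimal sketch (assumed, not the plugin's source) of a round-robin shard
// strategy: each browser (shard) keeps a counter of top-level describe blocks
// seen and claims every Nth one, where N is the executor count.
function makeRoundRobinStrategy(executors, shardIndex) {
  let describeCount = 0;
  return function shouldRun() {
    const claimed = describeCount % executors === shardIndex;
    describeCount++;
    return claimed;
  };
}

// With 3 executors, 9 describe blocks split evenly: 3 per shard.
const perShard = [0, 1, 2].map((shardIndex) => {
  const shouldRun = makeRoundRobinStrategy(3, shardIndex);
  let count = 0;
  for (let i = 0; i < 9; i++) if (shouldRun()) count++;
  return count;
});
console.log(perShard); // → [3, 3, 3]
```

Every describe is assigned to exactly one shard, so a missing-tests symptom points at browser disconnects or crashes rather than the split itself.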

  • version:

"karma": "3.1.4",
"karma-chrome-launcher": "2.2.0",
"karma-coverage": "1.1.2",
"karma-coverage-istanbul-reporter": "2.0.4",
"karma-es6-shim": "1.0.0",
"karma-intl-shim": "1.0.3",
"karma-mocha": "1.3.0",
"karma-mocha-reporter": "2.2.5",
"karma-parallel": "0.3.1",
"karma-sinon-chai": "2.0.2",
"karma-sourcemap-loader": "0.3.7",
"karma-spec-reporter": "0.0.32",
"karma-webpack": "3.0.5",

  • Browser:
    Google Chrome is up to date
    Version 71.0.3578.98 (Official Build) (64-bit)

ChromeHeadless: {
  base: 'Chrome',
  flags: [
    '--headless',
    '--disable-gpu',
    '--log-level=3',
    '--disable-web-security',
    '--disable-search-geolocation-disclosure',
    // Without a remote debugging port, Google Chrome exits immediately.
    '--remote-debugging-port=9222',
  ],
}

  • Language: Typescript

Failing specs - getting async callback errors

I'm getting errors for different specs each time I run ng test.

Error thrown in console:
Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.

configuration:
parallelOptions: {
  executors: 2,
  shardStrategy: 'round-robin'
},

Note: as I increase the executor count, the number of spec failures also increases.

Cannot focus specs properly

  • I'm submitting a ...

    • bug report
    • feature request
    • support request => Please do not submit support request here, see note at the top of this template.
  • Do you want to request a feature or report a bug?
    report a bug

  • What is the current behavior?
    focused specs are not respected in every browser instance

  • Steps to reproduce

  1. focus a "describe" or "it" block in any test
  • What is the expected behavior?
    All running browsers respect the focused spec (probably should just let the one browser with the focused spec run its tests)

  • What is the motivation / use case for changing the behavior?
    Without this functionality, we cannot test subsets of our tests. (there are over 1400 in my particular case)

  • Please tell us about your environment:

  • version: 0.1.1
  • Browser: Chrome 63 (Headless)
  • Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

The image below shows that there are 4 tests focused in one of the browsers running the tests, but the other browsers continue to run their subset of the tests unaffected.

image

Karma parallel cannot find any tests after starting Chrome

  • Do you want to request a feature or report a bug?
  • bug report
  • What is the current behavior?
    The following error is thrown:
Cannot read property 'success' of undefined
TypeError: Cannot read property 'success' of undefined
    at TestCommand.runSingleTarget (C:\Users\ionut.irimia\Documents\Diod Selfcare\selfcare-portal-frontend\node_modules\@angular\cli\models\packages\angular\cli\models\architect-command.ts:242:21)
    at process._tickCallback (internal/process/next_tick.js:68:7)


  • If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
    After running ng test --watch=false, ng test, or karma start karma.conf.js, the browsers start but no tests are found.

  • Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

This is my karma.conf.js

module.exports = function (config) {
    config.set({
        basePath: '..',
        frameworks: ['parallel', 'jasmine'],
        plugins: [
            require('karma-parallel'),
            require('karma-jasmine'),
            require('karma-chrome-launcher'),
            require('karma-junit-reporter'),
            require('karma-spec-reporter'),
            require('karma-coverage-istanbul-reporter'),
            require('@angular-devkit/build-angular/plugins/karma')
        ],
        client: {
            clearContext: false, // leave Jasmine Spec Runner output visible in browser
            captureConsole: false,
            jasmine: {
                random: false
            }
        },
        junitReporter: {
            outputDir: 'reports/junit',
            outputFile: 'unit-test.xml',
            useBrowserName: false,
            suite: '' // Will become the package name attribute in xml testsuite element
        },
        coverageIstanbulReporter: {
            reports: ['html', 'lcovonly', 'text-summary'],
            dir: 'reports/coverage',
            fixWebpackSourcePaths: true,
            skipFilesWithNoCoverage: true
        },
        reporters: ['spec'],
        port: 9876,
        colors: true,
        logLevel: config.LOG_INFO,
        autoWatch: true,
        browsers: ['Chrome_headless'],
        customLaunchers: {
            Chrome_headless: {
                base: 'Chrome', // for windows should be Chrome
                flags: [
                    '--headless',
                    '--no-sandbox',
                    '--disable-gpu',
                    // Without a remote debugging port, Google Chrome exits immediately.
                    '--remote-debugging-port=9222',
                    '--allow-insecure-localhost'
                ]
            }
        },
        singleRun: true
    });
};

Hi, can you please help with how to restrict different service URL requests in mobile or web browsers using karma and jasmine


Support for custom shardStrategy

It would be amazing if there were a way to pass in a function that follows the same API as the existing round-robin and description-length strategies, so that we can customize how tests are split.

E.g. in a legacy Angular / AngularJS portal there are different types of tests:

  1. AngularJS javascript
  2. AngularJS typescript
  3. Angular typescript

There are also tests for multiple applications, so it would be great if I could group them (e.g. based on a regexp for the path /src/applications/app1).

Would this be possible?
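A custom strategy along these lines might look like the following sketch, using the `{ description, executors, shardIndex }` shape that `customShardStrategy` receives in the workaround config shown earlier on this page. The suite-name regexps and shard assignments are illustrative assumptions, not part of the plugin:

```javascript
// Hedged sketch of a customShardStrategy: pin suites for each application to
// a fixed shard by matching the describe description, and spread everything
// else in the spirit of the built-in 'description-length' strategy.
// The 'app1'/'app2' prefixes are made-up examples.
function customShardStrategy(config) {
  if (/\bapp1\b/.test(config.description)) return config.shardIndex === 0;
  if (/\bapp2\b/.test(config.description)) return config.shardIndex === 1;
  // Fallback: deterministic split by description length.
  return config.description.length % config.executors === config.shardIndex;
}

console.log(customShardStrategy({ description: 'app1 login', executors: 3, shardIndex: 0 })); // → true
```

Because the decision only sees the describe description, path-based grouping would rely on encoding the application name into the suite description (or on the plugin exposing the spec file path to the strategy).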

Incorrect option name in README.md

  • **I'm submitting a ... **

    • bug report
  • What is the current behavior?

In README.md under the Options section one of the options is named shardStyle.

  • What is the expected behavior?

The option should be named shardStrategy.

failSpecWithNoExpectations: true in jasmine makes karma-parallel's extra tests fail

  • **I'm submitting a ... **

    • [ X ] bug report
    • feature request
  • What is the current behavior?
    When the jasmine configuration sets the property failSpecWithNoExpectations to true (available since v3.5.0), the extra tests that karma-parallel adds fail. The documentation explains these extra tests:
    Why are there extra tests in my output?
    If this plugin discovers that you have focused some tests (fit, it.only, etc...) in other browser instances, it will add an extra focused test in the current browser instance to limit the running of the tests in the given browser. Similarly, when dividing up the tests, if there are not enough tests for a given browser, it will add an extra test to prevent karma from failing due to no running tests.

The result: the run contains failed tests.

  • If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
  1. Set a jasmine configuration in karma.conf.js like:
     client: {
       jasmine: {
         failSpecWithNoExpectations: true
       },
       clearContext: false
     },
  2. Configure karma-parallel with 4 executors:
     parallelOptions: {
       executors: 4, // Defaults to cpu-count - 1
       shardStrategy: 'description-length'
     },
  3. Write only one test, or focus only one test (fit)
  4. Execute the tests (e.g. ng test)
  • What is the expected behavior?
    Expect that pass the tests in all executors.

  • Please tell us about your environment:

karma-parallel version: 0.3.1 (latest)
Jasmine version: ^3.8.0
Browser: Chrome
Language: Angular 12.2.0

  • Suggestion:
    Maybe add expect().nothing() to the extra tests?

Thanks in advance
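The suggestion above could look like this inside the plugin's injected placeholder spec. This is a hedged sketch: the spec name is illustrative, and the plugin's actual injected test may differ; only expect().nothing() is real Jasmine API (since 2.8), which marks a spec as intentionally expectation-free so failSpecWithNoExpectations: true does not fail it.

```javascript
// Hypothetical shape of the placeholder spec karma-parallel could inject.
// expect().nothing() tells Jasmine the spec deliberately has no expectations.
it('karma-parallel placeholder spec (name is an assumption)', () => {
  expect().nothing();
});
```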

Exception when using "karma run"

  • Do you want to request a feature or report a bug?

Bug

  • What is the current behavior?

Executing "karma run" command fails.

  • If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem

package.json

{
  "devDependencies": {
    "karma": "^4.0.1",
    "karma-parallel": "^0.3.1",
    "karma-jasmine": "^2.0.1",
    "karma-junit-reporter": "^1.2.0",
    "karma-chrome-launcher": "^2.2.0"
  }
}

karma.conf.js

// karma.conf.js
module.exports = function(config) {
  config.set({
    // NOTE: 'parallel' must be the first framework in the list
    frameworks: ['parallel', 'jasmine'],
    reporters: ['junit'],
    browsers: ['ChromeHeadless'],
    plugins: [
      require('karma-jasmine'),
      require('karma-chrome-launcher'),
      require('karma-junit-reporter'),
      require('karma-parallel')
    ]
  });
};

The issue can be reproduced even without tests.

  1. Run karma server in a terminal: ./node_modules/karma/bin/karma start
  2. In another terminal: ./node_modules/karma/bin/karma run

The error in the first terminal:

./node_modules/karma/bin/karma start
13 03 2019 00:55:27.477:INFO [framework:karma-parallel]: sharding specs across 11 browsers
13 03 2019 00:55:27.499:WARN [karma]: No captured browser, open http://localhost:9876/
13 03 2019 00:55:27.520:INFO [karma-server]: Karma v4.0.1 server started at http://0.0.0.0:9876/
13 03 2019 00:55:27.520:INFO [launcher]: Launching browsers ChromeHeadless, ChromeHeadless, ChromeHeadless, ChromeHeadless, ChromeHeadless, ChromeHeadless, ChromeHeadless, ChromeHeadless, ChromeHeadless, ChromeHeadless, ChromeHeadless with concurrency unlimited
13 03 2019 00:55:27.525:INFO [launcher]: Starting browser ChromeHeadless
13 03 2019 00:55:27.531:INFO [launcher]: Starting browser ChromeHeadless
13 03 2019 00:55:27.534:INFO [launcher]: Starting browser ChromeHeadless
13 03 2019 00:55:27.538:INFO [launcher]: Starting browser ChromeHeadless
13 03 2019 00:55:27.542:INFO [launcher]: Starting browser ChromeHeadless
13 03 2019 00:55:27.547:INFO [launcher]: Starting browser ChromeHeadless
13 03 2019 00:55:27.553:INFO [launcher]: Starting browser ChromeHeadless
13 03 2019 00:55:27.558:INFO [launcher]: Starting browser ChromeHeadless
13 03 2019 00:55:27.563:INFO [launcher]: Starting browser ChromeHeadless
13 03 2019 00:55:27.570:INFO [launcher]: Starting browser ChromeHeadless
13 03 2019 00:55:27.580:INFO [launcher]: Starting browser ChromeHeadless
13 03 2019 00:55:27.890:INFO [HeadlessChrome 72.0.3626 (Linux 0.0.0)]: Connected on socket gVlUptGuJOGShkK_AAAA with id 90761783
13 03 2019 00:55:27.898:INFO [HeadlessChrome 72.0.3626 (Linux 0.0.0)]: Connected on socket N_FtZRy2OqgHMkE6AAAB with id 80372819
13 03 2019 00:55:27.903:INFO [HeadlessChrome 72.0.3626 (Linux 0.0.0)]: Connected on socket UyvfpYyq21lsh05MAAAC with id 18730315
13 03 2019 00:55:27.918:INFO [HeadlessChrome 72.0.3626 (Linux 0.0.0)]: Connected on socket Y0g0sZMctnz8hQ1hAAAE with id 24106570
13 03 2019 00:55:27.920:INFO [HeadlessChrome 72.0.3626 (Linux 0.0.0)]: Connected on socket 3L6r8YfeA8DqM8frAAAD with id 9795099
13 03 2019 00:55:27.922:INFO [HeadlessChrome 72.0.3626 (Linux 0.0.0)]: Connected on socket CTwbSKeI6-wwrOnFAAAF with id 45471051
13 03 2019 00:55:27.926:INFO [HeadlessChrome 72.0.3626 (Linux 0.0.0)]: Connected on socket pS8so0QMW4ywdGzeAAAG with id 71035962
13 03 2019 00:55:27.928:INFO [HeadlessChrome 72.0.3626 (Linux 0.0.0)]: Connected on socket Al6UUyL_2InW-92tAAAH with id 89172988
13 03 2019 00:55:27.931:INFO [HeadlessChrome 72.0.3626 (Linux 0.0.0)]: Connected on socket 8AMWRMp6WX2CjOpKAAAI with id 60192581
13 03 2019 00:55:27.938:INFO [HeadlessChrome 72.0.3626 (Linux 0.0.0)]: Connected on socket ECeJ-c-z8S6vTowXAAAJ with id 23653732
13 03 2019 00:55:27.945:INFO [HeadlessChrome 72.0.3626 (Linux 0.0.0)]: Connected on socket HO08Wk7kNLs6mf9BAAAK with id 17973266
13 03 2019 00:55:32.808:ERROR [karma-server]: TypeError: Cannot read property 'push' of undefined
    at _reporters.forEach (/home/user/my-project/node_modules/karma/lib/reporters/multi.js:11:61)
    at Array.forEach (<anonymous>)
    at MultiReporter.addAdapter (/home/user/my-project/node_modules/karma/lib/reporters/multi.js:11:21)
    at Server.<anonymous> (/home/user/my-project/node_modules/karma/lib/middleware/runner.js:41:18)
    at Object.onceWrapper (events.js:273:13)
    at Server.emit (events.js:187:15)
    at Executor.schedule (/home/user/my-project/node_modules/karma/lib/executor.js:30:20)
    at Server.on (/home/user/my-project/node_modules/karma/lib/server.js:330:18)
    at Server.emit (events.js:187:15)
    at emit (/home/user/my-project/node_modules/karma/lib/file-list.js:29:21)
    at FileList._emitModified (/home/user/my-project/node_modules/karma/lib/file-list.js:34:19)
    at Promise.map.then (/home/user/my-project/node_modules/karma/lib/file-list.js:108:14)
    at tryCatcher (/home/user/my-project/node_modules/bluebird/js/release/util.js:16:23)
    at Promise._settlePromiseFromHandler (/home/user/my-project/node_modules/bluebird/js/release/promise.js:512:31)
    at Promise._settlePromise (/home/user/my-project/node_modules/bluebird/js/release/promise.js:569:18)
    at Promise._settlePromise0 (/home/user/my-project/node_modules/bluebird/js/release/promise.js:614:10)
  • What is the expected behavior?

It works without exceptions.

Please let me know if you need more information.

It would be nice to specify any test suite/spec to be run in every browser instance

  • **I'm submitting a ... **

    • bug report
    • feature request
    • support request => Please do not submit support request here, see note at the top of this template.
  • Do you want to request a feature or report a bug?
    feature

  • What is the current behavior?
    It would be nice to have the possibility to run a specific test or test suite in every browser instance. Currently we have a test suite named ~test-fixture.spec.ts, and because of its name this spec runs last when we use one browser. In this suite we check the console (no errors should appear) and some other generic things to verify our code is OK after all tests have run. But as soon as we use several browsers, this suite runs in only one of them.

  • What is the expected behavior?
    It would be nice to specify any test suite or test to be run in every browser. For example, if we could specify ~test-fixture.spec.ts to run in every browser instance, we would be able to check all our generic cases in all parallel browser instances.

  • What is the motivation / use case for changing the behavior?
    Allow running predefined checks in all browser instances.

  • Please tell us about your environment:

  • Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

Safari executes all tests in each shard

  • **I'm submitting a ... **

    • bug report
    • feature request
  • Do you want to request a feature or report a bug?

I want to report a bug that occurs in the current version of Safari. I discovered this package only today, so I can't really tell how long the problem has been present. I assume it is related to Safari's restrictive handling of cookies: Safari always executes all tests, and I figured out that the cookie never gets sent when karma issues requests from inside its iframe. I tried several things to get it working, but in the end I implemented a solution that doesn't use cookies anymore. Instead of replacing %KARMA_SHARD_INFO% with the information for only one shard, I modified the code to replace it with the info for all shards; the client-side script then picks the relevant information based on the query param of parent.location.href. Would you like me to open a pull request with these changes?

  • Please tell us about your environment:
  • version: 2.0.4
  • Browser: [Safari 11.1.2]
  • Language: [TypeScript with webpack]

Random failures with uncaught errors and missing tests

  • **I'm submitting a ... **

    • [ x ] bug report
    • feature request
  • What is the current behavior?
    I'm getting random failures, but the two most common ones are:

  1. Uncaught thrown
    2019-09-30T19:09:52.2574352Z 13 verbose stack at EventEmitter.<anonymous> (C:\Program Files\nodejs\node_modules\npm\node_modules\npm-lifecycle\index.js:301:16)
    2019-09-30T19:09:52.2574398Z 13 verbose stack at EventEmitter.emit (events.js:189:13)
    2019-09-30T19:09:52.2574455Z 13 verbose stack at ChildProcess.<anonymous> (C:\Program Files\nodejs\node_modules\npm\node_modules\npm-lifecycle\lib\spawn.js:55:14)
    2019-09-30T19:09:52.2574500Z 13 verbose stack at ChildProcess.emit (events.js:189:13)
    2019-09-30T19:09:52.2574656Z 13 verbose stack at maybeClose (internal/child_process.js:970:16)
    2019-09-30T19:09:52.2574720Z 13 verbose stack at Process.ChildProcess._handle.onexit (internal/child_process.js:259:5)

  2. Uncaught Error: No Handler could handle this request. [object Object] thrown
    13 verbose stack at EventEmitter.<anonymous> (C:\Program Files\nodejs\node_modules\npm\node_modules\npm-lifecycle\index.js:301:16)
    13 verbose stack at EventEmitter.emit (events.js:189:13)
    13 verbose stack at ChildProcess.<anonymous> (C:\Program Files\nodejs\node_modules\npm\node_modules\npm-lifecycle\lib\spawn.js:55:14)
    13 verbose stack at ChildProcess.emit (events.js:189:13)
    13 verbose stack at maybeClose (internal/child_process.js:970:16)
    13 verbose stack at Process.ChildProcess._handle.onexit (internal/child_process.js:259:5)

Note: this does not happen all the time, only about 25% of the time, and the tests are really simple:
it('should be created', () => {
  expect(component).toBeTruthy();
});

I also noticed that the runs that did pass simply skipped over half the tests. For example:
Passing runs had a total of 228 tests.
Failing runs had a total of 526 tests (in which one would always fail with some random uncaught error).
Does anyone know what this could be?

  • Please tell us about your environment:
    "karma": "^4.3.0",
    "karma-chrome-launcher": "^3.1.0",
    "karma-cli": "~1.0.1",
    "karma-coverage-istanbul-reporter": "~2.0.1",
    "karma-jasmine": "~1.1.2",
    "karma-jasmine-html-reporter": "^0.2.2",
    "karma-parallel": "^0.3.1",
    "karma-spec-reporter": "0.0.32",
    "ng-bullet": "^1.0.3",

karma-parallel no longer returns exit code 1 on too-low code coverage

  • I'm submitting a

    • bug report
  • What is the current behavior?
    When I run Istanbul code coverage using:
    ng test --source-map --code-coverage
    the tests no longer return exit code 1 when code coverage is below the threshold, so it is hard to detect coverage errors in CI.

  • If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem

  1. set the threshold for code coverage using Istanbul in karma.conf
    coverageIstanbulReporter: {
      dir: require('path').join(__dirname, '../coverage'),
      reports: ['html', 'lcov', 'text', 'text-summary', 'cobertura'],
      fixWebpackSourcePaths: true,
      thresholds: {
        statements: 100,
        lines: 100,
        branches: 100,
        functions: 100
      }
    },
  2. run ng test --source-map --code-coverage
  • What is the expected behavior?
    ng test --source-map --code-coverage
    tests should return exit code 1 on code coverage below threshold

  • Please tell us about your environment:

  • version: 0.3.1
  • Browser: Chrome Version 72.0.3626.121 (Official Build) (64-bit)
  • Language: TypeScript 3.2.4, Angular 7
