
Seamless cross-browser testing with multiple platforms and runners

License: MIT License

JavaScript 99.73% HTML 0.27%
cross-browser javascript-testing unit-testing browserstack testem test-runner jasmine selenium-tests saucelabs crossbrowsertesting

cross-browser-tests-runner's Introduction


cross-browser-tests-runner

One tool to do cross-browser testing across multiple cross-browser testing platforms, runners and frameworks.

Documentation

See the documentation

Acknowledgements

BrowserStack SauceLabs CrossBrowserTesting Travis CI AppVeyor

cross-browser-tests-runner's People

Contributors

reeteshranjan

Forkers

jahmed2345

cross-browser-tests-runner's Issues

Add 'cbtr-testem-init' that takes 'platform' input and generates testem.json accordingly

As of now there are two binaries, one for BrowserStack and one for SauceLabs. As the number of integrated cross-browser testing platforms grows, this pattern leads to too many binaries to know of and use.

There should be just one binary that takes a '--platform' input and internally does what the current two tools do; as more cross-browser platforms are added, this one tool should be extended.
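
A rough sketch of how such a single dispatching binary could look; the dispatch table below uses illustrative stubs, not the project's actual modules:

    #!/usr/bin/env node
    // Hypothetical unified init tool: dispatch on --platform instead of
    // shipping one binary per cross-browser testing platform.
    const idx = process.argv.indexOf('--platform')
    const platform = (idx !== -1) ? process.argv[idx + 1] : null

    const generators = {
      browserstack: () => console.log('would generate testem.json for BrowserStack'),
      saucelabs: () => console.log('would generate testem.json for SauceLabs')
    }

    if (!platform || !(platform in generators)) {
      console.error('Usage: cbtr-testem-init --platform <browserstack|saucelabs>')
      process.exit(1)
    }
    generators[platform]()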

[saucelabs] WebDriverError: Sauce could not start your job

In the "evaluate-all" build that tries to run a Jasmine 1.x test on all browsers of a platform, several jobs that were for SauceLabs major browser sets for Chrome, Firefox and Mobile Safari throw up this error for several jobs. In all cases the server side error (can be seen on the job link) is that of the named tunnel used by a session not existing on the server.

The following needs to be investigated/fixed:

  • Why does Sauce report that the tunnel was not found? In several jobs there is no indication of a dead tunnel, so what is going wrong on Sauce's side?
  • If this error occurs, how should it be handled: in the Platform layer or in the native runner?

Failure in creation of one job fails creation of others in the same set

Native Runner uses runMultiple to create new tests. If one of the jobs fails, the other jobs are not created; worse, a session may actually be created on the platform while the failure means that the corresponding job structure is never created by the Platform layer, and nothing is returned to the native runner.

The whole workflow of creating jobs using runMultiple and run (for retries) needs to be cleaned up so that:

  • If one job fails, creation of the other jobs is not hampered, whether at the Platform layer or through retries by the native runner (see the sketch after this list)
  • The native runner gets to know which jobs were not created, so that it can retry them
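
A minimal sketch of job creation that isolates failures, using bluebird's .reflect() (bluebird is already a dependency, per the stack traces elsewhere on this page); createJob and the config objects are hypothetical stand-ins:

    const Promise = require('bluebird')

    // Create all jobs independently so one failure does not block the rest,
    // and report both created and failed jobs back to the caller.
    function runMultipleSettled(configs, createJob) {
      return Promise.all(configs.map(config =>
        Promise.resolve(createJob(config)).reflect()
      ))
      .then(inspections => {
        const created = [ ], failed = [ ]
        inspections.forEach((inspection, idx) => {
          if (inspection.isFulfilled()) {
            created.push(inspection.value())
          } else {
            failed.push({ config: configs[idx], error: inspection.reason() })
          }
        })
        // The native runner can queue `failed` for retries
        return { created, failed }
      })
    }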

[crossbrowsertesting] manager returned a tunnel ... that was not created by platform

Unhandled rejection Error: Platforms.Core.Platform: manager returned a tunnel with pid 4939 tunnel-id cbt-analyzer-crossbrowsertesting that was not created by platform
    at findPlatformTunnel (/home/travis/build/[secure]/cbt-analyzer/node_modules/[secure]/lib/platforms/core/platform.js:269:9)
    at Manager.withId.then.procs (/home/travis/build/[secure]/cbt-analyzer/node_modules/[secure]/lib/platforms/core/platform.js:233:18)
    at tryCatcher (/home/travis/build/[secure]/cbt-analyzer/node_modules/bluebird/js/release/util.js:16:23)
    at Promise._settlePromiseFromHandler (/home/travis/build/[secure]/cbt-analyzer/node_modules/bluebird/js/release/promise.js:512:31)
    at Promise._settlePromise (/home/travis/build/[secure]/cbt-analyzer/node_modules/bluebird/js/release/promise.js:569:18)
    at Promise._settlePromise0 (/home/travis/build/[secure]/cbt-analyzer/node_modules/bluebird/js/release/promise.js:614:10)
    at Promise._settlePromises (/home/travis/build/[secure]/cbt-analyzer/node_modules/bluebird/js/release/promise.js:693:18)
    at Async._drainQueue (/home/travis/build/[secure]/cbt-analyzer/node_modules/bluebird/js/release/async.js:133:16)
    at Async._drainQueues (/home/travis/build/[secure]/cbt-analyzer/node_modules/bluebird/js/release/async.js:143:10)
    at Immediate.Async.drainQueues (/home/travis/build/[secure]/cbt-analyzer/node_modules/bluebird/js/release/async.js:17:14)
    at runCallback (timers.js:789:20)
    at tryOnImmediate (timers.js:751:5)
    at processImmediate [as _immediateCallback] (timers.js:722:5)

Traces like this can be seen in jobs that run tests on crossbrowsertesting browsers. For example: https://travis-ci.org/cross-browser-tests-runner/cbt-analyzer/jobs/298030788

This should never happen in any build job.

  • Is this something to do with the Travis environment?
  • If not, investigate the cause in the code.

Platform monitor tries to restart a dead SauceLabs tunnel, but somehow it has already been restarted, and the code that prevents starting conflicting tunnels throws an error which ends up as an unhandled rejection

Seen in: https://travis-ci.org/browsejs/browse.js/jobs/285042162, https://travis-ci.org/browsejs/browse.js/jobs/285042163

It is clear that the tunnel died somehow and the platform monitor tried to restart it. However, the tunnel somehow got restarted on its own, which could be a feature of SauceLabs tunnels. The tunnel start code then throws an error that a conflicting tunnel is already running. The worst part is that even though the promise structure looks fine, this error ends up as an unhandled rejection. There is a catch block at the topmost layer of the monitor, but this error never reaches it.

The fishy thing about the SauceLabs tunnel is that this may be a case of an unexpected exit, yet no error was printed; it looks like the tunnel exited normally. Is there a race condition somewhere?
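
One common way a rejection escapes an outer catch, offered here only as a hypothesis to check against the monitor code, is a promise created inside a handler but never returned into the chain; the names below are illustrative stubs:

    const Promise = require('bluebird')

    // Illustrative stand-ins for the monitor's real helpers
    const checkAlive = tunnel => Promise.resolve(false)
    const tunnel = {
      start: () => Promise.reject(new Error('conflicting tunnel already running'))
    }

    // Anti-pattern: the inner promise is not returned, so its rejection never
    // propagates to the outer .catch() and surfaces as an unhandled rejection.
    function monitorTick() {
      return checkAlive(tunnel).then(alive => {
        if (!alive) {
          tunnel.start() // missing `return` - rejection escapes the chain
        }
      })
    }

    // Fixed: returning the inner promise lets the outer catch see the failure.
    function monitorTickFixed() {
      return checkAlive(tunnel).then(alive => {
        if (!alive) {
          return tunnel.start()
        }
      })
    }

    monitorTickFixed().catch(err => console.error('caught:', err.message))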

Race conditions between native runner and extremely quick session response

The native runner gets to "know" of running tests only after the runMultiple or runMultipleScripts promises resolve. Sometimes a small sample test completes very fast on platforms like SauceLabs and CrossBrowserTesting and sends its test report before the state of the running tests has been created in the native runner. This race condition leads to the results sent by such fast sessions being ignored, because the native runner does not know of those tests yet.
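
One possible fix, an assumption rather than the project's chosen design, is to buffer reports that arrive for unknown tests and replay them once the test state is registered:

    // Hypothetical report handler: park reports for tests the runner has not
    // registered yet, and replay them once registration happens.
    const pendingReports = new Map()
    const knownTests = new Map()

    function onReport(testId, report) {
      const test = knownTests.get(testId)
      if (test) {
        test.record(report)
      } else {
        // Session finished before runMultiple resolved - hold the report
        pendingReports.set(testId, report)
      }
    }

    function registerTest(testId, test) {
      knownTests.set(testId, test)
      if (pendingReports.has(testId)) {
        test.record(pendingReports.get(testId))
        pendingReports.delete(testId)
      }
    }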

CrossBrowserTesting named tunnel support

CrossBrowserTesting does support named tunnels (though this is not documented). The observed behavior is very similar to that of SauceLabs tunnels.

Add support for named tunnels.

Most of native runner options should be part of settings json

There is no reason to have test behavior/flow related options split across the settings JSON and the command line. They should all live in one place (except the --native-runner switch): either on the command line (not convenient) or in the settings file (a better user experience). A model where settings-JSON values can be overridden with command-line parameters is possible, but that is just frivolous code addition.

Move all the command line options that go with the native runner into the settings JSON.
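
A hypothetical shape for such a consolidated settings file; the key names below are illustrative, not the tool's actual schema:

    {
      "framework": "jasmine",
      "retries": 1,
      "timeout": 60,
      "test_file": "tests/html/jasmine/tests.html",
      "browsers_file": ".cbtr-browsers.yml"
    }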

Option for native runner to mark the build as failed or passed

The initial idea was that with cross-browser testing, an error in a test can sometimes be inherent behavior that persists and cannot be changed, and a tester may have reasons to keep such a test in the build. With this in mind, the native runner currently never marks the build as failed; it always passes.

Rather than settling on a polarized design, it would be better to give the user an option to choose the behavior: whether a test failure leads to a build failure or not.
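
A minimal sketch of such a switch, assuming a hypothetical markBuildFailed setting and a failure count collected by the runner:

    // Decide the process exit code from a user setting.
    function finish(settings, stats) {
      if (settings.markBuildFailed && stats.failures > 0) {
        process.exitCode = 1 // CI treats the build as failed
      } else {
        process.exitCode = 0 // current behavior: the build always passes
      }
    }

    finish({ markBuildFailed: true }, { failures: 3 })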

[Windows] minimize tests disabling on Windows

As seen since the very beginning of the Windows builds, file creation/handling issues exist with the 'sync' versions of the APIs. As more components have been added, several new tests written for them have been disabled in the Windows build because of this experience, some without ever having been tried even once.

Minimize the forced disabling of such tests based on real testing.

Local testing should not be enforced except for testem

There are several points where local testing is enforced or assumed. This is a big limitation; the only integration where enforcing it is correct is testem, which should remain. Everywhere else, this limitation must be removed.

Node js

Hello. Can I use it inline in JavaScript code with Node.js?

[crossbrowsertesting] "The named tunnel you have specified does not appear to exist" error

With crossbrowsertesting.com, tunnels die often. It is known behavior that this error is thrown if a tunnel of the given name does not exist while a job is being created. However, this currently results in an unhandled rejection, which jeopardizes the whole set of jobs being created by the native runner: if 1 of the 5 jobs in the current set encounters this error, the other 4 get left out as well.

There should be a mechanism to retry creating a job that fails with this error, because the platform would definitely try to get a dead tunnel up and running again, so the job would get created sooner or later.

There are several ways this can be tackled:

  • While creating the job, retry on this error (see the sketch after this list)
  • Let the native runner know (for an input set provided to runMultiple) which jobs did not get created, so it can queue them in a retry list and try again.

Another important point: different platforms may need different retry counts, given the difference in their tunnel stability levels. It may be good to set retries = 2 for crossbrowsertesting.com tests.
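
A minimal retry sketch, assuming bluebird and a hypothetical createJob function; the retry count would come from per-platform configuration, e.g. 2 for crossbrowsertesting.com as suggested above:

    const Promise = require('bluebird')

    function createJobWithRetry(createJob, config, retries) {
      return Promise.resolve(createJob(config)).catch(err => {
        if (retries > 0 && /named tunnel.*does not appear to exist/.test(err.message)) {
          // Give the platform monitor a moment to bring the tunnel back up
          return Promise.delay(5000).then(() =>
            createJobWithRetry(createJob, config, retries - 1))
        }
        throw err // non-retryable, or retries exhausted
      })
    }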

Fix semantics of Platform::open relative to the interplay of tunnels

Say one calls open such that it creates a BrowserStack tunnel without an identifier. Now it is called again so that an attempt is made to create a BrowserStack tunnel with an identifier. While doing the latter, the first tunnel would be killed, to keep the Platform's behavior around tunnels sane; but then the platform monitor would try to bring it back. A series of integration tests created this condition, and as of now some tests have been removed to avoid it.

  1. Should there be platform-specific logic such that a tunnel about to be killed is removed from the platform's set of tunnels, so the monitoring logic does not consider it?
  2. But what happens to a test that required the tunnel that was cleaned up? We have to worry about this, as otherwise a user's tests break.

It seems that freezing open so that it can be called only once would save us from the extra management hassle (point 1). However, this also means going through all the tests to check that the unit tests, and all integration/functional tests where the state of the server is persistent, still hold true. Given the greater concern around point 2, freezing open and then fixing the tests should be the right way. This also aligns with the fact that when testem or any other test config wants multiple tunnels, it can be done; but whether a single tunnel or multiple tunnels, that needs to be specified once for the lifetime of the tests that are going to run.
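
A minimal sketch of the once-only open semantics, using a hypothetical Platform class rather than the project's actual code:

    // All tunnels for the life of the test run are declared in one call;
    // a second call is rejected instead of killing and recreating tunnels.
    class Platform {
      constructor() {
        this.opened = false
      }
      open(tunnelConfigs) {
        if (this.opened) {
          return Promise.reject(new Error('Platform.open can be called only once per run'))
        }
        this.opened = true
        return this.createTunnels(tunnelConfigs)
      }
      createTunnels(configs) {
        // placeholder for platform-specific tunnel creation
        return Promise.resolve(configs)
      }
    }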

Tunnel Arguments Passing

As of now, only a tunnel identifier can be passed to the Platform layer through the open and run* APIs, which limits usage to a single tunnel. Several multiple-tunnel tests at higher layers have been removed, and currently asking the user "do you want to use multiple tunnels" is useless, because if the user wants multiple tunnels, the tests eventually won't run except in some narrow cases.

Tunnels should be treated as equal in importance to browsers and capabilities. A clean way to get rid of these limitations would be to pass platform-specific tunnel input at the Platform layer, so that all integrations with the Platform APIs can start expecting more fluid behavior and the various multiple-tunnel tests can come alive again.
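
A sketch of what richer, platform-specific tunnel input might look like; the platform object and option names here are illustrative, not an actual tunnel binary's flags:

    // Today (roughly): only an identifier can be supplied per tunnel
    platform.open([ { tunnelIdentifier: 'tunnel-a' } ])

    // Proposed: a full platform-specific spec per tunnel, treated with the
    // same importance as browsers and capabilities
    platform.open([
      { tunnelIdentifier: 'tunnel-a', proxy: 'localhost:3128', verbose: true },
      { tunnelIdentifier: 'tunnel-b', directDomains: [ 'localhost' ] }
    ])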

CI environment information is not used by native runner

Though the tests for cross-browser-tests-runner itself seem to use the CI environment to get the project, test, session and build values (as verified on the BrowserStack Automate console), things do not work for the browse.js build, where defaults show up for project, test, session and build in the BrowserStack Automate console.

SauceLabs jobs are not being marked as successful

Stopping a job should mark it as successful. There is a bit of juggling around the "passed" attribute; jobs should be marked as passed in all the tests we are running so far, but that is not happening. Needs to be debugged and fixed.
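
For reference, SauceLabs jobs are marked passed by updating the job over its REST API; a minimal sketch using Node's https module, based on the classic Sauce REST v1 endpoint (worth verifying against current docs):

    const https = require('https')

    // PUT /rest/v1/:username/jobs/:jobId with {"passed": true|false},
    // authenticated with the standard SAUCE_USERNAME/SAUCE_ACCESS_KEY pair.
    function markJob(jobId, passed) {
      const body = JSON.stringify({ passed: passed })
      const req = https.request({
        method: 'PUT',
        hostname: 'saucelabs.com',
        path: '/rest/v1/' + process.env.SAUCE_USERNAME + '/jobs/' + jobId,
        auth: process.env.SAUCE_USERNAME + ':' + process.env.SAUCE_ACCESS_KEY,
        headers: {
          'Content-Type': 'application/json',
          'Content-Length': Buffer.byteLength(body)
        }
      }, res => {
        console.log('update job status:', res.statusCode)
      })
      req.on('error', err => console.error(err))
      req.end(body)
    }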

[crossbrowsertesting] "Too many tunnels with that name found; this should never happen." error

Even if a tunnel dies on the test server side, the corresponding tunnel process inside the crossbrowsertesting system does not, and as the platform monitor restarts tunnels, system-side tunnels accumulate, eventually leading to this error when a job is created.

A new interface 'restart' is needed on Tunnel, such that when a tunnel dies the monitor calls 'restart' rather than 'start'. The new method should take care of clean-ups and then start the process. For this issue, the clean-up would be to ensure that the server-side tunnel dies as well (stop it using the tunnel stop REST API).

This change should also apply to SauceLabs.
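
A sketch of the proposed restart semantics; stopRemote stands in for whatever the platform's tunnel-stop REST call is, and the class shape is illustrative:

    // Clean up the lingering server-side tunnel first, then start afresh,
    // so system-side tunnels cannot accumulate across monitor restarts.
    class Tunnel {
      restart() {
        return this.stopRemote()
          .catch(() => { /* already gone - nothing to clean up */ })
          .then(() => this.start())
      }
      stopRemote() {
        // platform's tunnel-stop REST API call would go here
        return Promise.resolve()
      }
      start() {
        // spawning of the tunnel binary would go here
        return Promise.resolve()
      }
    }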

Retry on SauceLabs tunnel connection failures

SauceLabs tunnels fail for various reasons:

  1. a "didn't come up after 1m" error
  2. a "could not establish connection" error

The code should retry starting the tunnel on these two errors.
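
A minimal sketch of such a retry, assuming a hypothetical tunnel object whose start() rejects with the error text above:

    // Retry tunnel start only on the two known transient failures.
    const TRANSIENT = [
      /didn't come up after 1m/,
      /could not establish connection/
    ]

    function startWithRetry(tunnel, attempts) {
      return tunnel.start().catch(err => {
        if (attempts > 1 && TRANSIENT.some(re => re.test(err.message))) {
          return startWithRetry(tunnel, attempts - 1)
        }
        throw err
      })
    }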

v0.1.1 - missing server.js and other issues

  • The command to start the server fails because the npm package does not include the server.js file, due to a bug in package.json
  • If browser versions/devices are specified inline (not in lines beginning with "-") in the .cbtr-browsers.yml file, the cbtr-init tool fails
  • The require(inputFile) at line #38 does not work in cbtr-testem-browserstack-init unless the complete path is specified for the input file (--input option)
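
The third bullet looks like the classic relative-require pitfall: require() resolves relative paths against the calling module's directory, not the working directory. A likely fix, with the option parsing simplified for illustration:

    const path = require('path')

    // Resolve the user-supplied --input path against the current working
    // directory before require()-ing it, so relative paths work too.
    const inputFile = process.argv[process.argv.indexOf('--input') + 1]
    const config = require(path.resolve(process.cwd(), inputFile))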

Should be able to pass more platform-specific or generic capabilities

As of now, the capabilities and browser parameters that can be passed are limited to certain commonly used ones. A user may need other capabilities, which the current design does not allow. This needs to change to make the system truly transparent and flexible.
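
One hedged sketch of how this could work: merge arbitrary user-supplied capabilities over the known, validated ones instead of whitelisting a fixed set (the capability names shown are just examples):

    function buildCapabilities(known, extra) {
      // `extra` is passed through untouched, so users are not limited
      // to the parameters the tool knows about
      return Object.assign({ }, known, extra)
    }

    buildCapabilities(
      { browserName: 'chrome', platform: 'Windows 10' },
      { acceptInsecureCerts: true, 'goog:chromeOptions': { args: [ '--headless' ] } }
    )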

cbtr-init should cross-check OS and OS version against platform-specific browsers configuration

As of now, OS and OS version are checked against the global cbtr-conf.json. This is a bug as well as a bad user experience, because any errors generated present the user with a much larger set of OSes and OS versions (collected across multiple cross-browser testing platforms) than may be available on the platform the user is actually targeting.

Is there still value in storing OS and OS versions in cbtr-conf.json then, or should it hold only content exclusive to it, like the set of browser aliases that has to be prepared manually?
