One tool for cross-browser testing across multiple platforms, runners and frameworks.
See the documentation
Seamless cross-browser testing with multiple platforms and runners
License: MIT License
As of now this does not work for BrowserStack, and struggles for SauceLabs anyway (#20). The required behavior is to be able to set the session status as per the test data.
As of now cbtr-init overwrites the output test settings file if it already exists. It does not read the existing file and update only the browsers. The resulting experience is not very user-friendly.
Should there be an option to merge the browsers as well, or should this remain overwrite-only?
As of now there are 2 binaries - one for BrowserStack and one for SauceLabs. As the number of cross-browser testing platforms integrated increases, this pattern leads to too many binaries to know of and use.
There should be just one binary that takes a '--platform' input and internally does what the current two tools do; in the future, this tool should be extended as more cross-browser platforms are added.
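A minimal sketch of such a single binary dispatching on a '--platform' argument; the platform names and handler functions below are hypothetical placeholders, not the project's actual module layout.

```javascript
// Hypothetical handlers standing in for the logic of the current two binaries
const handlers = {
  browserstack: args => 'browserstack-init:' + args.join(','),
  saucelabs: args => 'saucelabs-init:' + args.join(',')
}

function dispatch(argv) {
  const idx = argv.indexOf('--platform')
  if (idx === -1 || !argv[idx + 1]) {
    throw new Error('--platform <name> is required')
  }
  const platform = argv[idx + 1]
  const handler = handlers[platform]
  if (!handler) {
    throw new Error('unknown platform: ' + platform)
  }
  // forward the remaining arguments to the platform-specific logic
  const rest = argv.filter((a, i) => i !== idx && i !== idx + 1)
  return handler(rest)
}
```

Adding a new platform then means adding one entry to the handler table rather than shipping another binary.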
In the "evaluate-all" build that tries to run a Jasmine 1.x test on all browsers of a platform, several jobs for the SauceLabs major browser sets for Chrome, Firefox and Mobile Safari throw up this error. In all cases the server-side error (visible on the job link) is that the named tunnel used by a session does not exist on the server.
The following needs to be investigated/fixed:
On SauceLabs, the Linux OS has no corresponding OS version, while our structure assumes one to always be there; hence the "None" OS version was invented. Now this is getting passed in the OS/platform input argument to JS test creation and failing the tests.
Tests need to be added for this error, and a mechanism to handle it needs to be devised. Should this be handled by the native runner or by the platform?
Native Runner uses runMultiple to create new tests. If one of the jobs fails to create, the other jobs are not created either, along with other messy combinations where a session may actually be created but, because of the failure, the corresponding job structure is not created by the Platform layer and nothing is returned to the native runner.
The whole workflow of creating jobs using runMultiple and run (for retries) needs to be cleaned up so this is taken care of:
Unhandled rejection Error: Platforms.Core.Platform: manager returned a tunnel with pid 4939 tunnel-id cbt-analyzer-crossbrowsertesting that was not created by platform
at findPlatformTunnel (/home/travis/build/[secure]/cbt-analyzer/node_modules/[secure]/lib/platforms/core/platform.js:269:9)
at Manager.withId.then.procs (/home/travis/build/[secure]/cbt-analyzer/node_modules/[secure]/lib/platforms/core/platform.js:233:18)
at tryCatcher (/home/travis/build/[secure]/cbt-analyzer/node_modules/bluebird/js/release/util.js:16:23)
at Promise._settlePromiseFromHandler (/home/travis/build/[secure]/cbt-analyzer/node_modules/bluebird/js/release/promise.js:512:31)
at Promise._settlePromise (/home/travis/build/[secure]/cbt-analyzer/node_modules/bluebird/js/release/promise.js:569:18)
at Promise._settlePromise0 (/home/travis/build/[secure]/cbt-analyzer/node_modules/bluebird/js/release/promise.js:614:10)
at Promise._settlePromises (/home/travis/build/[secure]/cbt-analyzer/node_modules/bluebird/js/release/promise.js:693:18)
at Async._drainQueue (/home/travis/build/[secure]/cbt-analyzer/node_modules/bluebird/js/release/async.js:133:16)
at Async._drainQueues (/home/travis/build/[secure]/cbt-analyzer/node_modules/bluebird/js/release/async.js:143:10)
at Immediate.Async.drainQueues (/home/travis/build/[secure]/cbt-analyzer/node_modules/bluebird/js/release/async.js:17:14)
at runCallback (timers.js:789:20)
at tryOnImmediate (timers.js:751:5)
at processImmediate [as _immediateCallback] (timers.js:722:5)
Traces like this can be seen in jobs that run tests on crossbrowsertesting browsers. For example: https://travis-ci.org/cross-browser-tests-runner/cbt-analyzer/jobs/298030788
This should have never happened in any given build job.
The short forms of command line options have not been tested in the functional tests. Need to add tests, verify and fix any issues found.
Seen in: https://travis-ci.org/browsejs/browse.js/jobs/285042162, https://travis-ci.org/browsejs/browse.js/jobs/285042163
It is clear that the tunnel died somehow and the platform monitor tried to stop it. However, the tunnel somehow got restarted on its own, which could be a feature of SauceLabs tunnels. The tunnel start code then threw an error that a conflicting tunnel is already running. The worst part is that even though the promise structure looks fine, this error ends up as an unhandled rejection. At the topmost layer of the monitor there is a catch block, but this error never reaches it.
The fishy thing around the SauceLabs tunnel is that this may be a case of an unexpected exit, yet no error has been printed; it looks like the tunnel exited normally. Is there a race condition somewhere?
Native runner gets to "know" of running tests only after the runMultiple or runMultipleScripts promises return. Sometimes a small sample test completes very fast on platforms like SauceLabs and CrossBrowserTesting, and the browser sends the test report before the state of running tests can be created in native runner. This race condition leads to the test results sent by the fast sessions being ignored, because native runner does not know of these tests yet.
A browser responds with test results, the logs are printed, and a few seconds later the monitor logs appear saying the browser did not respond with results.
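One way to close this race is to register a pending entry for each test before its session is created, so that an early report from a fast browser always finds known state. A minimal sketch, with illustrative data structures that are not the project's actual ones:

```javascript
// Illustrative registry: entries are created before sessions are launched,
// so a report arriving early still matches a known test.
class TestRegistry {
  constructor() { this.pending = new Map() }
  // call before creating the remote session
  register(testId) { this.pending.set(testId, { reported: false }) }
  // call when a browser posts its results
  report(testId, results) {
    const entry = this.pending.get(testId)
    if (!entry) return false // unknown test: the bug being avoided
    entry.reported = true
    entry.results = results
    return true
  }
}
```

If registration happens before job creation rather than after the runMultiple promises resolve, report() can never return false for a test the runner itself started.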
CircularJSON errors occur from time to time with certain tests.
Verify browser support (starting with the lowest supported ones) for https://github.com/WebReflection/circular-json/blob/master/build/circular-json.js, and include it.
CrossBrowserTesting.com testing support:
For browse.js, build/test/project are not being marked.
Currently it is: node istanbul cover js-file -- arguments
On Windows it has to be: node istanbul cover node -- js-file arguments
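A sketch of building the istanbul command so both forms are covered; the argument order follows the two command lines above, and the helper name is an assumption.

```javascript
// Build the istanbul cover argv depending on the OS. On Windows, istanbul
// must cover the node executable with the script after '--'; elsewhere it
// can cover the script file directly.
function istanbulArgv(jsFile, args, platform) {
  if (platform === 'win32') {
    return ['istanbul', 'cover', 'node', '--', jsFile].concat(args)
  }
  return ['istanbul', 'cover', jsFile, '--'].concat(args)
}
```

The caller would pass process.platform and hand the result to child_process spawn.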
This would help easily figure out how different browsers ran the same tests.
Support for:
CrossBrowserTesting does support named tunnels (but it's not documented). It was seen that the behavior is very similar to SauceLabs tunnels.
Add support for named tunnels.
The messages created using log.* do get timestamps. However, several messages created using console.* by native runner and other binaries do not have timestamps, which they should.
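A minimal sketch of a timestamped writer that could replace bare console.* calls in the binaries; the ISO-8601 prefix format is an assumption, not the project's convention.

```javascript
// Prefix a log line with an ISO timestamp; 'now' is injectable for testing
function timestamped(line, now) {
  const ts = (now || new Date()).toISOString()
  return ts + ' ' + line
}

// Wrap any sink (e.g. console.log) so every message gets a timestamp
function makeLogger(write) {
  return msg => write(timestamped(msg))
}
```

Binaries would then use makeLogger(console.log) instead of console.log directly.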
The main issue is raised here: testem/testem#1142. As of now, testing indicates that testem logic is the cause of this.
There is no reason to have test behavior/flow related options split across the settings JSON and the command line. They should all be in one place (except the --native-runner switch): either on the command line (not convenient) or in the settings file (better user experience). There could be a model of overriding things from the settings JSON using command line parameters, but that would just be frivolous code addition.
Move all the command line options that go with native runner into the settings JSON.
The initial idea was that with cross-browser testing, an error in a test can sometimes be inherent behavior that persists and cannot be changed, and a tester may have reasons to keep the test in the build. Keeping this in mind, native runner currently does not mark the build as failed; it always passes.
Rather than settling on a polarized design, it would be better to provide the user with an option to choose the behavior: whether a test failure leads to a build failure or not.
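Such an option could be a single boolean in the settings. A sketch, where the option name 'failBuildOnTestFailure' is an assumption for illustration:

```javascript
// Decide the build exit code from the settings flag and the failure count.
// Default (flag absent/false) preserves today's always-pass behavior.
function buildExitCode(settings, failedTests) {
  if (failedTests > 0 && settings.failBuildOnTestFailure) {
    return 1
  }
  return 0
}
```

Native runner would call this once at the end and assign the result to process.exitCode.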
Selenium jobs often fail with ETIMEDOUT errors with BrowserStack. The code should retry creating such jobs.
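A sketch of such a retry, assuming job creation returns a promise and the timeout surfaces as an error whose code property is 'ETIMEDOUT'; the retry count is illustrative.

```javascript
// Retry a promise-returning create function while it fails with ETIMEDOUT
function createWithRetry(create, retries) {
  return create().catch(err => {
    if (err && err.code === 'ETIMEDOUT' && retries > 0) {
      return createWithRetry(create, retries - 1)
    }
    throw err // other errors, or retries exhausted: propagate
  })
}
```

Errors other than the timeout still propagate immediately so real failures are not masked.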
As seen in this build: https://travis-ci.org/cross-browser-tests-runner/cross-browser-tests-runner/builds/270062308.
This seems to be the issue with testem raised here: testem/testem#1142. Also recorded in #2.
Fresh debugging is required to find the root cause of this.
In platforms/core/platform.js, in scheduleScript, the presence of the script job's 'browserstack.debug' option is checked to decide whether to take a screenshot. It should use a platform-agnostic method, e.g. scriptJob.needsScreenshot().
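A sketch of how such a hook could look: a default on the base script-job class, overridden per platform. The class names and capability layout here are illustrative, not the project's actual hierarchy.

```javascript
// Base class: by default a platform takes no screenshots
class ScriptJob {
  constructor(capabilities) { this.capabilities = capabilities || {} }
  needsScreenshot() { return false }
}

// BrowserStack override keeps the platform-specific knowledge in one place
class BrowserStackScriptJob extends ScriptJob {
  needsScreenshot() { return !!this.capabilities['browserstack.debug'] }
}
```

scheduleScript would then call scriptJob.needsScreenshot() without knowing which platform the job belongs to.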
https://travis-ci.org/browsejs/browse.js/jobs/285042163
Job creation returns (in all cases), so we never get into the clean exit path that follows a job creation failure, yet SauceLabs shows an invalid OS/browser combination error.
As seen since the very beginning of the Windows builds, file creation/handling issues exist with the 'sync' versions of the APIs. As more components have been added, several new tests written for them have been disabled in the Windows build because of this experience, and some of them were disabled without ever being tried even once.
Minimize the forced disabling of such tests, based on real testing.
There are several points where local testing is enforced or assumed. This is a big limitation; the only integration where it is correct to apply it is testem, where it should remain. Everywhere else, this limitation must be removed.
This would cut the work users do in running each platform's update separately.
Platform-specific update executables should remain, to allow users to do specific updates if required.
Hello. Can I use it with inline JavaScript code under Node.js?
With crossbrowsertesting.com, tunnels die often. It is known behavior that if a tunnel with the given name does not exist while creating a job, then this error is thrown. However, this currently results in an unhandled rejection, which jeopardizes the whole set of jobs being created by native runner. So if 1 of the 5 jobs in the current set encounters this error, the other 4 get left out as well.
There should be a mechanism to retry creation of a job that fails with this error, because the platform will definitely try to bring a dead tunnel back up, so the job would get created sooner or later.
There are several ways this can be tackled:
Another important point is that different platforms may need different retry counts due to the difference in their tunnel stability levels. It may be good to set retries = 2 for crossbrowsertesting.com tests.
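A sketch of per-platform retry counts combined with a retry loop for the missing-tunnel error; the numbers, table, and error-message check are assumptions for illustration.

```javascript
// Per-platform job-creation retries; less stable tunnels get more attempts
const JOB_CREATE_RETRIES = {
  crossbrowsertesting: 2,
  saucelabs: 1,
  browserstack: 1
}

function retriesFor(platform) {
  return JOB_CREATE_RETRIES[platform] || 0
}

// Retry a promise-returning create function while the failure looks like the
// missing-tunnel error; the message pattern is an assumption
function createJobWithRetry(create, platform) {
  let attempts = retriesFor(platform) + 1
  function attempt() {
    return create().catch(err => {
      attempts -= 1
      if (attempts > 0 && /tunnel.*not exist/i.test(String(err && err.message))) {
        return attempt()
      }
      throw err
    })
  }
  return attempt()
}
```

Other jobs in the same set are unaffected because each job's retries are contained in its own promise chain.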
It failed an app that used cross-browser-test-runner on Linux.
Let's say one calls open such that it creates a BrowserStack tunnel without an identifier. Now it is called again, so that an attempt is made to create a BrowserStack tunnel with an identifier. While doing the latter, the first tunnel would be killed (as per achieving sane Platform behavior around tunnels), but then the platform monitor would try to bring it back. A series of integration tests created this condition, and as of now some tests have been removed to avoid it.
It seems that freezing the call to open to occur only once can save us from the extra management hassle (point no. 1). However, this also means we need to go through all the tests to see whether the unit tests, and all integration/functional tests where the state of the server is persistent, still hold true. Worrying more about point no. 2, freezing open and then fixing the tests should be the right way. This also aligns with the fact that when testem or any other test config wants multiple tunnels, it can be done; but whether a single tunnel or multiple tunnels are used needs to be specified once for the life of the tests that are going to run.
As of now only a tunnel identifier can be passed to the Platform layer through the open and run* APIs, which limits usage to a single tunnel. Several multiple-tunnel tests at higher layers have been removed, and currently asking the question "do you want to use multiple tunnels" is useless, because if the user wants multiple tunnels, the tests eventually won't run except in some narrow cases.
Tunnels should be treated as equal in importance to browsers and capabilities. A clean way to get rid of the limitations would be to pass platform-specific tunnel input at the Platform layer, so all integrations with the Platform APIs can start expecting more fluid behavior and the various multiple-tunnel tests can come alive again.
Though the tests for cross-browser-tests-runner seem to use the CI environment to get project, test, session and build values (as verified with the BrowserStack Automate console), things do not work for the browse.js build, where defaults show up for project, test, session and build in the BrowserStack Automate console.
Seen in https://travis-ci.org/cross-browser-tests-runner/cross-browser-tests-runner/builds/270062308.
Sometimes tunnel closure fails with a 'retries going on' error, which means that a tunnel could not be closed within the given retries. As of now this is not a recoverable condition, as everything ahead depends on a successful closure of the tunnel.
Stopping a job should mark it as successful. There is a bit of juggling around the "passed" attribute; things should be marked as passed in all the tests we are running so far, but that is not happening. Need to debug and fix.
Even if a tunnel dies on the test server side, the corresponding tunnel process inside the crossbrowsertesting system does not, and as the platform monitor restarts a tunnel, system-side tunnels accumulate, eventually leading to this error when a job is created.
A new interface, 'restart', is needed on Tunnel, such that when a tunnel dies the monitor calls 'restart' and not 'start'. The new method should take care of clean-ups and then start the process. For this issue, the clean-up would be to ensure that the server-side tunnel dies as well (stop it using the tunnel stop REST API).
This change should also apply to SauceLabs.
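A minimal sketch of the proposed interface; 'serverStop' stands in for the real tunnel-stop REST call and 'start' for the real process launch, both illustrative here.

```javascript
// Sketch of a Tunnel with the proposed 'restart': clean up the server-side
// tunnel first, then start a fresh process. The injected ops are stand-ins
// for the real REST call and process launch.
class Tunnel {
  constructor(ops) { this.ops = ops; this.log = [] }
  start() { this.log.push('start'); return this.ops.start() }
  serverStop() { this.log.push('server-stop'); return this.ops.serverStop() }
  restart() {
    // ensure the platform-side tunnel is gone before launching a new one,
    // so dead tunnels do not accumulate on the server
    return this.serverStop().then(() => this.start())
  }
}
```

The monitor would call restart() on death instead of start(), for both crossbrowsertesting and SauceLabs.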
As of now several errors are not reflected in the exit code:
These should be taken into account to set a process exit status.
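One possible shape: map recorded error kinds to distinct non-zero codes. The kinds and numbers below are assumptions for illustration, not the project's actual set.

```javascript
// Illustrative mapping of error kinds to exit codes
const EXIT_CODES = {
  'tunnel-failure': 2,
  'job-creation-failure': 3,
  'test-failure': 1
}

// Fold the errors recorded during a run into a single process exit status
function exitCodeFor(errors) {
  if (!errors.length) return 0
  for (const kind of errors) {
    if (EXIT_CODES[kind]) return EXIT_CODES[kind]
  }
  return 1 // unrecognized errors still make the run fail
}
```

The binaries would assign the result to process.exitCode before returning.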
On Appveyor, npm install fails for node v4 and node v5.
SauceLabs tunnels fail for various reasons.
The code should retry starting the tunnel on these 2 errors.
It's a regression caused by the migration to using the Values abstraction.
When the osVersion is set to "None", it should not get into the platform name capability.
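A sketch of the fix at the capability-mapping step; the function name and the space-joined capability format are assumptions for illustration.

```javascript
// Keep the invented "None" OS version out of the platform capability:
// e.g. SauceLabs Linux has no OS version, so send the bare OS name
function platformCapability(os, osVersion) {
  if (!osVersion || osVersion === 'None') {
    return os
  }
  return os + ' ' + osVersion
}
```

This confines "None" to the internal structure and never leaks it into test creation input.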
server.js file included due to a bug in package.json.
With the .cbtr-browsers.yml file, the tool cbtr-init fails.
require(inputFile) at line #38 does not work in cbtr-testem-browserstack-init unless the complete path is specified for the input file (--input option).
If a job fails to create, native runner does not notice it and declares "passed" at the end. It should be able to detect such failures and say "failed" in case retries cannot take care of such failed jobs.
https://travis-ci.org/cross-browser-tests-runner/cbt-analyzer/jobs/298030780
This runs tests on a set of Mobile Safari browsers supported by crossbrowsertesting. In this job, it seems the native runner got stuck somewhere and did not quit. It is not clear whether it is still waiting for a particular browser to respond.
Here it can be seen that crossbrowsertesting tunnels die every 3-4 minutes.
As of now, the capabilities and browser parameters that can be passed are limited to certain commonly used ones. A user may need to use other capabilities, which is not allowed today by design. This needs to change to make the system truly transparent and flexible.
The node_modules/.bin/cbtr-crossbrowsertesting-update link does not exist. Fix package.json to include it.
This happens because both the build number and the job number are used while deriving the session ID. So far seen with Appveyor and Travis (Circle has a different mechanism altogether).
As of now, OS and OS version are checked against the global cbtr-conf.json, which is a bug as well as a bad user experience: any errors generated present the user with a much larger set of OSes and OS versions (collected across multiple cross-browser testing platforms) than may be available on the platform the user is actually targeting.
Is there still value in storing OS and OS versions in cbtr-conf.json then, or should it hold only exclusive content, like the set of browser aliases that has to be prepared manually?