avajs / ava
Node.js test runner that lets you develop with confidence 🚀
License: MIT License
Should be a flag, like `$ ava --tap`, to get TAP output. This would be a good way to enable reporters without actually supporting reporters. Can just refer people to all the available TAP reporters instead of reinventing the wheel.
From #9 (comment).
Does it make sense to have this? `--bail`, `--fail-fast`, other?
On that note, any thoughts on failing fast by default? Any reason you'd want the test run to complete before failing? I'm sure there are, since test runners usually run to completion. Just useful to have this documented either way.
Adding some sort of `before` and `after` hooks might be useful. Especially for creating and removing fixtures or any other generated files after tests have finished.
The text AVA should only use `V` and just turn it upside down for the A's.
Too early to add just yet, but we can at least discuss it.
test(async t => {
const ret = await foo();
t.is(ret, 'unicorn');
});
Hot!
I think what would be required is to add support for accepting a promise in the `test` function and enabling the experimental `es7.asyncFunctions` Babel transformer. Spec.
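A minimal sketch of that promise support (hypothetical `runTest` helper, not AVA's actual internals): if the test function returns a thenable, wait on it before ending the test.

```javascript
// Hypothetical sketch, not AVA's actual internals: if the test
// function returns a thenable, end the test when it settles.
function runTest(fn, t) {
	const ret = fn(t);
	if (ret && typeof ret.then === 'function') {
		// Promise-returning (or async) test
		return ret.then(() => t.end(), err => t.end(err));
	}
	// Callback-style test: fn is expected to call t.end() itself
	return Promise.resolve();
}
```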
Promises are silent by default and it's easy to forget handling a promise somewhere down the chain, or forget to return the promise in the test. Using `loud-rejection`, we can throw in those cases so the user knows they did something wrong.
Need to wait on sindresorhus/loud-rejection#1.
Note to self: Document this in the readme too.
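What loud-rejection does, roughly, sketched against Node's `unhandledRejection` event (a simplification, not the real package):

```javascript
// Rough sketch of what loud-rejection does (the real package is
// more careful): surface unhandled rejections instead of letting
// them fail silently.
function loudRejections(proc) {
	proc.on('unhandledRejection', err => {
		console.error((err && err.stack) || err);
		proc.exitCode = 1;
	});
}

// Usage: loudRejections(process);
```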
When using `async`/`await`, the `throws()` assertion doesn't work.
test('throws error', async t => {
t.throws(await fn(), Error);
});
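One way around it, sketched here as a hypothetical helper (not an AVA API): pass the promise itself rather than the awaited value, and assert on the rejection.

```javascript
// Hypothetical helper, not an AVA API: assert that a promise
// rejects with the given error type. Passing `await fn()` hands
// the assertion an already-unwrapped value, which is why the
// pattern above can't work.
function throwsAsync(promise, ErrorType) {
	return promise.then(
		() => { throw new Error('Expected promise to be rejected'); },
		err => {
			if (!(err instanceof ErrorType)) {
				throw err;
			}
			return err; // resolve with the expected error
		}
	);
}
```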
Would be nice to support ES6 built-in at runtime by using the Babel require hook.
That way you can just write your tests in ES6 and have it run in any node version.
This is so nice:
test(t => {
t.ok();
t.end();
});
I think the default reporter should be a super minimal one. It should only log detailed test info if a test fails, otherwise just `1 test passed`.
It could have a live counter, so while tests are running the line is dynamically updated with `3/12 tests passed`, and when done rewritten to be `12 tests passed`. That way it can be live, but also minimal.
Can use log-update.
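A minimal sketch of that live counter using a bare carriage-return rewrite (log-update, as suggested, would handle terminal output more robustly):

```javascript
// Minimal live-counter sketch; the log-update package would
// handle terminal rewriting more robustly.
function makeCounter(total, stream) {
	let passed = 0;
	return {
		pass() {
			passed++;
			// \r rewrites the current line in place
			stream.write(`\r${passed}/${total} tests passed`);
		},
		done() {
			stream.write(`\r${passed} tests passed\n`);
		}
	};
}
```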
I think we might have to add this. Some things just can't be run concurrently. The serial tests can be run first, then we run the rest concurrently.
Like when you need to overwrite `process.stdout.write`. If you do it concurrently they might overwrite each other. Example: https://github.com/sindresorhus/beeper/blob/43045b8b3cda9a6f1808014f97d4c21f0bc3b89e/test.js
Low-priority though.
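A scheduling sketch of that (hypothetical, not AVA's implementation): run the serial tests one after another, then everything else concurrently.

```javascript
// Hypothetical scheduling sketch: serial tests run one at a
// time, in order; the remaining tests run concurrently.
async function runAll(serialTests, concurrentTests) {
	const results = [];
	for (const run of serialTests) {
		results.push(await run()); // one at a time
	}
	// everything else at once
	const rest = await Promise.all(concurrentTests.map(run => run()));
	return results.concat(rest);
}
```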
Currently we don't have any options or configuration, but we will need such a thing as we go further.
As @sindresorhus said in this comment, we can read CLI arguments using `process.argv`, but there should be a real API for configuring the test interface, like this:
var test = require('ava');
test.config({ fullTrace: true });
If we have this, we can extend the config with CLI arguments. Individual tests' configurations should override CLI arguments.
@sindresorhus Please label as question.
So I have this repo set up with `npm test` as `istanbul cover ava -- test/**/*.spec.js`.
However, running `npm test` results in the tests running, but istanbul exits saying `No coverage information was collected, exit without writing coverage information`.
Which is weird, because simply running `istanbul cover ava -- test/**/*.spec.js` manages to collect the coverage information just fine.
I've pinned the difference down to the published npm version (0.2.0) and the version on GitHub. The only reason I started using the GitHub version was for `beforeEach`.
We should prefix the tests with the filename, minus the extension and the `test-` prefix, so this isn't needed.
`test-url.js` with `test('foo bar')` would be:
✔ url - foo bar
// @floatdrop
`this._skip` is still `false` after running `.skip()`. It needs `.bind(this)` to work.
var i = 0;
ava(function (a) {
a.skip(); // a.skip.bind(this) works
i++;
a.end();
}).run(function () {
t.is(i, 0);
t.end();
});
This fails with `i` being `1`.
Even though I usually prefer just `node test.js`, having the CLI is useful when you have multiple test files.
It should support: (`--debug` flag?)
Found here - kevva/download#77 (comment)
Consider this test file:
var test = require('ava');
test('no asserts', function (t) {
// still passes
});
`ava` will exit silently, without the `no asserts` test in the output.
When trying to log the stack along with the results in `index.js`, you only get the `err.message`. Somehow the stack gets lost in the callback. When logging `err.stack` right before the callback you get the correct stack, so it should be there.
https://github.com/sindresorhus/ava/blob/master/lib/test.js#L79
Guess this needs fixing before we can really get going with #6.
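One way to sidestep losing it: serialize the interesting fields to plain strings as soon as the error is caught (a sketch, not the actual fix):

```javascript
// Sketch, not the actual fix: capture the stack as a plain
// string as soon as the assertion error is caught, so it can't
// get lost while being passed through callbacks.
function serializeError(err) {
	return {
		message: err.message,
		stack: err.stack
	};
}
```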
For running everything serially so it's easier to debug concurrency issues.
In the future it might do other things too to aid in debugging. Suggestions welcome.
`t.plan` isn't working when using `async`/`await`.
test('generate screenshots', async t => {
t.plan(2);
const streams = await new Pageres()
.src('yeoman.io', ['480x320', '1024x768', 'iphone 5s'])
.src('todomvc.com', ['1280x1024', '1920x1080'])
.run();
t.is(streams.length, 5);
streams[0].once('data', data => t.true(data.length > 1000));
});
This will throw an error `Assertion count does not match planned`. When using `t.plan(1)` it works correctly, but of course doesn't test the length of the data.
This was pointed out here sindresorhus/pageres#213 (comment).
One thing that's nice about nested tests is being able to support different before, after, beforeEach, and afterEach functions.
Would probably be a pretty large refactor, but I could see API looking like this:
test('a model', function(t, test) {
test('should do this', function (t) {
// do test
})
})
SIGSEGV occurs when a program performs a native memory operation that isn't valid (e.g. reading from or writing to unallocated memory).
This is important for native module programming, as if it segfaults it'd be nice to gracefully handle it as a test failure instead of terminating the offending process - especially when there are lots of tests.
However, there should be discussion on it, as I believe it requires you to use `fork` (though not entirely sure). I'd have to do more research.
Not entirely sure this is relevant to Windows, either (with Windows, there isn't a great way to recover from an access violation) - you'd probably need to write a native module with `/EHsc` in the flags and wrap something in a try/catch statement (though that's considered icky and probably wouldn't work with V8).
Currently we aren't keeping track of all the pending assertions. `tape` is using this: https://github.com/substack/tape/blob/master/lib/test.js#L174-L181.
Would be cool to implement actual parallelism. Could be as simple as executing each test file in a separate Node process using `child_process.fork`. Each node sends back its result and the main process merges it and presents it to the user.
Doing it in sub-processes comes with the added benefit of each file being isolated and not being able to screw up the environment for others.
Concurrency is not parallelism. It enables parallelism. It's about dealing with, while parallelism is about doing, lots of things at once.
This is kinda critical, as it gives a false success when you use `t.end()` in an async callback.
See: sindresorhus/is-online#17 (comment)
For example, this makes the test pass, even though it should have failed:
test(function (t) {
asyncFn(function (err, isAFalseValue) {
t.assert(isAFalseValue);
t.end();
});
});
This works however:
test(function (t) {
t.plan(1);
asyncFn(function (err, isAFalseValue) {
t.assert(isAFalseValue);
});
});
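The mechanism behind why `t.plan` catches this is simple; a toy sketch (hypothetical names, not AVA's internals):

```javascript
// Toy sketch of planned assertion counts (not AVA's internals):
// ending a test before every planned assertion ran is an error,
// which is exactly what catches the async-callback case above.
function makePlan(planned) {
	let ran = 0;
	return {
		assert() { ran++; },
		end() {
			if (ran !== planned) {
				throw new Error(`planned ${planned} assertions, but ${ran} ran`);
			}
		}
	};
}
```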
Would love to use AVA with React, JSX and Babel. I don't see any option to provide a custom compiler like Mocha has - is there one?
$ ava does-not-exist
I would expect `.ifError(err)` to check that the passed-in argument is, in fact, an error. Instead, what it does is check if `err` is falsy.
How is this different from doing `.notOk(err)`?
Should it instead be renamed to `.ifNotError(err)`? Or better yet, change it to check if the argument is an error.
Thoughts?
Thank you!
Should we have some sort of timeout on the tests?
Currently we just throw an Error: https://github.com/sindresorhus/ava/blob/3ed5f9d96c812c18d145f5fa1bc941859a5b5318/lib/test.js#L117
This isn't very helpful. It should probably instead be an uncounted `AssertionError`.
How to use it? For example, Chai is very nice. Or assert.js
Tape has this nice output on errors:
at: Test.<anonymous> (/Users/sindresorhus/dev/hypot/test.js:9:4)
Currently we output the full stack trace or none. This might be something `claim` should handle.
https://github.com/substack/tape/blob/ccfcf4f76581abc3842163dfe065198764247c85/lib/test.js#L207
Should show the `err.stack` for each failed test, imo. Also, we need to sort out the error messages some, I think.
Update: Blocked by #696
Seems like it is not a good idea to spawn a fork for every test file. I tried the fresh version of `ava` in got and the test run time got worse (not to mention that iTunes gets icky while the tests run).
Maybe it is better to fork batched runners and/or have a `concurrency` flag?
Currently the cwd is the directory you execute `ava` in, but `ava` can be executed from any upper level in the project and will search down for test files, so the cwd is pretty arbitrary and useless. I want to make the cwd actually useful.
There are two options:
The reason I prefer the latter is that we run the test files, hence IMHO the cwd is where we run it. It's like we would do: `cd tests && node test.js`.
Currently you have to do this to be able to read a file relative to the test:
var path = require('path');
var fs = require('fs');
fs.readFileSync(path.join(__dirname, 'foo.json'));
If we changed `process.cwd()` to the currently running test's directory, we could instead just do:
var fs = require('fs');
fs.readFileSync('foo.json');
Thoughts? Anything I'm missing?
We do a lot of async stuff in AVA. It would be cleaner to use promises instead.
For example here: https://github.com/sindresorhus/ava/pull/36/files#diff-b3b53682a18f203ac8d29b0e277cad26R97
Should use https://github.com/floatdrop/pinkie-promise and https://github.com/sindresorhus/pify.
When we've settled on an API we should start documenting stuff.
It's too big to be in the core of AVA, and I think it should be available only through an option like `--esnext` or something like that. Then users will only install it if they write in ES2015.
var resolveFrom = require('resolve-from');

if (cli.flags.esnext) {
	try {
		// prefer the user's locally installed Babel, fall back to ours
		require(resolveFrom('.', 'babel-core/register') || resolveFrom('.', 'babel/register'));
	} catch (err) {
		require('babel-core/register');
	}
}
I think we should apply `babel` only to test files, e.g. `test.js`, `test/test-something.js`.
Currently, if you have this common code:
const test = require('ava');
const mylib = require('./');
`mylib` will also be run through `babel`, and that is not good, because that way AVA transforms the library's source code. This behavior could be fixed by setting a regex for `require('babel/register')` to whitelist only test files.
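The whitelist itself could be a regex like this (a sketch; the exact patterns would need agreement):

```javascript
// Sketch of a whitelist regex matching only test files
// (test.js, test-*.js, and files inside a test/ directory),
// which could be handed to the Babel require hook's `only` option.
const onlyTestFiles = /(^|\/)(test\.js|test-[^\/]+\.js|test\/[^\/]+\.js)$/;
```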
Add execution time for each test.
Trying to get this working in the browser, and then later hihat for debugging/breakpoints/etc.
After browserifying the demo in the readme, I get the following:
Uncaught ReferenceError: setImmediate is not defined
(weird, because I thought browserify would have shimmed this)
$ ava --watch
Is this a plan?
Assertion methods that should be implemented. Not sure about the names of those with a question mark after them.
t.is()
t.not()
t.same()
t.notSame()?
t.throws()
t.doesNotThrow()?
Maybe we should export them to their own file later on so that `lib/test.js` won't get huge.
You should be able to return a promise instead of calling the `t.end()` callback.
var test = require('ava');
test(function (t) {
return db.find('something').then(function (foo) {
t.assert(foo);
});
});
I've found I very often need to assert something using a regex `.test()`.
I currently do:
assert(/markdown-body/.test(css));
But if that fails the error message isn't very helpful. I could include `css` in the second argument, but I still wouldn't know what the regex was.
Could maybe be useful to have an assertion method for this:
assert.reTest(/markdown-body/, css);
Which on failure would print the regex and the full content of `css`.
Needs a better name. And probably needs a `.notReTest` too.
@kevva thoughts?
In a lot of my node modules I have a single test without a title. Would be nice if it only showed `1 test passed` and not `✔ [anonymous]` in that case.
Transform async functions to generators using http://babeljs.io/docs/advanced/transformers/other/async-to-generator/.
Hi
The `t.same` method uses `assert.deepEqual`, which considers values equal even when primitives do not have the same type. When using `assert.deepStrictEqual`, this is not the case.
assert.deepEqual({foo: 5}, {foo: '5'});
// true
assert.deepStrictEqual({foo: 5}, {foo: '5'});
// false
Same for `t.notSame`.
If, for some reason, the implementation should be kept this way, could new assertions be added? `t.strictSame` and `t.notStrictSame`, or something like that.
Would be very useful and insanely cool, if AVA supported generators by default. I am ready to implement this right away. What does everyone think of that?
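Generator support could be driven the way the co package does it (a rough sketch, not a proposed AVA API):

```javascript
// Rough sketch of co-style generator support (not a proposed
// AVA API): drive the generator, awaiting each yielded promise.
// A full implementation would also throw rejections back into
// the generator so it can try/catch them.
function runGenerator(genFn) {
	const iterator = genFn();
	return new Promise((resolve, reject) => {
		function step(result) {
			if (result.done) {
				return resolve(result.value);
			}
			Promise.resolve(result.value).then(
				value => step(iterator.next(value)),
				err => reject(err)
			);
		}
		try {
			step(iterator.next());
		} catch (err) {
			reject(err);
		}
	});
}
```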