hapijs / lab
Node test utility
License: Other
E.g. the --grep option provided by mocha. This allows iterating on a specific test without having to wait for all of the other tests to execute.
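As a rough sketch (not lab's actual API; `filterTests` and the test object shape are hypothetical), matching test titles against a pattern could look like:

```javascript
// Hypothetical helper: keep only tests whose title matches the pattern,
// so a single test can be iterated on without running the whole suite.
function filterTests(tests, pattern) {

    var re = new RegExp(pattern);
    return tests.filter(function (test) {

        return re.test(test.title);
    });
}
```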
Jade comes with too much crap
When I run lab with the console reporter and an assertion like this fails:
expect(true).to.eql(false, "hello world");
then it will not show the message "hello world" in the output, but only that true isn't equal to false.
Note: this might also occur with other reporters.
I have a before() method which calls an action on a Mongoose model:
before(function (done) {

    User.find({}, function (err, users) {

        console.log('TEST');
        done();
    });
});
TEST is never written to the console and execution of the test suite stops at this point. If I call done() outside of the callback, then execution continues:
before(function (done) {

    User.find({}, function (err, users) {

        console.log('TEST');
    });

    done();
});
The issue seems to be Mongoose related, because other async callbacks work as expected, for example:
before(function (done) {

    setTimeout(function () { done(); }, 1000);
});
Not needed with new spec reporter
Would like a simple way to watch the test folder for changes, and re-run when something is updated.
I guess it'd be simple to wrap this call around something and come up with a DIY solution, but would be great if it was built in.
On long-running tests, when a test fails it's useful to know which test failed instead of having to wait for all of the tests to run. Alternatively, support a --bail option that stops on the first failed test.
In the spirit of lab itself, reducing usage of feature-rich outside modules to maintain simplicity and control.
Just for @isaacs
Jest's pattern for supporting other languages is nice: http://facebook.github.io/jest/docs/tutorial-coffeescript.html#content
Lab is not displaying an error when done() is called more than once. Instead, undefined is shown multiple times.
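One way this could be surfaced is by guarding the callback; a minimal sketch (the `once` name is hypothetical, not lab's internals):

```javascript
// Hypothetical guard: throw loudly if done() is invoked a second time
// instead of silently printing undefined.
function once(done) {

    var called = false;
    return function (err) {

        if (called) {
            throw new Error('done() called multiple times');
        }

        called = true;
        return done(err);
    };
}
```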
node-tap allows you to do this, which makes it super simple to debug failing test cases: node debug test.tap.js. Something like this in lab would be similarly useful. I can use node debug ./node_modules/.bin/lab test.tap.js, but that's not obvious to a lot of people.
When I run lab test.js it seems that lab isn't working; however, if I do node_modules/lab/bin/lab test.js it works.
It seems that lab only works if executed from the bin. Looks like you are invoking it the same way in hapi too?
Replaces hapijs/boom#7
I had some issues figuring out what was wrong with my tests over on hapijs/catbox#43. It looks like since most of the tests are async and the asserts are inside of callbacks, the tests fail silently if a callback doesn't fire. Obviously there was an issue with my code, but I didn't have a good understanding of that from the test output.
The only thing that let me know there was any issue was the code coverage.
This could be solved by more emphasis on code coverage, and perhaps a check that flags a test as suspect when no asserts were ever called.
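Another mitigation would be a per-test timeout that turns a never-firing callback into an explicit failure; a sketch under assumed names (`withTimeout` is not a lab API):

```javascript
// Hypothetical wrapper: fail the test if done() is not called within `ms`.
function withTimeout(testFn, ms) {

    return function (done) {

        var timer = setTimeout(function () {

            done(new Error('test timed out after ' + ms + 'ms'));
        }, ms);

        testFn(function (err) {

            clearTimeout(timer);
            done(err);
        });
    };
}
```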
Consider the following test:
var Lab = require('lab');

Lab.experiment('calculator', function () {

    Lab.before(function (done) {

        console.log('called: before');
        done();
    });

    Lab.beforeEach(function (done) {

        console.log('called: beforeEach');
        done();
    });

    Lab.afterEach(function (done) {

        console.log('called: afterEach');
        done();
    });

    Lab.after(function (done) {

        console.log('called: after');
        done();
    });

    Lab.test('returns true when zero', function (done) {

        console.log('called: test');
        Lab.expect(0).to.equal(0);
        done();
    });

    Lab.experiment('addition', function () {

        Lab.test('returns true when 1 + 1 equals 2', function (done) {

            console.log('called: test');
            Lab.expect(1 + 1).to.equal(2);
            done();
        });
    });

    Lab.experiment('subtract', function () {

        Lab.test('returns true when 1 - 1 equals 0', function (done) {

            console.log('called: test');
            Lab.expect(1 - 1).to.equal(0);
            done();
        });
    });
});
Run with tap (lab examples/nested_experiments.js -r tap) and you get the following output:
1..3
called: before
called: beforeEach
called: test
ok 1 calculator returns true when zero
called: afterEach
called: after
called: test
ok 2 calculator addition returns true when 1 + 1 equals 2
called: test
ok 3 calculator subtract returns true when 1 - 1 equals 0
# tests 3
# pass 3
# fail 0
Expected output should be:
1..3
called: before
called: beforeEach
called: test
ok 1 calculator returns true when zero
called: afterEach
called: beforeEach
called: test
ok 2 calculator addition returns true when 1 + 1 equals 2
called: afterEach
called: beforeEach
called: test
ok 3 calculator subtract returns true when 1 - 1 equals 0
called: afterEach
called: after
# tests 3
# pass 3
# fail 0
Note: I've used tap for its simple output; this is not a symptom of that formatter, but of the experiment structure built in index.js.
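The fix amounts to inheriting each parent's beforeEach/afterEach hooks into nested experiments. A simplified, synchronous model of that traversal (illustrative only, not lab's internals):

```javascript
// Simplified model: run nested experiments while inheriting
// beforeEach/afterEach hooks from every ancestor experiment.
function runExperiment(experiment, inheritedBefores, inheritedAfters, log) {

    var befores = inheritedBefores.concat(experiment.beforeEach || []);
    var afters = inheritedAfters.concat(experiment.afterEach || []);

    (experiment.tests || []).forEach(function (test) {

        befores.forEach(function (hook) { hook(log); });
        test(log);
        afters.forEach(function (hook) { hook(log); });
    });

    (experiment.experiments || []).forEach(function (child) {

        runExperiment(child, befores, afters, log);
    });
}
```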
Currently, there is no way to set a list of acceptable global variables.
For example, when using traceur, 'Symbol', '$traceurRuntime', 'System', 'Promise', 'Map', and 'traceur' are required global variables.
One solution would be to support traceur as a specific library, like it's done for blanket or ArrayBuffer, but it could be useful to provide a more general mechanism, like mocha does.
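A general mechanism could be as simple as a whitelist consulted during leak detection; an illustrative sketch (names and shape assumed, not lab's code):

```javascript
// Illustrative leak check: any global key that was not present at startup
// and is not on the user-supplied whitelist is reported as a leak.
function detectLeaks(initialKeys, whitelist) {

    return Object.keys(global).filter(function (key) {

        return initialKeys.indexOf(key) === -1 &&
            whitelist.indexOf(key) === -1;
    });
}
```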
We're totally lacking tests of our own testing framework. Seems bad.
For example, in hapijs/hapi#829, this issue would have been detected earlier if global leaks were checked.
Enable coverage and use JSON reporter instead.
I put together a simple test -- expect(1+1).to.equal(2). When I ran the test, it worked just fine. But when I altered the test to fail -- expect(2+1).to.equal(2), I got the following error after the code coverage:
npm ERR! weird error 1
npm WARN This failure might be due to the use of legacy binary "node"
npm WARN For further explanations, please read
/usr/share/doc/nodejs/README.Debian
npm ERR! not ok code 0
I am running on Ubuntu, but I added a symlink for nodejs to node, so the legacy binary shouldn't be an issue. I normally use "node" from the command line.
Any ideas?
Change console report to show test id instead of failure counter.
Support the new -i or --id command line argument for specifying the ids to run.
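Selecting by id could reduce to filtering the flattened test list by position; an illustrative sketch (not lab's implementation):

```javascript
// Illustrative id selection: ids are 1-based positions in the flattened
// test list, matching the numbers a reporter would print.
function selectByIds(tests, ids) {

    return tests.filter(function (test, index) {

        return ids.indexOf(index + 1) !== -1;
    });
}
```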
When typing lab into the command line, to utilise my globally installed version of lab, I always receive the following output.
tests complete (1 ms)
No global variable leaks detected.
However running a local version, in the case of this repo node ./bin/lab, or in the case of hapi's repo node ./node_modules/lab/bin/lab, I receive the correct test run output.
Putting console.log() into the code has got me as far as line 485 in execute.js
I can log a message out right before the setImmediate call on line 486, but anything inside the function passed to setImmediate does not get called. Line 487 does not appear to be reached.
Commenting out the setImmediate call on line 468 obviously allows line 488 to be executed inline, but this never returns so line 489 never gets executed.
I honestly don't know enough about how domains work, or how lab works, to get any further than this in the time that I have. For now I am going to follow the spumko approach of running tests by referencing the locally installed module.
When using the coverage reporter we should total all of the missing lines and say:
Total lines missing coverage: {number}
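Assuming the reporter receives per-file line-coverage data, the total could be computed along these lines (the `misses` shape is an assumption):

```javascript
// Illustrative totalling of uncovered lines across all covered files.
function totalMissing(files) {

    return files.reduce(function (sum, file) {

        return sum + file.misses.length;
    }, 0);
}

console.log('Total lines missing coverage: ' + totalMissing([
    { misses: [12, 45] },
    { misses: [3] }
]));
```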
Without any flag: .....x....
With -s: nothing
With -v: spec reporter
All three will output the results.
Executing npm test prints "Assertion.includeStack is deprecated, use chai.config.includeStack instead" on the console.
My team is planning to manage our Hapi deployments with Jenkins. We want to use our test output to verify the deployments. However the current version of Lab does not support xUnit output, or really anything verbose enough to report both coverage and the success/failure of each test.
I was looking at the following options to take:
2) Pull-request lab to add either an additional formatter or a verbose setting on the console output.
I was hoping to get some input on which would be the better solution. I was wondering if there were already plans to add additional formatters to lab? If not, what would you suggest?
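For the formatter route, a minimal xUnit-style emitter over a hypothetical result shape might look like this (shape and names are assumptions, not lab's reporter API):

```javascript
// Hypothetical xUnit-style formatter: results is an array of
// { name, err } objects, where err is null for a passing test.
function toXUnit(results) {

    var cases = results.map(function (result) {

        return result.err ?
            '    <testcase name="' + result.name + '"><failure>' + result.err + '</failure></testcase>' :
            '    <testcase name="' + result.name + '"/>';
    });

    return '<testsuite tests="' + results.length + '">\n' +
        cases.join('\n') + '\n</testsuite>';
}
```

A real formatter would also need XML escaping of names and failure messages.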
If before, beforeEach, after, or afterEach call back with an error, the test should fail.
Consider the following experiment:
var lab = require('lab');

lab.experiment('when \'before\' calls back with an error', function () {

    lab.before(function (done) {

        done(new Error('should fail'));
    });

    lab.test('the test runs and passes anyway', function (done) {

        console.error('oops');
        done();
    });
});
I'd expect that to log "Error: should fail", fail the test without running it, and exit with a non-zero code.
Under [email protected], though:
$ lab -v foo.js
should fail
Error: should fail
at /private/tmp/foo/foo.js:5:14
at Object._onImmediate (/private/tmp/foo/node_modules/lab/lib/execute.js:492:17)
at processImmediate [as _immediateCallback] (timers.js:336:15)
oops
when 'before' calls back with an error
✔ 1) the test runs and passes anyway (3 ms)
1 tests complete (8 ms)
No global variable leaks detected.
$ echo $?
0
lab logs the error (good), runs the test anyway (bad), and exits cleanly (bad).
Similarly:
lab.before(function (done) {

    throw new Error('should fail');
});
… should fail tests, but doesn't.
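Conceptually the fix is for the runner to short-circuit when a hook errors. A simplified synchronous sketch (names and shapes are illustrative, not lab's internals):

```javascript
// Simplified model: when before() reports an error, mark every test in the
// experiment as failed without running it, and record the failure.
function runWithBefore(before, tests, report) {

    var failed = false;
    before(function (err) {

        if (err) {
            failed = true;
            tests.forEach(function (test) { report(test.name, err); });
            return;
        }

        tests.forEach(function (test) {

            test.fn(function (testErr) {

                failed = failed || !!testErr;
                report(test.name, testErr || null);
            });
        });
    });

    return failed ? 1 : 0;  // exit code, assuming a synchronous before()
}
```

The returned value maps directly to the non-zero exit code the reporter should produce.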
Hi,
I am trying to create a test for an API call with a mandatory file parameter. When I try to do this with a real API call using the following form:
<h2>Simple upload / POST</h2>
<form action="http://localhost:8000/api/upload/simple" method="post" enctype="multipart/form-data">
    Name: <input type="file" name="file" /> <br />
    <input type="submit" value="Call" />
</form>
it works as expected, but if I try to write a test for it, it does not work.
I am using the following route:
server.route({
    method: 'POST',
    path: '/api/upload/simple',
    config: {
        validate: {
            payload: { file: Joi.object().required() }
        },
        handler: function (request, reply) {

            // Some upload functionality.
            reply('Done').type('text/plain');
        }
    }
});
And here is the test I am trying to make successful:
describe('simple upload', function () {

    var server = Hapi.createServer(0);

    before(function (done) {

        routes.attachHandlers(server);
        server.start(done);
    });

    it('successful', function (done) {

        var payload = {
            file: fs.readFileSync('./test_file')
        };

        server.inject({ url: '/api/upload/simple', method: 'post', payload: payload }, function (res) {

            console.log(res);
            //expect(res.statusCode).to.equal(200);
            done();
        });
    });
});
The response contains the following result:
{ statusCode: 400,
  error: 'Bad Request',
  message: 'the value of file must be an object',
  validation: { source: 'payload', keys: [] } }
How should I simulate the file field so that it is an object? Right now the file is filled with an instance of Buffer.
thanks
In order to really make tests independent of each other, it would be nice to have beforeEach() (and afterEach()) wrappers for a function that is to be called before (or after) each of the tests inside a describe() block.
Given you're exposing Chai's expect mode, would you consider exposing Chai's assert mode?
npm install -g lab is needed to use the command line; the "-g" flag is important.
var a = 0;             // counted for coverage
// $lab:coverage:off$
var b = true ? 1 : 0;  // excluded from coverage reporting
// $lab:coverage:on$
var c = 2;             // counted again
Does Lab support promise testing similar to Mocha?
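If not built in, a promise-returning test can be adapted to the done-callback style; a generic sketch (`promiseTest` is a hypothetical helper, not lab's API):

```javascript
// Generic adapter: turn a function returning a promise into a
// done-style test body that reports rejection as a test failure.
function promiseTest(fn) {

    return function (done) {

        fn().then(
            function () { done(); },
            function (err) { done(err); }
        );
    };
}
```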
When utilizing lab, it would be ideal to do one test run and get some results output (JSON?). After this, when utilizing tap, threshold, or whatever custom reporter, it would be good to re-use the results from the actual run so you don't have to re-run the tests. The goal is to optimize the time spent when you use more than one reporter, and to use a consistent result set across those reporters.
You could alternatively support multiple reporters on the same command line, but then you would also need a file name for each of those reporters. Any thoughts on the best approach here, or is this something that just isn't useful to anyone else? I have seen similar feature requests in mocha.
blanket outputs branchData that can be used to track coverage of ternary operators.
This would be helpful if we have a lot of tests. We have a lot of tests.