
lab's Introduction

@hapi/lab

Node test utility.

lab is part of the hapi ecosystem and was designed to work seamlessly with the hapi web framework and its other components (but works great on its own or with other frameworks). If you are using a different web framework and find this module useful, check out hapi – they work even better together.

Visit the hapi.dev Developer Portal for tutorials, documentation, and support
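
For context on the issues below, a minimal usage sketch (assuming the old callback-style Lab API that these issues were written against):

    var Lab = require('lab');

    Lab.experiment('math', function () {

        Lab.test('adds numbers', function (done) {

            Lab.expect(1 + 1).to.equal(2);
            done();
        });
    });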


lab's Issues

Add tests

We're totally lacking tests of our testing framework. Seems bad.

Tests fail silently, code coverage the only sign of something wrong

I had some issues figuring out what was wrong with my tests over on hapijs/catbox#43. It looks like, since most of the tests are async and the asserts are inside callbacks, the tests fail silently if a callback doesn't fire. Obviously there was an issue with my code, but I couldn't tell that from the test output.

The only thing that let me know that there was any issue was the code coverage.

This issue could be mitigated with more emphasis on code coverage, and perhaps by checking whether any asserts were called at all: if none were, there is most likely a problem with the test.
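
A minimal sketch of the failure mode described above (fetchRecords is a hypothetical async call; old callback-style Lab API assumed):

    var Lab = require('lab');

    Lab.test('assert inside a callback that never fires', function (done) {

        // fetchRecords is a stand-in for an async call that, due to a bug,
        // never invokes its callback.
        fetchRecords(function (err, records) {

            Lab.expect(records).to.exist;   // never evaluated
            done();                         // never called
        });
    });

Nothing in the assertion output flags a problem; only the coverage report reveals that the lines inside the callback never ran.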

support mechanism to do 1 run, but output to different reporters

When using lab, it would be ideal to do one test run and save the results output (JSON?). After that, when using tap, threshold, or any custom reporter, it would be good to reuse the results from the actual run so you don't have to re-run the tests. The goal is to save time when you use more than one reporter, and to feed a consistent result set to those different reporters.

You could alternatively support multiple reporters on the same command line, but then you would also need a file name for each of those reporters. Any thoughts on the best approach here, or is this something that just isn't useful to anyone else? I have seen similar feature requests in mocha.
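
A hypothetical invocation of the requested feature (the --save and --replay flags are illustrative, not actual lab options):

    lab --save results.json                  # one real test run
    lab --replay results.json -r tap         # reuse results with tap
    lab --replay results.json -r threshold   # reuse results again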

Unable to use global CLI

When typing lab into the command line, to utilise my globally installed version of lab, I always receive the following output.

    tests complete (1 ms)
    No global variable leaks detected.

However, when running a local version (node ./bin/lab in the case of this repo, or node ./node_modules/lab/bin/lab in the case of hapi's repo), I receive the correct test run output.

Putting console.log() into the code has got me as far as line 485 in execute.js. I can log a message right before the setImmediate call on line 486, but anything inside the function passed to setImmediate does not get called; line 487 does not appear to be reached.

Commenting out the setImmediate call on line 486 obviously allows line 488 to be executed inline, but this never returns, so line 489 never gets executed.

I honestly don't know enough about how domains work, or how lab works, to get any further than this in the time that I have. For now I am going to follow the spumko approach of running tests by referencing the locally installed module.

Allow specifying tests by id

Change console report to show test id instead of failure counter.
Support the new -i or --id command line argument for specifying the ids to run.
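
A sketch of the proposed usage (the list syntax in the second line is an illustrative assumption, not confirmed behavior):

    lab -i 7        # run only the test with id 7
    lab --id 2,5    # hypothetical: run tests with ids 2 and 5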

fix for readme

npm install lab -g is needed to use lab from the command line; the "-g" is important.

Add option to set acceptable global variables

Currently, there is no way to set a bunch of acceptable global variables.

For example, when using traceur, 'Symbol', '$traceurRuntime', 'System', 'Promise', 'Map', 'traceur' are required global variables.
It could be a solution to support traceur as a specific library, as is done for blanket or ArrayBuffer, but it would be useful to provide a more general mechanism, like mocha does.
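
A hypothetical invocation of the requested option (the -I flag name and comma-separated syntax are illustrative assumptions, modeled on mocha's --globals):

    lab -I Symbol,$traceurRuntime,System,Promise,Map,traceur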

tests should fail if before etc call back with an error or crash

If before, beforeEach, after, or afterEach call back with an error, the test should fail.

Consider the following experiment:

var lab = require('lab');

lab.experiment('when \'before\' calls back with an error', function() {
    lab.before(function (done) {
        done(new Error('should fail'));
    });

    lab.test('the test runs and passes anyway', function (done) {
        console.error('oops');
        done();
    });
});

I'd expect that to log "Error: should fail", fail the test without running it, and exit with a non-zero code.

Under [email protected], though:

$ lab -v foo.js
should fail
Error: should fail
    at /private/tmp/foo/foo.js:5:14
    at Object._onImmediate (/private/tmp/foo/node_modules/lab/lib/execute.js:492:17)
    at processImmediate [as _immediateCallback] (timers.js:336:15)
oops
when 'before' calls back with an error
  ✔ 1) the test runs and passes anyway (3 ms)


 1 tests complete (8 ms)

 No global variable leaks detected.

$ echo $?
0

lab logs the error (good), runs the test anyway (bad), and exits cleanly (bad).

Similarly:

    lab.before(function (done) {
        throw new Error('should fail');
    });

… should fail tests, but doesn't.

Weird Error 1 on Failure

I put together a simple test -- expect(1+1).to.equal(2). When I ran the test, it worked just fine. But when I altered the test to fail -- expect(2+1).to.equal(2), I got the following error after the code coverage:

npm ERR! weird error 1
npm WARN This failure might be due to the use of legacy binary "node"
npm WARN For further explanations, please read
/usr/share/doc/nodejs/README.Debian

npm ERR! not ok code 0

I am running on Ubuntu, but I added a symlink for nodejs to node, so the legacy binary shouldn't be an issue. I normally use "node" from the command line.

Any ideas?

Mongoose call fails silently

I have a before() method which calls an action on a Mongoose model:

before(function(done) {
  User.find({}, function(err, users) {
    console.log('TEST');
    done();
  });
});

TEST is never written to the console and execution of the test suite stops at this point. If I call done() outside of the callback, then execution continues:

before(function(done) {
  User.find({}, function(err, users) {
    console.log('TEST');
  });
  done();
});

The issue seems to be Mongoose related, because other async callbacks work as expected, for example:

before(function(done) {
  setTimeout(function () { done(); }, 1000);
});

before/beforeEach/afterEach/after not run in correct order when using nested experiments

Consider the following test:

var Lab = require('lab');

Lab.experiment('calculator', function () {

    Lab.before(function (done) {
        console.log('called: before');

        done();
    });

    Lab.beforeEach(function (done) {
        console.log('called: beforeEach');

        done();
    });

    Lab.afterEach(function (done) {
        console.log('called: afterEach');

        done();
    });

    Lab.after(function (done) {
        console.log('called: after');

        done();
    });

    Lab.test('returns true when zero', function (done) {
        console.log('called: test');
        Lab.expect(0).to.equal(0);
        done();
    });

    Lab.experiment('addition', function () {

        Lab.test('returns true when 1 + 1 equals 2', function (done) {
            console.log('called: test');
            Lab.expect(1+1).to.equal(2);
            done();
        });

    });

    Lab.experiment('subtract', function () {

        Lab.test('returns true when 1 - 1 equals 0', function (done) {
            console.log('called: test');
            Lab.expect(1-1).to.equal(0);
            done();
        });

    });
});

Run with tap (lab examples/nested_experiments.js -r tap) and you get the following output:

1..3
called: before
called: beforeEach
called: test
ok 1 calculator returns true when zero
called: afterEach
called: after
called: test
ok 2 calculator addition returns true when 1 + 1 equals 2
called: test
ok 3 calculator subtract returns true when 1 - 1 equals 0
# tests 3
# pass 3
# fail 0

The expected output should be:

1..3
called: before
called: beforeEach
called: test
ok 1 calculator returns true when zero
called: afterEach
called: beforeEach
called: test
ok 2 calculator addition returns true when 1 + 1 equals 2
called: afterEach
called: beforeEach
called: test
ok 3 calculator subtract returns true when 1 - 1 equals 0
called: afterEach
called: after
# tests 3
# pass 3
# fail 0

Note: I've used tap for the simple output; this is not a symptom of that formatter, but rather of how the experiment structure is built in index.js.

How to write test for required file parameters

Hi,
I am trying to create a test for an API call with a mandatory file parameter. When I exercise the real API call using the following form:

    <h2>Simple upload / POST</h2>
    <form action="http://localhost:8000/api/upload/simple" method="post"
enctype="multipart/form-data">
      Name: <input type="file" name="file" /> <br /> 
      <input type="submit" value="Call" />
    </form>

It works as expected, but when I write a test for it, it does not work.

I am using the following route:

  server.route({
    method: 'POST',
    path: '/api/upload/simple',
    config: {
      validate : {
        payload: {file: Joi.object().required()}
      },
      handler: function(request, reply) {
        // Some upload functionality.
        reply('Done').type('text/plain');
      },
    }
  });

And here is the test I am trying to make succeed:

describe('simple upload', function() {
  var server = Hapi.createServer(0);
  before(function(done) {
    routes.attachHandlers(server);
    server.start(done);
  });

  it('successfull', function(done) {
    var payload = {
      file : fs.readFileSync('./test_file')
    };
    server.inject({url : '/api/upload/simple', method: 'post',  payload : payload}, function(res) {
      console.log(res);
      //expect(res.statusCode).to.equal(200);
      done();
    });
  });
});

The response contains the following result:

    { statusCode: 400,
      error: 'Bad Request',
      message: 'the value of file must be an object',
      validation: { source: 'payload', keys: [] } }

How should I make the file field be an object? Right now the file field contains an instance of Buffer.

thanks

Discuss the addition of formatters to lab for CI consumption

My team is planning to manage our hapi deployments with Jenkins. We want to use our test output to verify the deployments. However, the current version of lab does not support xUnit output, or really anything verbose enough to report both coverage and the success/failure of each test.

I was considering the following options:

  1. Write a parser to consume the normal lab console output or JSON output and convert it to xUnit. The problem with the former is that the console output does not include much information about the tests that were run (name, suite, etc.), only details about the failures. The JSON output seems to fix this, except it does not provide an 'error' category that explains why a given test failed.

  2. Submit a pull request to lab adding either an additional formatter or a verbose setting for the console output.

I was hoping to get some input on which would be the better solution. Are there already plans to add additional formatters to lab? If not, what would you suggest?

Support beforeEach() (and afterEach())

In order to really make tests independent of each other, it would be nice to have beforeEach() (and afterEach()) wrappers for functions to be called before (and after) each of the tests inside a describe() block.
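
A sketch of the requested hooks, mirroring mocha's beforeEach/afterEach (the Lab-attached names are the proposal, not a shipped API):

    var Lab = require('lab');

    Lab.experiment('users', function () {

        Lab.beforeEach(function (done) {

            // runs before every test in this experiment
            done();
        });

        Lab.afterEach(function (done) {

            // runs after every test in this experiment
            done();
        });

        Lab.test('creates a user', function (done) {

            done();
        });
    });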

Lab doesn't show the assertion message when an assertion fails

When I run lab with the console reporter and an assertion like this fails:

expect(true).to.eql(false, "hello world");

Then it will not show the message "hello world" in the output, only that true is not equal to false.

Note: This might also occur with other reporters

Lab not working

When I run

    lab test.js

it seems that lab isn't working. However, if I run

    node_modules/lab/bin/lab test.js

it works.

It seems that lab only works when executed from the bin. Looks like you are invoking it the same way in hapi too?

Support spec reporter

On long-running tests, when a test fails it's useful to know which test failed instead of having to wait for all of the tests to run. Alternatively, support a --bail option that stops on the first failed test.

allow running test suites with bare node

node-tap allows you to do this, which makes it super simple to debug failing test cases: node debug test.tap.js. Something like this in lab would be similarly useful. I can use node debug ./node_modules/.bin/lab test.tap.js but that's not obvious to a lot of people.

Watch command

Would like a simple way to watch the test folder for changes, and re-run when something is updated.

I guess it'd be simple to wrap this call around something and come up with a DIY solution, but it would be great if it were built in.
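
A DIY watcher sketch along those lines: re-run lab whenever a file in the test directory changes. The 'test' path and the debounce delay are illustrative assumptions.

    var Fs = require('fs');
    var ChildProcess = require('child_process');

    var timer = null;

    Fs.watch('test', function () {

        clearTimeout(timer);               // debounce bursts of change events
        timer = setTimeout(function () {

            var run = ChildProcess.spawn('node', ['./node_modules/lab/bin/lab'], { stdio: 'inherit' });
            run.on('close', function (code) {

                console.log('lab exited with code ' + code);
            });
        }, 100);
    });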
