
test's People

Contributors

bkonyi, blois, dantup, dependabot[bot], devoncarew, dgrove, efortuna, floitschg, gramster, grouma, jakemac53, joshualitt, keertip, kevmoo, lrhn, matanlurey, mkustermann, munificent, natebosch, nex3, parlough, peter-ahe-google, pq, ricowind, rmacnak-google, scheglov, sigmundch, srawlins, stereotype441, wibling


test's Issues

setUpAll and tearDownAll

See: #17 (comment)

Testers may want higher-level setUp and tearDown hooks that execute only once: before the first test and after the final test.

Example:
Someone may want a robust test suite that starts a Selenium server at the beginning of the suite only if one is not already running, and then kills the server after all tests have completed.
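For illustration, the requested API might look like the sketch below. The setUpAll/tearDownAll names come from the linked discussion; the package:test import and the selenium-server command are assumptions for the example.

import 'dart:io';

import 'package:test/test.dart';

void main() {
  Process? seleniumServer;

  // Proposed: runs once, before the first test in this suite.
  setUpAll(() async {
    seleniumServer = await Process.start('selenium-server', []);
  });

  // Proposed: runs once, after the final test has completed.
  tearDownAll(() {
    seleniumServer?.kill();
  });

  test('talks to the browser', () {
    // ...
  });
}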

Support platform constraints

Test files should be able to declare which platforms they're compatible with. One nice way to do this would be via annotations, probably something like:

@Platform(Platform.all, except: Platform.firefox)
void main() {
  // ...
}

The API could certainly bear some exploration.

Support choosing tests via the command line

The command-line API should provide a means to choose which tests to run. Probably the most straightforward way to do this would be to just have a --name/-n command-line flag that takes a regular expression that matches test names.

Don't print colors when ANSI codes are not supported

This is how the output looks in the WebStorm debug console:

debug:55618 --break-at-isolate-spawn --enable-vm-service:40572 --trace_service_pause_events /tmp/jetbrains_unit_config.dart
Testing started at 3:55 PM ...
Observatory listening on http://127.0.0.1:40572
00:00 �[32m+0�[0m�[31m -1�[0m: bwu_datastore_connection.test.model basic�[0m                                              
  type '() => Entity' is not a subtype of type 'Entity' of 'value'.
  dart:core         List.addAll
  model.dart 29:24  main.<fn>.<fn>
  dart:isolate      _RawReceivePortImpl._handleMessage

00:00 �[32m+0�[0m�[31m -1�[0m: �[31mSome tests failed.�[0m                                                                     

Process finished with exit code 0

Other packages have run into this as well; pub itself handles this case well.
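One possible approach (a sketch, not necessarily how the package should do it) is to check dart:io's stdout.supportsAnsiEscapes before emitting color codes, with a flag to override the detection:

import 'dart:io';

/// Sketch: only emit ANSI color codes when the attached terminal supports
/// them, and let a command-line flag force them off.
String colorize(String message, String ansiColor, {bool? forceColor}) {
  final useColor = forceColor ?? stdout.supportsAnsiEscapes;
  return useColor ? '$ansiColor$message\u001b[0m' : message;
}

void main() {
  print(colorize('+0', '\u001b[32m')); // Green only on capable terminals.
}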

Facilitate code coverage in test runner

I'm sure you guys are thinking about this, but I didn't see an open issue to talk about it. The alpha runner looks great, but I'd like to see the ability to generate test coverage from it by providing a remote debugging port (or some other more elegant solution!).

Add a test runner

We want to make something that can be invoked from the command line to automatically run all the tests in the package. An important component of this is separating out configuration from the actual test code. Right now, the way tests are run and output is formatted is controlled entirely by the test file itself via functions like solo_test and useCompactVmConfig. These should be controlled by the test runner instead.

setUp/tearDown should be only called once per group

When I have:

main
   group
     setUp
     tearDown
     test1
     test2
     testx

it's executed like this:

  • setUp
  • test1
  • tearDown
  • setUp
  • test2
  • tearDown
  • setUp
  • testx
  • tearDown

This doesn't make sense. For such behavior I can just use plain functions and call them from within testx.

When I need to start a server before running a group of tests and shut the server down after all tests are finished, I don't see a way to do this in the current unittest implementation (stable and 0.12.0-alpha.0). The closest workaround is sketched below.
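For the server case specifically, the closest thing today is to start the server lazily from setUp; there is no hook for shutting it down after the last test, which is exactly the gap this issue describes. A rough sketch, written against the current package:test import name and with an HTTP server standing in for whatever the tests actually need:

import 'dart:io';

import 'package:test/test.dart';

HttpServer? server;

void main() {
  group('server-backed tests', () {
    setUp(() async {
      // Workaround: only the first setUp actually binds the server.
      server ??= await HttpServer.bind(InternetAddress.loopbackIPv4, 0);
    });

    test('server is running', () {
      expect(server, isNotNull);
    });

    // There is no hook that runs once after the last test in the group,
    // so the server is never cleanly shut down.
  });
}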

Reset test timeouts with a heartbeat

The timeout used when running a test should reset whenever unittest detects that the test is still making progress. That way deadlocked tests are still caught, but tests that just naturally take a long time to run don't need manual adjustment.

0.11.* reset the timer whenever an expectAsync function was called, but we could potentially use the Zone API to reset it whenever any asynchronous callback was called.
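A minimal sketch of that Zone-based heartbeat idea (all names here are illustrative, not the package's implementation):

import 'dart:async';

/// Reset the timeout timer whenever any callback runs inside the test's
/// zone, so an active-but-slow test never times out while a hung one does.
void runWithHeartbeat(
    void Function() testBody, Duration timeout, void Function() onTimeout) {
  Timer? timer;
  void heartbeat() {
    timer?.cancel();
    timer = Timer(timeout, onTimeout);
  }

  heartbeat();
  runZoned(testBody,
      zoneSpecification: ZoneSpecification(
        // A full implementation would also hook runUnary/runBinary (and
        // possibly the register*Callback handlers); this shows the idea.
        run: <R>(Zone self, ZoneDelegate parent, Zone zone, R Function() f) {
          heartbeat(); // Any asynchronous callback counts as activity.
          return parent.run(zone, f);
        },
      ));
}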

Make the test timeout configurable

Currently, tests have a hard-coded 30-second timeout. This should be configurable both from the test itself (possibly in the form of saying "give me 2x the default timeout") and as a command-line flag in the test runner.
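For illustration, the per-test side of this could look like the sketch below; Timeout and Timeout.factor are names shaped after the "2x the default" idea, not a settled API at the time of the issue:

import 'package:test/test.dart';

void main() {
  // "Give me 2x the default timeout."
  test('slow end-to-end test', () async {
    // ...
  }, timeout: Timeout.factor(2));

  // Or an absolute limit for a test that should finish quickly.
  test('quick sanity check', () {
    // ...
  }, timeout: Timeout(Duration(seconds: 5)));
}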

Consider running the test runner as a service and providing an API

  • watch the test directory for changes
  • provide a list of the tests/groups found (maybe with additional details: browser/command line, tags)
  • allow running a set of tests
  • provide the test results (progress)
  • allow subscribing to updates (notifications for new/modified/deleted tests)

This would make it easy to build a nice GUI or to control the runner from an IDE. A hypothetical shape for such an API is sketched below.
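Purely as an illustration of the shape such an API could take (none of these names exist in the package):

/// Hypothetical runner-as-a-service interface.
abstract class TestRunnerService {
  /// Watch the test directory; emits the path of each added, modified,
  /// or deleted test file.
  Stream<String> watchTests(String directory);

  /// List the names of all tests and groups currently discovered,
  /// optionally filtered by tag or platform.
  Future<List<String>> listTests({String? tag, String? platform});

  /// Run a set of tests by name; emits a progress update per test.
  Stream<String> run(Iterable<String> testNames);
}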

Support browser tests

As it stands, the test runner (#2) can only run VM tests. It should also be able to run tests on the browser—both on Dartium and compiled to JS on other browsers. This should integrate well into the existing command-line runner API, probably with a Karma-style client/server model.

Support for tags

During development I won't care about a lot of tests, especially integration tests (which are often slower). By using tags I could declare sets of unit tests that I care about and that are likely to break because of what I'm working on.
This would make it more convenient to run tests frequently.
Running all tests is more appropriate for check-in hooks and CI.

This feature could be combined with #6, #7, #14

Examples:

  • run only server tests (tag: server)
  • run only fast server tests (tags: server,unittest) (i.e. don't run integration tests)
  • run all firefox tests (tags: firefox)
  • could be combined with #14 (tags: -skip) (exclude skip)
  • run tests for a specific feature (tags: authentication)

It should be possible to add tags at both group and test level, where tags on a group also apply to the tests within that group.

I just found an example where this is already done: http://olivinelabs.com/busted/.
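For illustration only, declaring tags at group and test level might look like the sketch below; the tags: parameter is a proposal-shaped example, not an API the package had at the time:

import 'package:test/test.dart';

void main() {
  // Tags on a group would apply to every test inside it.
  group('authentication', () {
    test('logs in with a valid password', () {
      // ...
    }, tags: 'server');

    test('full login flow in a real browser', () {
      // ...
    }, tags: ['integration', 'firefox']);
  }, tags: 'authentication');
}

A command-line filter (e.g. "run only server tests, excluding integration") would then select the matching subset.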

A way of printing the reproduction instructions for a browser test would be helpful

In the dart repo testing tool, when a test fails, you get a bunch of verbiage which includes

  • command line to recompile the sources
  • command line to run a server
  • command line to launch the browser and connect to the server

It's a bit painful, but lets you re-run the test and debug it in the browser. For bonus points, something that re-ran the test itself but inserted a debugger() at the beginning of the failing test would be awesome.

Support failing_test for the red phase in TDD

It would be cool if there was a way to signify that a test is supposed to fail (without changing the test body itself). This happens in the first phase of TDD.

Something like: failing_test('description', () {...}). This would allow me to check in tests with the correct expectation, and when I make the tests go green, I can just change it to a normal test without changing the test body itself.

I wish I could see output

I'd like to be able to put print statements in browser tests and see the output. Right now it seems to be suppressed.

Support pluggable loaders

There should be a way for external users to define plugins that produce Suites based on input files. It will need access to the unittest API (#48) for things like loading Dart files—it shouldn't need to re-invent the browser bootstrapping logic.
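A purely hypothetical sketch of what such a plugin hook could look like (Suite stands in for the runner's suite type; nothing here is the package's real API):

/// Stand-in for the runner's internal suite type.
abstract class Suite {}

/// Hypothetical loader plugin interface.
abstract class SuiteLoader {
  /// File patterns this loader claims, e.g. '_test.dart' or '_test.html'.
  List<String> get patterns;

  /// Produces a [Suite] of tests from the given input file, reusing the
  /// runner's Dart-loading and browser bootstrapping rather than
  /// re-implementing them.
  Future<Suite> loadSuite(String path);
}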

Support a machine-readable reporter

Currently the test runner can only use ConsoleReporter, which is optimized for command-line readability and would be difficult for a machine to parse. We should provide a machine-readable option as well.

One possibility would be to have a reporter that just straight-up emits JSON packets.
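To make the idea concrete, a sketch of a reporter that writes one JSON object per line for each test event (the event names and fields here are illustrative, not a defined protocol):

import 'dart:convert';

/// Illustrative only: emit one JSON object per line per test event.
void reportEvent(String type, Map<String, Object?> data) {
  print(jsonEncode({'type': type, ...data}));
}

void main() {
  reportEvent('testStart', {'name': 'parses empty input'});
  reportEvent('testDone', {'name': 'parses empty input', 'result': 'success'});
}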

Support running tests with user-authored HTML

Currently tests are always run with a unittest-provided HTML shim. The user should be able to provide their own HTML page. In practice, most people who do this with the old unittest have whatever_test.html alongside whatever_test.dart, so we should just detect that pattern.

Clean up temp directories on SIGTERM

When the test runner receives a SIGTERM signal, it should be sure to clean up all its resources to avoid leaving around temp directories.

It's possible that it should also run the tear-down function for the current test, so that it can clean up its resources as well. If it does so, it would need to find a way to terminate the test as quickly as possible.
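A minimal sketch of the cleanup hook using dart:io's signal watching (note that watching SIGTERM isn't supported on Windows, so a real implementation would need a fallback):

import 'dart:io';

Future<void> main() async {
  // Stand-in for the temp directories the runner creates while loading tests.
  final tempDir = await Directory.systemTemp.createTemp('test_runner_');

  ProcessSignal.sigterm.watch().listen((_) {
    // Clean up before exiting; a real implementation would also try to run
    // the current test's tear-down, as discussed above.
    tempDir.deleteSync(recursive: true);
    exit(1);
  });

  // ... load and run tests here ...
}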

Support pluggable reporters

There should be a means for third parties to create reporters that are dynamically loaded and used by unittest.

This is particularly tricky because reporters are likely to be desired on a user-by-user rather than project-by-project basis, so there's some friction to just putting it into a project's pubspec.

Add a package-level config file

As more and more aspects of tests become configurable in various ways, it will become necessary to have project-level configuration files. These might be used for configuration that would otherwise be included declaratively on tests, since if all tests would share the same configuration it would be a drag to mark it over and over again. They might also be used for configuration that would otherwise be passed on the command-line, to make it easy to provide custom defaults on a per-package basis.

Some things the config file should definitely support:

  • Default platform(s) to run tests on.
  • Default files to consider tests.
  • Default test timeout (see #9).
  • Default reporter (see #12).
  • Configuration for various plugins.

As time goes on, more configuration will likely be added as well.

Provide a programmatic API

Currently the only APIs that unittest exposes publicly are the frontend APIs used to define tests: test(), group(), expect(), and so on. However, it has a lot of backend code that would be useful for external packages to extend the platform. Plugins are the most obvious example of this, since they need to at least implement some backend classes and likely call out to other APIs, but one could also imagine something like a custom runner that spins up a long-lived server and re-runs tests when their sources change.

Add stream matchers

Currently there's no good way to validate stream values other than calling toList() and validating that. This isn't a great option for ensuring that streams produce the right values at the right times, or for streams that never close.

scheduled_test's ScheduledStream matchers are a good baseline for functionality. We don't want to depend on ScheduledStream, but the async package has a StreamQueue class that provides similar functionality without being tightly coupled with scheduled_test. For maximum flexibility, the new stream matchers should work against either a raw Stream or a StreamQueue.

It would also be good to support a both(streamMatcher1, streamMatcher2) matcher, which asserts that both matchers match the stream. Like either(), the stream should be consumed up to the maximum consumed by both matchers.
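For reference, this is the kind of check StreamQueue already makes possible by hand, which dedicated stream matchers would streamline (the StreamQueue calls are the real package:async API; the test itself is just an illustration):

import 'package:async/async.dart';
import 'package:test/test.dart';

void main() {
  test('stream emits the expected values in order', () async {
    final queue = StreamQueue(Stream.fromIterable([1, 2, 3]));
    expect(await queue.next, 1);
    expect(await queue.next, 2);
    expect(await queue.next, 3);
    expect(await queue.hasNext, isFalse);
  });
}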

Support debugging browser tests

This is the meta-issue tracking baseline support for browser debugging. The goal is to make it possible to debug all supported browsers using whatever built-in debug tools (Observatory and/or dev tools) they support.

  • Support pause after load in visible browsers: #294
  • Expose the Observatory URL for Dartium and content shell: #295
  • Expose the remote debugger URL for PhantomJS: #296
  • Expose the remote debugger URL for content shell: #297

File not found bug

I was trying out the alpha version of unittest 0.12 on Windows 8 and got this output:

C:\Users\Oliver\Projects\project-z>pub run unittest:unittest
Failed to load "test\ai_state_machine_test.dart":
Cannot open file, path = 'C:\Users\Oliver\Projects\project-z\test%5Cpackages\uni
ttest\src\runner\vm\isolate_listener.dart' (OS Error: Das System kann den angege
benen Pfad nicht finden.
, errno = 3)

Which means "The system cannot find the specified path". The file is on disk, but the path is wrong. I'm not sure where the escape sequence %5C (a percent-encoded backslash) is coming from.

Update the README

The README needs to be rewritten to discuss the new runner API and infrastructure.

Support debugging VM tests

The test runner should make it very easy to run an interactive debugger for VM tests, whether via the command line, Observatory, or an IDE. This is blocked on issue 23320, which will allow the test runner to enable the Observatory server at runtime.

Support Dart 1.8.0

Currently the test runner relies on Isolate.kill(), which is only available in Dart 1.9.0. To make it easier for users to adopt the new runner, we should add a fallback that doesn't rely on 1.9.0-specific behavior.

Support a headless browser

At least one headless browser should be supported so users can run their tests without an unwieldy window popping up. PhantomJS is probably the best option here, since it's widely used and easy to install, but we could also do content_shell.

Gracefully handle tests that print

Currently, if a test prints, its output interacts poorly with the single-line output of the ConsoleRunner. We should use Zone.print to capture those prints, ensure that we aren't in the middle of a line, and then print them.
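A minimal sketch of that capture step, assuming the reporter flushes the buffer once its current status line is finished (names here are illustrative):

import 'dart:async';

/// Run a test body with its print calls redirected to [onPrint].
void runCapturingPrints(void Function() body, void Function(String) onPrint) {
  runZoned(body,
      zoneSpecification: ZoneSpecification(
        print: (self, parent, zone, line) => onPrint(line),
      ));
}

void main() {
  final buffered = <String>[];
  runCapturingPrints(() => print('hello from a test'), buffered.add);
  // The reporter can now finish its current line, then flush the buffer.
  buffered.forEach(print);
}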

Add a doesNotComplete matcher

unittest should provide a doesNotComplete matcher to use against Futures, which asserts that they never complete and produces a useful error if they do.

"Never" is a tricky concept to capture, but doing something like the pumpEventQueue() function that's floating around should be good enough for most uses.

Command to initialize a test file

Note: This could be an IDE feature, but this package will have more knowledge about how to do it, and this way it would be automatically kept up to date.

In package foo, I could run:

pub run test:init src/bar

and it would check whether lib/src/bar.dart exists. If not, it would create it with contents something like:

library foo.src.bar;

// TODO: Library auto-generated by `pub run test:init`.

and would then create a test file for it at test/src/bar_test.dart:

library foo.test.src.bar;

import 'package:foo/src/bar.dart';
import 'package:test/test.dart';

main() {
  group('<group>', () {
    test('<test>', () {
      fail('Auto generated by `pub run test:init`.');
    });
  });
}

Support the Dart analyzer as a pseudo-platform

It would be nice to be able to run the analyzer via the test runner and have any errors (or, configurably, warnings or hints) look like test failures. It shouldn't be treated exactly like other platforms, since we want it to analyze every Dart file (even those in lib/) once and only once, but driving it through the test runner would be useful.

Gracefully handle load errors

Currently, if any test file fails to load, the entire runner process exits. This isn't useful; a load failure should cause an unsuccessful exit code, but the other test files should still be run.
