dart-lang / test

A library for writing unit tests in Dart.
Home Page: https://pub.dev/packages/test
Once we support browser tests, we should use the source maps generated by dart2js to convert their JS stack traces back into Dart traces. @cbracken has some code to do this, but it's not currently written in Dart.
See: #17 (comment)
Testers may want a higher-level setUp and tearDown that execute only before the first test and after the final test.
Example:
Someone may want a robust test suite that starts a Selenium server at the beginning of the suite only if one isn't already running, and then kills the server after all tests have completed.
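A minimal sketch of what that could look like, assuming hypothetical setUpSuite/tearDownSuite hooks and a hypothetical startSeleniumServer helper (none of these exist yet):

import 'package:test/test.dart';

void main() {
  var server;

  // Hypothetical suite-level hooks: run once before the first test and
  // once after the last, unlike setUp/tearDown, which run per test.
  setUpSuite(() async {
    server = await startSeleniumServer(); // hypothetical helper
  });
  tearDownSuite(() => server.kill());

  test('drives the browser', () {
    // ... talk to the browser via `server` ...
  });
}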
Test files should be able to declare which platforms they're compatible with. One nice way to do this would be via annotations, probably something like:
@Platform(Platform.all, except: Platform.firefox)
void main() {
// ...
}
The API could certainly bear some exploration.
The command-line API should provide a means to choose which tests to run. Probably the most straightforward way to do this would be a --name/-n command-line flag that takes a regular expression matched against test names.
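For example, an invocation might look something like this (hypothetical; the exact executable name may differ):

pub run unittest:unittest --name 'parses .* correctly'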
This is how the output looks in the WebStorm debug console:
debug:55618 --break-at-isolate-spawn --enable-vm-service:40572 --trace_service_pause_events /tmp/jetbrains_unit_config.dart
Testing started at 3:55 PM ...
Observatory listening on http://127.0.0.1:40572
00:00 +0 -1: bwu_datastore_connection.test.model basic
type '() => Entity' is not a subtype of type 'Entity' of 'value'.
dart:core List.addAll
model.dart 29:24 main.<fn>.<fn>
dart:isolate _RawReceivePortImpl._handleMessage
00:00 +0 -1: Some tests failed.
Process finished with exit code 0
Other packages run into this as well; pub itself is one where it works well.
See also:
I saw that skip_* was deprecated. I think it's important to see whether and how many tests are disabled, and this should be reported in the result output.
See #17
setUp and tearDown are designed to run for each test. It is common to want to share state between individual tests without leaking that state once all associated tests have finished. sharedSetup / sharedTearDown?
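A workaround that covers the setup half today is lazy initialization inside setUp; a sketch, with createExpensiveResource as a stand-in:

import 'package:test/test.dart';

void main() {
  var shared;

  setUp(() {
    // Runs before every test but initializes only once, which emulates
    // sharedSetup. There is no equivalent trick for sharedTearDown,
    // because no test knows it is the last one.
    if (shared == null) shared = createExpensiveResource(); // stand-in
  });

  test('uses the shared state', () {
    expect(shared, isNotNull);
  });
}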
I'm sure you guys are thinking about this, but I didn't see an open issue to talk about it. The alpha runner looks great, but I'd like to see the ability to generate test coverage from it by providing a remote debugging port (or some other more elegant solution!).
We want to make something that can be invoked from the command line to automatically run all the tests in the package. An important component of this is separating out configuration from the actual test code. Right now, the way tests are run and output is formatted is controlled entirely by the test file itself via functions like solo_test and useCompactVmConfig. These should be controlled by the test runner instead.
All the desktop browsers supported by Dart should be supported by the test runner.
For our use case we want to be able to provide a default HTML file instead of one for every test file. See this PR against test_runner.dart, googlearchive/test_runner.dart#35, for how we are accomplishing that there.
e.g.

JsFunction
notification_test.dart.js:12387
00:00 +1: constructor exists
notification_test.dart.js:12387
00:00 +2: A group of tests First Test
notification_test.dart.js:12387
00:00 +2: All tests passed!
when I have

main
  group
    setUp
    tearDown
    test1
    test2
    testx

it's executed like
This doesn't make sense. For such behavior I could just use plain functions and call them from within testx.
When I need to start a server before running a group of tests and shut the server down after all tests are finished, I don't see a way to do this in the current unittest implementation (stable and 0.12.0-alpha.0).
The timeout used when running a test should reset whenever unittest detects that the test is making progress. That way deadlocks are still caught, but tests that just naturally take a long time to run don't need manual adjustment.
0.11.* reset the timer whenever an expectAsync function was called, but we could potentially use the Zone API to reset it whenever any asynchronous callback is called.
Currently, tests have a hard-coded 30-second timeout. This should be configurable both from the test itself (possibly in the form of saying "give me 2x the default timeout") and as a command-line flag in the test runner.
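A sketch of the per-test half, assuming a hypothetical timeout parameter with a factor-based helper:

import 'package:test/test.dart';

void main() {
  // Hypothetical API: Timeout.factor(2) means "give me 2x the default
  // timeout"; an absolute duration could be supported as well.
  test('slow integration test', () async {
    await Future.delayed(Duration(seconds: 5)); // stand-in for real work
  }, timeout: Timeout.factor(2));
}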
This would make it easy to build a nice GUI or to control it from an IDE.
During development I won't care about a lot of tests, especially integration tests (which are often slower). By using tags, I could declare sets of unit tests I care about and that are likely to break because of what I'm working on.
This would make it more convenient to run tests frequently.
Running all tests is more appropriate for checkin hooks and CI.
This feature could be combined with #6, #7, #14
Examples:
- server
- server,unittest (as in: don't run integration tests)
- firefox
- -skip (exclude skip)
- authentication

It should be possible to add tags at both group and test level, where tags on a group also apply to the tests within that group.
I just found an example where this is already done http://olivinelabs.com/busted/.
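One possible shape for the declaration side (a hypothetical tags parameter on test and group; nothing here is final):

import 'package:test/test.dart';

void main() {
  group('login flow', () {
    test('authenticates a valid user', () {
      // ...
    }, tags: 'authentication');
  }, tags: 'integration'); // group tags apply to every test inside
}

Selecting the subset would then be a runner flag, something like --tags authentication.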
In the dart repo testing tool, when a test fails, you get a bunch of verbiage which includes
It's a bit painful, but lets you re-run the test and debug it in the browser. For bonus points, something that re-ran the test itself but inserted a debugger() at the beginning of the failing test would be awesome.
In the event an exception is thrown in a test, most test frameworks will catch the exception, fail the test, run tearDown, then rethrow the exception.
Is there any reason that this would be a bad idea?
It would be cool if there was a way to signify that a test is supposed to fail (without changing the test body itself). This happens in the first phase of TDD.
Something like: failing_test('description', () {...}). This would allow me to check in tests with the correct expectation, and when I make the tests go green, I can just change it to a normal test without changing the test body itself.
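A minimal sketch of how such a helper could be layered on top of the existing API (failing_test is just the name proposed above):

import 'package:test/test.dart';

/// Passes while [body] fails, and fails once [body] starts passing,
/// as a reminder to promote it to a regular test().
void failing_test(String description, Function() body) {
  test(description, () async {
    try {
      await body();
    } catch (_) {
      return; // still red, as expected in the first phase of TDD
    }
    fail('Expected this test to fail, but it passed; '
        'convert it to a regular test().');
  });
}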
I'd like to be able to put print statements in browser tests and see the output. Right now it seems to be suppressed.
There should be a way for external users to define plugins that produce Suites based on input files. A plugin will need access to the unittest API (#48) for things like loading Dart files; it shouldn't need to re-invent the browser bootstrapping logic.
Currently the test runner can only use ConsoleReporter, which is optimized for command-line readability and would be difficult for a machine to parse. We should provide a machine-readable option as well. One possibility would be a reporter that just straight-up emits JSON packets.
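A sketch of that idea, one JSON object per line; the event shape here is invented for illustration, not a finalized protocol:

import 'dart:convert';

// Newline-delimited JSON lets a machine parse events one line at a time.
void emitEvent(String type, Map<String, Object?> data) {
  print(jsonEncode({'type': type, ...data}));
}

void main() {
  emitEvent('testStart', {'name': 'constructor exists'});
  emitEvent('testDone', {'name': 'constructor exists', 'result': 'success'});
}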
Hitting issues w/ the Editor open while running tests
By default, just listen on an arbitrary port instead of hard-wiring 8080
Currently tests are always run with a unittest-provided HTML shim. The user should be able to provide their own HTML page. In practice, most people who do this with the old unittest have whatever_test.html alongside whatever_test.dart, so we should just detect that pattern.
When the test runner receives a SIGTERM signal, it should be sure to clean up all its resources to avoid leaving around temp directories.
It's possible that it should also run the tear-down function for the current test, so that it can clean up its resources as well. If it does so, it would need to find a way to terminate the test as quickly as possible.
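The signal-handling half is standard dart:io; a sketch, with cleanUpTempDirs as a hypothetical cleanup hook:

import 'dart:io';

void main() {
  // Clean up instead of leaving temp directories behind on SIGTERM.
  ProcessSignal.sigterm.watch().listen((signal) async {
    await cleanUpTempDirs(); // hypothetical cleanup hook
    exit(143); // conventional exit code for termination by SIGTERM
  });
  // ... run the tests ...
}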
There should be a means for third parties to create reporters that are dynamically loaded and used by unittest.
This is particularly tricky because reporters are likely to be desired on a user-by-user rather than project-by-project basis, so there's some friction to just putting it into a project's pubspec.
As more and more aspects of tests become configurable in various ways, it will become necessary to have project-level configuration files. These might be used for configuration that would otherwise be included declaratively on tests, since if all tests would share the same configuration it would be a drag to mark it over and over again. They might also be used for configuration that would otherwise be passed on the command-line, to make it easy to provide custom defaults on a per-package basis.
Some things the config file should definitely support:
As time goes on, more configuration will likely be added as well.
Currently the only APIs that unittest exposes publicly are the frontend APIs used to define tests: test(), group(), expect(), and so on. However, it has a lot of backend code that would be useful for external packages to extend the platform. Plugins are the most obvious example of this, since they need to at least implement some backend classes and likely call out to other APIs, but one could also imagine something like a custom runner that spins up a long-lived server and re-runs tests when their sources change.
Currently there's no good way to validate stream values other than calling toList() and validating the result. This isn't a great option for ensuring that streams produce the right values at the right times, or for streams that never close.
scheduled_test's ScheduledStream matchers are a good baseline for functionality. We don't want to depend on ScheduledStream, but the async package has a StreamQueue class that provides similar functionality without being tightly coupled with scheduled_test. For maximum flexibility, the new stream matchers should work against either a raw Stream or a StreamQueue.
It would also be good to support a both(streamMatcher1, streamMatcher2) matcher, which asserts that both matchers match the stream. Like either(), the stream should be consumed up to the maximum consumed by both matchers.
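Until such matchers exist, StreamQueue already supports incremental assertions; a minimal sketch:

import 'package:async/async.dart';
import 'package:test/test.dart';

void main() {
  test('stream emits the expected values in order', () async {
    var queue = StreamQueue(Stream.fromIterable([1, 2, 3]));
    // Pull one event at a time instead of toList(), so this also works
    // for streams that never close.
    expect(await queue.next, 1);
    expect(await queue.next, 2);
    expect(await queue.next, 3);
    expect(await queue.hasNext, isFalse);
  });
}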
This is the meta-issue tracking baseline support for browser debugging. The goal is to make it possible to debug all supported browsers using whatever built-in debug tools (Observatory and/or dev tools) they support.
For the most part the existing bash globbing support covers this, but the ability to use ** would be nice.
pub run unittest:unittest reports "No tests run." when main() is async:
main() async {
test("test1", () {
...
});
}
Currently, the prints_matcher_test seems to be failing specifically on Safari. See the test logs.
I was trying out the alpha version of unittest 0.12 on Windows 8 and got this output:
C:\Users\Oliver\Projects\project-z>pub run unittest:unittest
Failed to load "test\ai_state_machine_test.dart":
Cannot open file, path = 'C:\Users\Oliver\Projects\project-z\test%5Cpackages\unittest\src\runner\vm\isolate_listener.dart' (OS Error: Das System kann den angegebenen Pfad nicht finden., errno = 3)
Which means "The system cannot find the specified path". The file is on disk, but the path is wrong. I'm not sure where the escape sequence %5C (a URL-encoded backslash) is coming from.
Once browser tests land, they won't have any support for accessing assets in the host package beyond the test file itself. One way or another, this access should be provided.
The README needs to be rewritten to discuss the new runner API and infrastructure.
The test runner should make it very easy to run an interactive debugger for VM tests, whether it be via the command line, observatory, or an IDE. This is blocked on issue 23320, which will allow the test runner to enable the observatory server at runtime.
Currently the test runner relies on Isolate.kill(), which is only available as of Dart 1.9.0. In order to make it easier for users to adopt the new runner, we should add a fallback that doesn't rely on 1.9.0-specific behavior.
At least one headless browser should be supported so users can run their tests without popping up an unwieldy window. PhantomJS is probably the best option here, since it's widely used and easy to install, but we could also do content_shell.
I get version conflicts because I want to use shelf 0.6.0 in my project.
Wrap the execution of each test with Chain.capture to get proper stack traces from failed tests. Currently the code for each test needs to be wrapped manually, which is cumbersome and clutters the tests.
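A sketch of doing it once at the top level instead of per test (runAllTests stands in for the runner's entry point):

import 'package:stack_trace/stack_trace.dart';

void main() {
  // One Chain.capture around the whole run keeps async stack chains
  // intact without wrapping every individual test body.
  Chain.capture(() {
    runAllTests(); // stand-in for the runner entry point
  }, onError: (error, chain) {
    print('$error\n${chain.terse}');
  });
}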
Currently, if a test prints, it interacts poorly with the single-line output of the ConsoleRunner. We should use Zone.print to capture those prints, ensure that we aren't in the middle of a line, then print them.
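Roughly, the mechanics: run the test body in a zone whose print handler intercepts output first (a sketch, not the runner's actual code):

import 'dart:async';

void main() {
  runZoned(() {
    print('hello from a test body');
  }, zoneSpecification: ZoneSpecification(
    print: (self, parent, zone, line) {
      // Here the reporter could finish its status line before letting
      // the test's output through.
      parent.print(zone, '[test output] $line');
    },
  ));
}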
unittest should provide a doesNotComplete matcher to use against Futures, which asserts that they never fire and produces a useful error if they do.
"Never" is a tricky concept to capture, but doing something like the pumpEventQueue()
function that's floating around should be good enough for most uses.
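A sketch of the approach; this pumpEventQueue is a reconstruction of the floating-around version, and the matcher itself is left as the feature request:

import 'dart:async';
import 'package:test/test.dart';

/// Gives the event loop [times] turns so that anything already
/// scheduled gets a chance to fire. An approximation of "never".
Future pumpEventQueue([int times = 20]) {
  if (times == 0) return Future.value();
  // Future() goes through the timer queue, so microtasks get a chance
  // to run between iterations.
  return Future(() => pumpEventQueue(times - 1));
}

void main() {
  test('future never fires', () async {
    var completed = false;
    Completer().future.whenComplete(() => completed = true);
    await pumpEventQueue();
    expect(completed, isFalse);
  });
}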
Even if starting up pub serve is a manual process.
Note: This could be an IDE feature, but this package will have more knowledge about how to do it, and this way it would be automatically kept up to date.
In package foo, I could run:

pub run test:init src/bar

and it would check whether lib/src/bar.dart exists. If not, it will create it, something like:
library foo.src.bar;
// TODO: Library auto-generated by `pub run test:init`.
and would then create a test file for it at test/src/bar_test.dart:
library foo.test.src.bar;
import 'package:foo/src/bar.dart';
import 'package:test/test.dart';
main() {
  group('<group>', () {
    test('<test>', () {
      fail('Auto generated by `pub run test:init`.');
    });
  });
}
It would be nice to be able to run the analyzer via the test runner and have any errors (or, configurably, warnings or hints) look like test failures. It shouldn't be treated exactly like other platforms, since we want it to analyze every Dart file (even those in lib/) once and only once, but driving it through the test runner would be useful.
Currently, if any test file fails to load the entire runner process exits. This isn't useful; a load failure should cause an unsuccessful exit code, but other tests should still run in the meantime.