
boot-test's People

Contributors

alandipert, crisptrutski, frankiesardo, martinklepsch, micha, nberger, onetom, schaney, seancorfield, wagjo


boot-test's Issues

Change filter option

I think the way the filter option needs to be passed is not very clear and more complicated than it needs to be. In Perun, filter functions are just passed as code and work pretty well. I think the trade-off might be that it's not possible to pass the option on the CLI, but I haven't actually checked...
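
For illustration, a sketch of what the Perun-style approach could look like in build.boot, assuming a hypothetical :filter key that accepts a plain predicate (this is not boot-test's current API; its real option is -f/--filters, which takes quoted expressions):

;; Hypothetical :filter key taking a predicate over namespace symbols.
(task-options!
  test {:filter (fn [ns-sym] (re-find #"-test$" (str ns-sym)))})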

performance issue

When the test task is used in a TDD workflow and you do not pass it any specific namespaces, its performance would not be judged acceptable by a TDD practitioner.

You can verify the issue as follows:

git clone https://github.com/magomimmo/modern-cljs.git
cd modern-cljs
git checkout se-tutorial-15
boot watch testing test -n modern-cljs.shopping.validators-test

Starting file watcher (CTRL-C to quit)...


Testing modern-cljs.shopping.validators-test

Ran 1 tests containing 13 assertions.
0 failures, 0 errors.
Elapsed time: 4.777 sec

Then force a test failure as follows:

Modify the first unit test in modern-cljs/test/cljc/modern_cljs/shopping/validators_test.cljc as follows:

(deftest validate-shopping-form-test
  (testing "Shopping Form Validation"
    (testing "/ Happy Path"
      (are [expected actual] (= expected actual)
           nil (validate-shopping-form "" "0" "0" "0")                 ;force a failure
           nil (validate-shopping-form "1" "0.0" "0.0" "0.0")
           nil (validate-shopping-form "100" "100.25" "8.25" "123.45")))

    ...))))

you'll see the following report:

Testing modern-cljs.shopping.validators-test

FAIL in (validate-shopping-form-test) (validators_test.cljc:9)
Shopping Form Validation / Happy Path
expected: nil
  actual: {:quantity
           ["Quantity can't be empty"
            "Quantity has to be an integer number"
            "Quantity can't be negative"]}
    diff: + {:quantity
             ["Quantity can't be empty"
              "Quantity has to be an integer number"
              "Quantity can't be negative"]}

Ran 1 tests containing 13 assertions.
1 failures, 0 errors.
clojure.lang.ExceptionInfo: Some tests failed or errored
    data: {:test 1, :pass 12, :fail 1, :error 0, :type :summary}
                clojure.core/ex-info       core.clj: 4593
   adzerk.boot-test/eval549/fn/fn/fn  boot_test.clj:   73
boot.task.built-in/fn/fn/fn/fn/fn/fn   built_in.clj:  233
   boot.task.built-in/fn/fn/fn/fn/fn   built_in.clj:  233
      boot.task.built-in/fn/fn/fn/fn   built_in.clj:  230
                 boot.core/run-tasks       core.clj:  701
                   boot.core/boot/fn       core.clj:  711
 clojure.core/binding-conveyor-fn/fn       core.clj: 1916
                                 ...
Elapsed time: 0.513 sec

As you can see, it takes 0.5 sec.

Now remove the forced failure:

(deftest validate-shopping-form-test
  (testing "Shopping Form Validation"
    (testing "/ Happy Path"
      (are [expected actual] (= expected actual)
           nil (validate-shopping-form "1" "0" "0" "0")                 ;fix the forced failure
           nil (validate-shopping-form "1" "0.0" "0.0" "0.0")
           nil (validate-shopping-form "100" "100.25" "8.25" "123.45")))

    ...))))

As you can see, it takes only 0.3 sec to get the results again:

Testing modern-cljs.shopping.validators-test

Ran 1 tests containing 13 assertions.
0 failures, 0 errors.
Elapsed time: 0.327 sec

Stop the boot process and relaunch it without specifying any namespace:

boot watch testing test

Starting file watcher (CTRL-C to quit)...


Testing modern-cljs.core

Testing modern-cljs.login

Testing modern-cljs.login.validators

Testing modern-cljs.remotes

Testing modern-cljs.shopping.validators

Testing modern-cljs.shopping.validators-test

Testing modern-cljs.templates.shopping

Ran 1 tests containing 13 assertions.
0 failures, 0 errors.
Elapsed time: 8.998 sec

Now it takes 9 sec, instead of roughly 5, to launch the tests. Force a failure as before:

Testing modern-cljs.core

Testing modern-cljs.login

Testing modern-cljs.login.validators

Testing modern-cljs.remotes

Testing modern-cljs.shopping.validators

Testing modern-cljs.shopping.validators-test

FAIL in (validate-shopping-form-test) (validators_test.cljc:9)
Shopping Form Validation / Happy Path
expected: nil
  actual: {:quantity
           ["Quantity can't be empty"
            "Quantity has to be an integer number"
            "Quantity can't be negative"]}
    diff: + {:quantity
             ["Quantity can't be empty"
              "Quantity has to be an integer number"
              "Quantity can't be negative"]}

Testing modern-cljs.templates.shopping

Ran 1 tests containing 13 assertions.
1 failures, 0 errors.
clojure.lang.ExceptionInfo: Some tests failed or errored
    data: {:test 1, :pass 12, :fail 1, :error 0, :type :summary}
                clojure.core/ex-info       core.clj: 4593
   adzerk.boot-test/eval549/fn/fn/fn  boot_test.clj:   73
boot.task.built-in/fn/fn/fn/fn/fn/fn   built_in.clj:  233
   boot.task.built-in/fn/fn/fn/fn/fn   built_in.clj:  233
      boot.task.built-in/fn/fn/fn/fn   built_in.clj:  230
                 boot.core/run-tasks       core.clj:  701
                   boot.core/boot/fn       core.clj:  711
 clojure.core/binding-conveyor-fn/fn       core.clj: 1916
                                 ...
Elapsed time: 5.868 sec

Now it takes almost 6 sec to rerun the tests. Fix the forced failure:

Testing modern-cljs.core

Testing modern-cljs.login

Testing modern-cljs.login.validators

Testing modern-cljs.remotes

Testing modern-cljs.shopping.validators

Testing modern-cljs.shopping.validators-test

Testing modern-cljs.templates.shopping

Ran 1 tests containing 13 assertions.
0 failures, 0 errors.
Elapsed time: 5.358 sec

As you can see, it again takes more than 5 sec to rerun the tests.

Is that normal?

Support option to opt out of terminating on any failures

When chaining test runners together (e.g. unit tests and in-browser scenario tests), it can be more illuminating to get all failures at once, not only those from the first runner that has failures.

It also seems useful to be able to hang onto the exit code until later, so that boot test test-cljs would still be able to fail on CI given only boot-test failures.

Towards this end I'm considering an exit! task for boot-cljs-test, which hangs on to any prior failures. A crude, but perhaps workable (with some amendments), implementation can be seen in crisptrutski/boot-cljs-test#24
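
For concreteness, a crude sketch of the idea (names made up; this is not the code in that PR). It leans on boot's convention that a task is middleware from handler to handler, and assumes the wrapped test task throws before invoking its continuation, as boot-test does:

;; Record failures instead of aborting, then fail once at the end.
(def test-failures (atom []))

(defn tolerate
  "Wrap a test task so its summary exception is recorded, not thrown."
  [task]
  (fn [next-handler]
    (let [wrapped (task next-handler)]
      (fn [fileset]
        (try (wrapped fileset)
             (catch clojure.lang.ExceptionInfo e
               (swap! test-failures conj (ex-data e))
               ;; continue the pipeline with the incoming fileset
               (next-handler fileset)))))))

(deftask exit!
  "Fail the build if any tolerated task recorded failures."
  []
  (fn [next-handler]
    (fn [fileset]
      (if (seq @test-failures)
        (throw (ex-info "Some tests failed or errored" {:failures @test-failures}))
        (next-handler fileset)))))

A pipeline like (comp (tolerate (test)) (tolerate (test-cljs)) (exit!)) would then surface every runner's failures before failing once at the end.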

Frequent java.lang.OutOfMemoryError: Metaspace

When using boot watch test I very frequently get this error:

clojure.lang.Compiler$CompilerException: java.lang.OutOfMemoryError: Metaspace, compiling:(ragtime/protocols.clj:3:1)
             java.lang.OutOfMemoryError: Metaspace
                                ...                
               clojure.core/load/fn  core.clj: 5893
     clojure.core/load/invokeStatic  core.clj: 5892
                  clojure.core/load  core.clj: 5876
                                ...                
 clojure.core/load-one/invokeStatic  core.clj: 5697
              clojure.core/load-one  core.clj: 5692
           clojure.core/load-lib/fn  core.clj: 5737
 clojure.core/load-lib/invokeStatic  core.clj: 5736
              clojure.core/load-lib  core.clj: 5717
                                ...                
    clojure.core/apply/invokeStatic  core.clj:  648
clojure.core/load-libs/invokeStatic  core.clj: 5774
             clojure.core/load-libs  core.clj: 5758
                                ...                
    clojure.core/apply/invokeStatic  core.clj:  648
  clojure.core/require/invokeStatic  core.clj: 5796
               clojure.core/require  core.clj: 5796

I have not timed it precisely, but my feeling is it happens after roughly every 20 minutes of the tests running. I already have a fairly generous configuration, which I gathered from the boot wiki (but it might be off):

export BOOT_JVM_OPTIONS="-Xmx2g -client -XX:MaxMetaspaceSize=1g -XX:+TieredCompilation -XX:TieredStopAtLevel=1 -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -Xverify:none -XX:-OmitStackTraceInFastThrow"

What else can I tweak? I also briefly investigated whether it was the pod/pod-pool's fault in some way. It probably is, but I could not figure out why during my first attempt.

Exception when test does not pass

I'm not sure how much of an issue this is, but when a test does not pass, a strange stack trace is shown. I've changed a boot-test test to fail:

Testing adzerk.boot-test.test

FAIL in (have-you-tried) (test.clj:5)
expected: 2
  actual: 1
    diff: - 2
          + 1

Ran 1 tests containing 1 assertions.
1 failures, 0 errors.
             clojure.lang.ExceptionInfo: clojure.lang.ExceptionInfo: Some tests failed or errored {:type :summary, :fail 1, :error 0, :pass 0, :test 1}
    data: {:file
           "/var/folders/fz/r0qnzmvd69lf__k3m920myym0000gn/T/boot.user5071628562246700946.clj", :line 29}
java.util.concurrent.ExecutionException: clojure.lang.ExceptionInfo: Some tests failed or errored {:type :summary, :fail 1, :error 0, :pass 0, :test 1}
    clojure.lang.ExceptionInfo: Some tests failed or errored
        data: {:type :summary, :fail 1, :error 0, :pass 0, :test 1}
             clojure.core/ex-info                  core.clj:      4591
  adzerk.boot-test/eval453/fn/fn/fn                boot_test.clj:   66
                 boot.core/run-tasks               core.clj:       680
                            ...
              boot.user/eval513/fn  boot.user5071628562246700946.clj:   29
clojure.core/binding-conveyor-fn/fn                         core.clj: 1914

To be honest, I have no clue what this stack trace means. Is it really so informative that it should be printed each time the tests fail?

test-ns-hook

clojure.test supports an alternative way to run tests when the order in which the tests are run is of importance. Below is a quote from the docs that explains the mechanism. This mode of operation is not supported by boot-test. If this is something we want to support, we should add the feature; if the sentiment is that it is not widely used, we should document the lack of support for it (in the README?).

By default, these functions will search for all tests defined in
a namespace and run them in an undefined order. However, if you
are composing tests, as in the "arithmetic" example above, you
probably do not want the "addition" and "subtraction" tests run
separately. In that case, you must define a special function
named "test-ns-hook" that runs your tests in the correct order:

(defn test-ns-hook []
  (arithmetic))
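
For context, here is the "arithmetic" example the quoted docstring refers to, reproduced as a self-contained namespace:

(ns example.arithmetic-test
  (:require [clojure.test :refer [deftest is]]))

(deftest addition
  (is (= 4 (+ 2 2)))
  (is (= 7 (+ 3 4))))

(deftest subtraction
  (is (= 1 (- 4 3)))
  (is (= 3 (- 7 4))))

(deftest arithmetic
  (addition)
  (subtraction))

;; With test-ns-hook defined, the runner calls it instead of scanning the
;; namespace, so addition and subtraction run only via arithmetic, in order.
(defn test-ns-hook []
  (arithmetic))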

Tests defined in cljx files are not recognized

Tests defined in cljx files and processed via boot-cljx are not recognized by boot-test.
Combining the two tasks also increases the processing time: 10 seconds, instead of less than a second if I remove the test task (but still keep the cljx task).

Namespaces being tested multiple times

When inferring namespaces from the fileset, certain namespaces may be included multiple times, due to various files appearing in multiple commits, e.g. via transformation. No amount of redundant sources seems to justify extra execution and reporting though 😄

In the particular case I've seen, which is running boot-test directly after boot-cljs-test, I'm not actually even transforming these particular files; I'm just adding a new file (a "suite" namespace to run the tests). Interestingly, this causes .cljc tests to run twice, but not the .clj ones.

Please see this branch if you'd like to reproduce, particularly if you think that I've made a mistake in my use of the fileset API.
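
If that's the cause, the fix sketch is tiny; infer-namespaces below is a hypothetical stand-in for however boot-test actually derives namespaces from the fileset:

;; Hedged sketch: deduplicate inferred namespaces before running them.
;; infer-namespaces is hypothetical, not boot-test's real API.
(defn unique-test-namespaces [fileset]
  (distinct (infer-namespaces fileset)))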

Does not find any tests

I have tests in "/tests" and added that directory to the source-paths as indicated in the README. I also tried putting tests directly in the application's source files. However, boot test does not run any tests; it just exits with No namespaces were tested.

It seems that namespaces need to be passed as parameters in any case? But then how would the example boot watch test given in the README work?

If one has to provide the namespaces as parameters, what would be a reasonable way to do so without having to hard-code the relevant test namespaces into the task definition?

Help is highly appreciated. I feel like I am missing something here; it shouldn't be that hard to get this working, right?
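
For what it's worth, one way to supply namespaces without hard-coding them into the task definition is to compute the set in build.boot. A sketch, assuming the tests live under "test" and that org.clojure/tools.namespace is added to :dependencies:

;; Sketch: discover test namespaces once, at launch, and pass them to -n.
(require '[clojure.tools.namespace.find :as find])

(def test-namespaces
  (set (find/find-namespaces-in-dir (java.io.File. "test"))))

(deftask autotest []
  (comp (watch) (test :namespaces test-namespaces)))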

option :junit-output-to must be of type str

What am I doing wrong here?

$ boot test -j hello
             clojure.lang.ExceptionInfo: java.lang.IllegalArgumentException: option :junit-output-to must be of type str
    data: {:file
           "/var/folders/8s/ct1psdgd7618_5_n_8clwdhm0000gn/T/boot.user5523702313645485487.clj",
           :line 29}
java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: option :junit-output-to must be of type str
     java.lang.IllegalArgumentException: option :junit-output-to must be of type str
        adzerk.boot-test/eval189/fn                     boot_test.clj:  118
                                ...
    clojure.core/apply/invokeStatic                          core.clj:  646
                 clojure.core/apply                          core.clj:  641
            boot.user/eval672/fn/fn  boot.user5523702313645485487.clj:   25
                                ...
    clojure.core/apply/invokeStatic                          core.clj:  646
                 clojure.core/apply                          core.clj:  641
          boot.core/construct-tasks                          core.clj:  760
                                ...
    clojure.core/apply/invokeStatic                          core.clj:  646
                 clojure.core/apply                          core.clj:  641
                  boot.core/boot/fn                          core.clj:  805
clojure.core/binding-conveyor-fn/fn                          core.clj: 1938
                                ...

Using BOOT_VERSION=2.6.0 and

    [adzerk/boot-test "1.1.1" :scope "test"]
    [adzerk/boot-reload "0.4.8" :scope "test"]

Providing it via task-options! works without problems:

(task-options!
    test {:junit-output-to "hello"})

selectors for tests

From lein help test:

Run the project's tests.

Marking deftest or ns forms with metadata allows you to pick selectors to
specify a subset of your test suite to run:

   (deftest ^:integration network-heavy-test
     (is (= [1 2 3] (:numbers (network-operation)))))

Write the selectors in project.clj:

   :test-selectors {:default (complement :integration)
                    :integration :integration}

Arguments to this task will be considered test selectors if they are keywords,
otherwise arguments must be test namespaces or files to run. With no
arguments the :default test selector is used if present, otherwise all
tests are run. Test selector arguments must come after the list of namespaces.

A default :only test-selector is available to run select tests. For example,
`lein test :only leiningen.test.test/test-default-selector` only runs the
specified test. A default :all test-selector is available to run all tests.

Arguments: ([& tests])

It would be nice to have a :test-selector parameter in boot test as well (it allows filtering namespaces at the moment but, as far as I know, not individual tests).
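
For comparison, here is a hedged sketch of how metadata selectors can be applied with plain clojure.test, independent of any runner (run-selected is a made-up helper, not part of boot-test):

(ns user
  (:require [clojure.test :as t]))

(defn run-selected
  "Run only the test vars in ns-sym whose metadata satisfies selector."
  [ns-sym selector]
  (require ns-sym)
  (let [test-vars (->> (vals (ns-interns ns-sym))
                       (filter (comp :test meta))      ; keep deftest vars
                       (filter (comp selector meta)))] ; apply the selector
    (binding [t/*report-counters* (ref t/*initial-report-counters*)]
      (doseq [v test-vars] (t/test-var v))
      @t/*report-counters*)))

;; e.g. run everything except integration tests:
;; (run-selected 'my.app-test (complement :integration))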

Support dependency scopes

How difficult would it be to make it support dependency scopes, so that it could be called like this:

(boot (test :include-scope #{"test"}))
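Illustratively, the scope filtering itself would be cheap. A sketch of what it could mean internally, with deps-with-scope as a made-up helper (dependency vectors without :scope default to "compile"):

;; Sketch only: select env dependencies by :scope, e.g. before building
;; the test pod. Not an existing boot-test feature.
(defn deps-with-scope [scopes]
  (->> (boot.core/get-env :dependencies)
       (filter (fn [[_name _version & {:keys [scope] :or {scope "compile"}}]]
                 (contains? scopes scope)))))

;; (deps-with-scope #{"test"}) => only deps declared with :scope "test"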

Not sure how to use filter option

The instructions aren't very clear (at least not to me). Could someone provide a working example? An example that only runs namespaces with "test" in the name would be really helpful:

$ boot test -f "(re-matcher #\".*test.*\" %)"

I couldn't get the above to work. I tried a bunch of different things to no avail:

$ boot test -f "(re-matcher #\".*utility-belt\$\" (str %))"

The above ran all my tests; I was just trying to run the test.utility-belt namespace.

$ boot test -f "(= \"test.utility-belt\" %)"

This started the test runner, printed the names of all of my namespaces, and ran zero tests.
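
A hedged guess, judging only from the behavior above: re-matcher merely constructs a java.util.regex.Matcher, which is truthy whether or not it ever matches (hence all tests ran), and % appears to be bound to the namespace symbol rather than a string (hence the = comparison never matched). If that's right, something along these lines should filter as intended:

$ boot test -f "(re-find #\"test\" (str %))"
$ boot test -f "(= \"test.utility-belt\" (str %))"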

Support regexes for `:include` and `:exclude` terms

This is just a standardization / ergonomics drive, as -f already gives all the power.

It would be sweet if symbols still worked as literals too. Rather than trying to disambiguate the type from the CLI, regexes could always be required to match the entire namespace, so there's no real change in usage unless you use wildcards, etc.

Not sure if the namespaces should be whittled down statically from the fileset (like boot-expectations) or dynamically (perhaps piggybacking on the -f machinery).

There's a trade-off here: dynamic is more correct; static would be faster (fewer unnecessary namespaces loaded in the first place).

using boot-test within a TDD workflow

I'm trying to use boot-test (and boot-cljs-test) within a TDD workflow.

In this kind of workflow, you don't know in advance (i.e. when you launch the boot command) which namespaces contain the unit tests and their assertions. So it can be a good choice not to use the -n option to statically choose the test namespaces.

That said, if you do not use that option, boot-test treats all the namespaces found in the project as test namespaces. In a TDD workflow, the time it takes to run tests over all the namespaces that contain no tests is very long compared with the time it takes to run only the namespaces that actually contain tests.

In my understanding, even the -e option to exclude namespaces from testing does not help, because it is set at launch time, not at runtime.

Even if I understand that the choice to test all namespaces when you do not specify the -n option has to do with the fact that you could deftest in the same namespace you want to test, it would be very useful to have an option to set one or more dirs (e.g. test/clj) where boot-test looks at runtime for the namespaces containing the tests to be run.

This way, as soon as I add a new test namespace in that dir, its tests are executed without the delay caused by running tests on all the namespaces of the project.

Does it make sense to you?
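
For concreteness, a sketch of the behavior the proposed option would need: rediscover the test namespaces on every run rather than once at launch. :test-dirs does not exist in boot-test; the wrapper below fakes it in build.boot with tools.namespace (which would need to be added to :dependencies):

;; Sketch only: recompute test namespaces from dirs on each run, then
;; delegate to the regular test task with -n set accordingly.
(require '[clojure.tools.namespace.find :as find])

(deftask test-in-dirs
  "Run boot-test over namespaces discovered at runtime in dirs."
  [d dirs DIR #{str} "the directories to scan for test namespaces"]
  (fn [next-handler]
    (fn [fileset]
      (let [nses (set (mapcat #(find/find-namespaces-in-dir (java.io.File. %)) dirs))]
        (((test :namespaces nses) next-handler) fileset)))))

;; usage: boot watch test-in-dirs -d test/clj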

Separate test source path

Hey, it would be nice to have a separate config for test sources instead of using the same application source path, like lein's :test-paths.

thanks
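
For reference, the usual boot workaround is a small task that merges the test path into the environment only when testing. A sketch, assuming the test sources live under "test":

;; Sketch of the common workaround: a `testing` task that adds the test
;; sources to the fileset only for pipelines that include it.
(deftask testing
  "Add the test directory to :source-paths."
  []
  (merge-env! :source-paths #{"test"})
  identity)

;; usage: boot testing test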

Option to stop on first failure

When I'm doing a fairly substantial refactoring, I find it useful to make the tests stop after the first failure, as odds are they're going to be mostly failing in about the same way, so being told that they're all broken is less useful. I can't, however, figure out how to make boot-test do this...
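
As a workaround sketch (boot-test has no such option as far as I know): clojure.test can be bent into fail-fast behavior by recording the first failure and skipping every remaining test var. This namespace would have to be loaded wherever the tests actually run (with boot-test, inside the test pod):

(ns my.fail-fast  ; hypothetical helper namespace
  (:require [clojure.test :as t]))

(def failed? (atom false))

(def original-fail  (get-method t/report :fail))
(def original-error (get-method t/report :error))

;; Remember that something went wrong, but keep the normal output.
(defmethod t/report :fail  [m] (reset! failed? true) (original-fail m))
(defmethod t/report :error [m] (reset! failed? true) (original-error m))

;; Skip every remaining test var once a failure or error has been seen.
(alter-var-root #'t/test-var
  (fn [orig] (fn [v] (when-not @failed? (orig v)))))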
