
factcheck.jl's People

Contributors

andreasnoack, aviks, bicycle1885, catawbasam, greenflash1357, hayd, iainnz, ivarne, jakebolewski, jiahao, michaelhatherly, pratapvardhan, rened, shashi, skumagai, stevengj, tbreloff, tkelman, tomasaschan, vchuravy, yfractal, yuyichao, zachallaun


factcheck.jl's Issues

Info about upcoming removal of packages in the General registry

As described in https://discourse.julialang.org/t/ann-plans-for-removing-packages-that-do-not-yet-support-1-0-from-the-general-registry/ we are planning to remove packages that do not support 1.0 from the General registry. This package has been detected as not supporting 1.0 and is therefore slated for removal. Packages will be removed from the registry approximately one month after this issue is opened.

To transition to the new Pkg system using Project.toml, see https://github.com/JuliaRegistries/Registrator.jl#transitioning-from-require-to-projecttoml.
To then tag a new version of the package, see https://github.com/JuliaRegistries/Registrator.jl#via-the-github-app.

If you believe this package has erroneously been detected as not supporting 1.0 or have any other questions, don't hesitate to discuss it here or in the thread linked at the top of this post.

ERROR: Incorrect usage of @fact:

I just noticed that FactCheck.jl is behaving oddly on julia 0.4.0-dev+6291 (commit fccc141), though it probably started earlier (yesterday?).

A minimal example is the following:

julia> using FactCheck

julia> 0 == 0
true

julia> @fact 0 == 0 => true
ERROR: Incorrect usage of @fact: 0 == 0 => true

Has something changed, or does something need to be adapted, so that tests pass on julia 0.4?

For completeness:

julia> versioninfo()
Julia Version 0.4.0-dev+6291
Commit fccc141* (2015-07-27 20:43 UTC)
Platform Info:
  System: Darwin (x86_64-apple-darwin13.4.0)
  CPU: Intel(R) Core(TM) i7-4750HQ CPU @ 2.00GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
  LAPACK: libopenblas
  LIBM: libopenlibm
  LLVM: libLLVM-3.3
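For context, later issues on this page use a different arrow: FactCheck subsequently moved from => to --> to sidestep exactly this kind of parsing change. A minimal sketch, assuming an installed FactCheck release that understands -->:

using FactCheck

facts("arrow sketch") do
    @fact 0 == 0 --> true   # --> replaces => in later FactCheck releases
end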

Does not play nice when run from a subshell

I'm using Linux with Julia 0.3.7 and I wanted to quickly set up inotify-tools to watch my project and run the test suite whenever I saved a file. The quickest script to write runs the tests from a subshell. The minimal version looks like this run.sh file:

#!/usr/bin/bash
$($@)

If I use Base.Test then I get output as expected from base_test.jl:

module Tests
using Base.Test
@test 1 == 2
end

Running it as bash run.sh julia base_test.jl prints the error message.

However, with this fact_check.jl:

module Tests
using FactCheck
facts() do
    @fact 1 => 2
end
end

Running bash run.sh julia fact_check.jl prints nothing. If run.sh uses an echo instead:

#!/usr/bin/bash
echo $($@)

then I get the output but it is not formatted.
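A plausible explanation, offered as an assumption rather than a confirmed diagnosis: Base.Test reports its failure by throwing an error, which julia prints to stderr, while FactCheck writes its report to stdout; command substitution $( ... ) captures stdout, so the FactCheck report vanishes unless it is echoed back (and then, as noted, the formatting is lost). A sketch of a run.sh that avoids the capture:

#!/usr/bin/env bash
# Run the tests directly instead of capturing stdout with $( ... ), so the
# FactCheck report (written to stdout) reaches the terminal with its formatting.
"$@"
# If capture is really needed:  out=$("$@" 2>&1); printf '%s\n' "$out"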

Annoying `Expr` display (:top expressions)

This should be solved by the solution to #8.

Basically, quoted code at the top level of a file is automatically converted to :top expressions. This is super annoying when trying to read the output of failed tests that deal with expressions, since you have all this extra noise to look at instead of pretty-printed expressions.

Provide a way to override the output of Base.print(io::IO, suite::TestSuite)

We need to write out our results in NUnit format for reporting purposes. I could overwrite the Base.print function and hide the current method, but I would rather reach out first to see whether there are plans to include this in the architecture of the project, or whether you would be willing to add a hook for this behavior, as in:

function Base.print(io::IO, suite::TestSuite, writer=nothing)
    # `writer` is optional; passing nothing keeps the current default output.
    if writer === nothing
        writer = DefaultTestSuiteResultsWriter()
    end

    # write_results, TestSuiteResultsWriter and DefaultTestSuiteResultsWriter are
    # names proposed here, not existing FactCheck functions or types.
    write_results(writer, io, suite)
end

This would mean that, as part of the facts function, we would need a way of providing the writer, which could be an argument, as in the description above.

Documentation for running tests

Maybe improve the README to show users how to actually run tests. I suspect the workflow you have in mind is along the lines of:

$ vi mytest.jl

# add testing code to file

julia> reload("mytest.jl")

# glorious output of results
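For comparison, here is a minimal sketch of the usual Pkg-style convention (the file path and the fact itself are placeholders; exitstatus() is the helper the README mentions elsewhere on this page):

# test/runtests.jl
using FactCheck

facts("basic sanity") do
    @fact 1 + 1 --> 2
end

exitstatus()   # make the process exit non-zero if any fact failed

which can then be run with julia test/runtests.jl, or via Pkg.test for an installed package.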

More compact test output

The current output looks like:

Array playback

4 facts verified.


AudioNode Stopping

1 fact verified.


WAV file write/read

3 facts verified.


Audio Device Listing

1 fact verified.

Which is starting to take up a substantial amount of vertical real estate. How would you feel about:

4 facts verified: Array playback
1 fact verified: AudioNode Stopping
3 facts verified: WAV file write/read
1 fact verified: Audio Device Listing

With the same color highlighting as above.

Erroring in facts block when not wrapped with @fact

Consider a case like this:

foo() = 1 + "0"
facts("Testing") do
    foo()
    @fact 1 => 1
    @fact 2 => 3
    @fact 3 => 3
    @fact 4 => 4
    @fact 5 => 5
end

This still throws an error, since foo() throws and the remaining lines never execute. My main use case for FactCheck is that tests keep running even if a single line fails. Would you consider the best way to handle this to be wrapping the call, e.g. @fact foo() => anything (see the sketch below), or would it make sense to change FactCheck so that if there is an error in one facts block, the other facts blocks continue running? Thanks!
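A sketch of the first option, assuming FactCheck's anything matcher mentioned above: wrapping the call in its own fact means an exception is recorded as an Error for that fact and the remaining facts still run.

facts("Testing") do
    @fact foo() => anything   # an exception here becomes an Error entry, not fatal
    @fact 1 => 1
    @fact 2 => 3
end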

Lists of Facts?

My tests look like

facts("parsing") do
  for (a, b) in [("1", 1), ("0", 0)]
    @fact int(a) => b
  end
end

This seems a bit wordy. It seems like, with the strategic placement of a few splats and a map, one might be able to just say:

facts("parsing", [("1", 1), ("0", 0)]) do (a, b)
  @fact int(a) => b
end

Not the most profound savings in the world, but lists of testable facts feature prominently in most test libraries I've seen. The win would come from automatically generating names for the individual facts, e.g. making an implicit context inside the facts function using show on the given tuples, which makes error messages nicer. This would be similar to NUnit's TestCase attribute. Instead of just a parsing set of facts, you'd know you had a parsing ("1", 1), a parsing ("0", 0), etc.

Take this too far, and you might as well use QuickCheck, but I've found it a very useful approach in testing numeric code where you want to capture critical cases and boundary conditions.

I'd be happy to attempt a patch if the approach seems attractive.
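A rough sketch of how such a method could be layered on top of the existing facts/context API. This is not FactCheck's implementation: the extra facts method and the generated context names are hypothetical, and the do-block is written with two arguments plus a splat to keep the sketch version-agnostic.

using FactCheck
import FactCheck: facts   # so the new method extends FactCheck.facts

function facts(f::Function, desc, cases)
    facts(desc) do
        for case in cases
            # one named context per case, so a failure reports which input broke
            context("$desc $(repr(case))") do
                f(case...)
            end
        end
    end
end

facts("parsing", [("1", 1), ("0", 0)]) do a, b
    @fact int(a) => b
end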

@setup and @teardown blocks

Often a set of tests has some common setup and teardown code you want to run for each test. It would be great to be able to do something like:

module TestMyModule
@setup begin
    obj = SomeObj()
    obj.times_called = 0
end

facts() do
    # setup block runs here
    run_my_func(obj)
    @fact obj.times_called => 1
    # teardown block runs here
end

end # module TestMyModule

I think it would make sense for an @setup block defined at module scope to be run for each facts block, and for an @setup defined inside a facts block to apply to each context.

Adding support for "pending" facts?

Most testing frameworks have support for marking specs for unfinished features as pending. Perhaps this could be done by adding an optional parameter to the context("description", pending=true) or facts("some desc.", pending=true) methods? Or perhaps by adding an @pending macro?
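If the @pending route were taken, usage might look like the sketch below; the macro name and behaviour are part of the proposal here, not a confirmed API, and new_api is just a placeholder:

facts("unfinished feature") do
    @pending new_api() --> :works   # recorded as pending rather than failed
end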

ERROR: unsatisfiable package requirements detected: no feasible version could be found for package: Compat

using Plots;
using Gadfly # to plot out symbolic expression
using SymPy

LoadError: ArgumentError: Plots not found in path
while loading In[1], in expression starting on line 5

This worked last week. I'm currently using 0.4.3. I tried this in an attempt to remedy the problem:

Pkg.clone("https://github.com/tbreloff/Plots.jl.git")
INFO: Cloning Plots from https://github.com/tbreloff/Plots.jl.git
INFO: Computing changes...

and got

ERROR: unsatisfiable package requirements detected: no feasible version could be found for package: Compat
 in error at /Applications/Julia-0.4.5.app/Contents/Resources/julia/lib/julia/sys.dylib
 in resolve at /Applications/Julia-0.4.5.app/Contents/Resources/julia/lib/julia/sys.dylib (repeats 2 times)
 in edit at pkg/entry.jl:26
 in clone at pkg/entry.jl:168
 in clone at pkg/entry.jl:186
 in anonymous at pkg/dir.jl:31
 in cd at file.jl:22
 in cd at pkg/dir.jl:31
 in clone at pkg.jl:34

I also tried 0.4.5, with the same result.

Failures in FactCheck not reflected in Pkg.test()

Test failures in FactCheck do not seem to get reflected in the status of Pkg.test().
Is this a problem with FactCheck or with Pkg.test?
Sample output:

julia> Pkg.test("ValidatedNumerics")
INFO: Testing ValidatedNumerics
Consistency tests
30 facts verified.
Numeric tests
  Failure   :: (line:366) :: got [4.6415888336127781e-01, 8.8790400174260076e-01]₅₃
    @interval 0.1 0.7^(1 / 3) => Interval(0.46415888336127786,0.8879040017426009)
  Failure   :: (line:366) :: got [1.1111111111111109e-01, 1.1111111111111115e-01]₅₃
    @interval h * i => Interval(0.11111111111111105,0.1111111111111111)
Out of 32 total facts:
  Verified: 30
  Failed:   2
Trig tests
  Failure   :: (line:366) :: got [-1e+00, -9.9041036598727789e-02]₅₃
    cos(@interval 1.67 3.2) => Interval(-1.0,-0.09904103659872801)
  Failure   :: (line:366) :: got [-1.0047182299210329e+01, 5.8473854459578652e-02]₅₃
    tan(@interval 1.67 3.2) => Interval(-10.047182299210307,0.05847385445957865)
Out of 13 total facts:
  Verified: 11
  Failed:   2
Tests with rational intervals
2 facts verified.
Linear algebra with intervals tests
2 facts verified.
Interval loop tests
12 facts verified.
INFO: ValidatedNumerics tests passed
INFO: No packages to install, update or remove
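A likely explanation, offered as an assumption: Pkg.test only looks at the exit status of the test process, and FactCheck records failures without throwing, so the process still exits with status 0. The exitstatus() helper mentioned in another issue on this page exists for exactly this; a sketch of how test/runtests.jl might end:

using FactCheck

# ... all the facts blocks ...

exitstatus()   # exit non-zero if any fact failed, so Pkg.test reports the failure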

Fails when testing user-defined types

The following two simple tests fail when dealing with user-defined types.
Clearly the line number given for the error is also wrong.

using FactCheck

type Hello
    a
    b
end

facts("Hello tests") do

    a = Hello(3, 4)
    @fact Hello(3, 4) => Hello(3, 4)
    @fact a => Hello(3, 4)
end

The output is

Hello tests
  Failure   :: (line:337) :: got Hello(3,4)
    Hello(3,4) => Hello(3,4)
  Failure   :: (line:337) :: got Hello(3,4)
    a => Hello(3,4)
Out of 2 total facts:
  Failed:   2
Out[1]:
delayed_handler (generic function with 4 methods)
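A plausible cause, offered as an assumption rather than a confirmed diagnosis: type Hello is mutable, so the default == falls back to object identity (===), and two separately constructed Hello(3, 4) values compare unequal even though their fields match. Defining field-wise equality makes both facts pass:

import Base: ==

==(x::Hello, y::Hello) = x.a == y.a && x.b == y.b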

Very slow tests in Julia 0.5.0-rc3 vs. Julia 0.4

The following test runs in 8.5 seconds on Julia 0.4.5 and some 135 seconds on Julia 0.5.0-rc3:
https://gist.github.com/ulfworsoe/3f8c34d2d1998e4e2adce09d42855f64
If I remove the @fact calls and run the tests outside the facts(...) do block, it is nearly instantaneous. If I use a loop inside the facts(...) do block instead of repeating the lines, it is also nearly instantaneous.

I am not sure whether this is a FactCheck issue or a Julia issue, but I could not narrow it down any further.

Adding FactCheck package

I did a Pkg.add("FactCheck") in julia v0.4 and obtained

ERROR: unsatisfiable package requirements detected: no feasible version could be found for package: BinDeps
 in error at ./error.jl:21
 in resolve at ./pkg/resolve.jl:37
 in resolve at ./pkg/entry.jl:422
 in edit at pkg/entry.jl:26
 in anonymous at task.jl:447
 in sync_end at ./task.jl:413
 [inlined code] from task.jl:422
 in add at pkg/entry.jl:46
 in add at pkg/entry.jl:73
 in anonymous at pkg/dir.jl:31
 in cd at file.jl:22
 in cd at pkg/dir.jl:31
 in add at pkg.jl:23

@mlubin

Bump version on METADATA

v0.1.1, which is the newest version in METADATA, is several months old and doesn't work with the README (namely, exitstatus() doesn't exist).

Could you please tag a new version? (Or I can if you'd prefer.)

roughly is broken

When isapprox was merged into Base, the API changed a little (it now uses keyword arguments, for example), so the roughly helpers are broken now.

I'm working on a fix, but I figured I'd file it here anyway in case someone stumbles over it and wonders what's going on.

Documentation for the "new" isapprox is here; unless I hear other opinions, I'll try to make the API for roughly match it as closely as possible.
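For reference, the newer Base.isapprox takes keyword arguments, roughly isapprox(x, y; rtol=..., atol=...), so a roughly helper that mirrors it could simply forward them. A sketch, not FactCheck's actual code:

roughly(y; kwargs...) = x -> isapprox(x, y; kwargs...)

# then, inside a fact:
# @fact 2.99 --> roughly(3.0; atol=0.05)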

incorrect usage of @fact: $(Expr(:-->, 1, 1))

Hi

I get this error (after Pkg.update()):

julia> facts("Testing basics") do
           @fact 1 --> 1
           @fact 2*2 --> 4
           @fact uppercase("foo") --> "FOO"
           @fact_throws 2^-1
           @fact_throws DomainError 2^-1
           @fact_throws DomainError 2^-1 "a nifty message"
           @fact 2*[1,2,3] --> [2,4,6]
       end
ERROR: Incorrect usage of @fact: $(Expr(:-->, 1, 1))
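One thing worth checking first (an assumption about the likely cause, not a confirmed diagnosis): this message usually indicates a mismatch between how the running Julia parses --> and what the installed FactCheck release expects, e.g. an older FactCheck that still only understands =>. Confirming which version Pkg actually installed narrows it down:

julia> Pkg.status("FactCheck")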

Move to JuliaLang?

Hi @zachallaun

This package is the most widely-used testing package in the Julia ecosystem, but it looks like you haven't had the time or interest to maintain it (which is fine). I have a proposal: move it into the JuliaLang organization so that it is easier for others to contribute, and to relieve you of the burden of having to look after it. Of course we could just fork it, but I'd rather it be your decision.

Thanks!

Custom failure messages

Is there a way to output a better failure message? For instance would it be possible to pass a string to @fact? Do I need my own function to do that?

This is what I have in mind:

for n in some_names 
  @fact p1[n] => p2[n] " $n is not matching "
end

where the message is displayed in the case of a failure.

thanks!

t.
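For reference, the newer output quoted further down this page shows facts annotated with messages like "I will never be seen" and "one doesn't equal two!", which suggests a trailing string argument on @fact is the intended mechanism; a sketch along those lines (p1, p2 and some_names as in the snippet above):

for n in some_names
    @fact p1[n] => p2[n] "$n is not matching"
end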

swallowing stdout

When testing code that prints to stdout, the output gets interleaved with the test status output.

At least one of the Python test runners swallows stdout and only displays it on a test failure. If the tests are passing, you can assume the user doesn't really care what got printed.

I'm not sure exactly how to implement this, but I wanted to feel out whether it would be considered a desirable feature.

roughly() broken for arrays on 0.3

using FactCheck

facts() do
    @fact [1.0,2.0,3.0] --> roughly([1.0,2.0,3.0])
end

gives

  Error :: (line:-1)
    Expression: [1.0,2.0,3.0] --> roughly([1.0,2.0,3.0])
    `isapprox` has no method matching isapprox(::Array{Float64,1}, ::Array{Float64,1})

This worked on the previous release of FactCheck.

julia> versioninfo()
Julia Version 0.3.12-pre+2
Commit 7709f2e* (2015-07-28 06:05 UTC)
Platform Info:
  System: Linux (x86_64-linux-gnu)
  CPU: Intel(R) Core(TM) i5-3320M CPU @ 2.60GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Sandybridge)
  LAPACK: libopenblas
  LIBM: libopenlibm
  LLVM: libLLVM-3.3

CC @stevengj f3a07ae

Add `--> true` if not present

I find myself writing things like

 @fact a == b 

a lot, instead of

 @fact a == b --> true

Could we add the --> true automatically if it is not present, to make it more like Base.Test and save effort? Enabling this could be made optional.
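A rough sketch of the rewrite being asked for, written as a separate macro layered on top of @fact. This is not FactCheck's implementation, and the arrow detection below is an assumption, since how --> parses has varied across Julia versions; the fallback builds the arrow by quoting, so it matches whatever shape the running parser produces.

using FactCheck

# true if ex is already of the form `lhs --> rhs`, in either parse shape
is_arrow(ex) = isa(ex, Expr) &&
    (ex.head == :(-->) ||
     (ex.head == :call && !isempty(ex.args) && ex.args[1] == :(-->)))

macro fact_bool(ex)
    arrow = is_arrow(ex) ? ex : :($ex --> true)   # append `--> true` if missing
    esc(:(@fact $arrow))
end

# usage, inside a facts() block:
# @fact_bool a == b      # behaves like  @fact a == b --> true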

PSA: I just rewrote a lot of the printing code

There were actually some weird glitches to do with indentation, and some other stuff I fixed along the way. It's a bit more verbose now, but much more consistent; e.g. from runtests.jl:

Test error pathways
  Success :: (line:505) :: I will never be seen :: fact was true
    Expression: 1 --> 1
      Expected: 1
      Occurred: 1
  Failure :: (line:505) :: one doesn't equal two! :: fact was false
    Expression: 1 --> 2
      Expected: 1
      Occurred: 2
  Error :: (line:505) :: domains are tricky
    Expression: 2 ^ -1 --> 0.5
    DomainError:
    Cannot raise an integer x to a negative power -n. Make x a float by adding
    a zero decimal (e.g. 2.0^-n instead of 2^-n), or write 1/x^n, float(x)^-n, or (x//1)^-n
     in power_by_squaring at ./intfuncs.jl:82
     in ^ at ./intfuncs.jl:106
     in anonymous at /Users/idunning/.julia/v0.4/FactCheck/src/FactCheck.jl:271
     in do_fact at /Users/idunning/.julia/v0.4/FactCheck/src/FactCheck.jl:333
     in anonymous at /Users/idunning/.julia/v0.4/FactCheck/test/runtests.jl:271
     in facts at /Users/idunning/.julia/v0.4/FactCheck/src/FactCheck.jl:448
     in include at ./boot.jl:259
     in include_from_node1 at ./loading.jl:266
     in process_options at ./client.jl:308
     in _start at ./client.jl:411

Notice the indented backtrace, the clear marking of what the expression was, and what the results are.

Also

  > incrementing
    Failure :: (line:505) :: incrementing :: fact was false
      Expression: inc(inc(inc(0))) --> 2
        Expected: 3
        Occurred: 2
  > nonsense
    Failure :: (line:505) :: nonsense :: fact was false
      Expression: x --> y
        Expected: 5
        Occurred: 10
    Failure :: (line:505) :: strings are strings :: fact was false
      Expression: "baz" --> "bazz"
        Expected: "baz"
        Occurred: "bazz"

A feature of that last one is that strings now get quotation marks, and the top and middle ones show the difference between what you wrote and the evaluated lhs/rhs.

Ensure a function call throws a warning?

There is already @fact_throws to check whether a function call gives an error or an exception, but no @fact_warns for warnings.

I think the syntax should be very similar to that of @fact, since a warning does not interrupt the function's execution: it still returns a value, but prints something to the output.

(And I have no idea how to implement such a thing. Maybe with STDERR redirection? I've yet to make it work…)

Summary stats when checking a large number of facts/files

It would be nice to have a way to get summary stats of how many facts were verified when running multiple facts blocks in a row, as when a runtests.jl loads a number of test files with individual facts statements in them. Summary stats are common in other unit-testing frameworks, e.g. Ruby's minitest.

Allow more test operators (<=, <, etc.)

Currently the only operator allowed in a @fact is =>. I have quite a few tests that do other boolean checks (e.g. checking that @allocated is less than N bytes). Right now I've defined a function "lessthan" that works as a FactCheck-compatible test, but it's pretty verbose.

What I'd like is for the @fact macro to also handle other top-level boolean operators like <=, <, etc. Then I could just do:

@fact (@allocated my_func()) < 200

Are there any fundamental architecture issues that make this a bad idea?
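For reference, a sketch of the "lessthan" workaround mentioned above, using the same convention as the built-in checkers (roughly, anything): the right-hand side of a fact can be a one-argument predicate that is applied to the left-hand value. Here my_func stands in for whatever function is being measured.

lessthan(n) = x -> x < n

facts("allocation budget") do
    @fact (@allocated my_func()) => lessthan(200)
end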

Better formatting for failures

Given the test:

@fact 0 => 1

This is hard to read:

Failure: :(test#16(t#17) = (t#17$(DeFacto).==1)(0))

Would like to see something like:

Failure (file.jl:8) 0 == 1

Failure with `endof` has no method matching endof(::Pair{Tuple{Any},Foo})

Almost certainly caused by the change to give Pairs for dictionary iteration. See this Travis log, or this snippet:

  Error   :: (line:-1) :: index set printing
    sprint(print,obj) => exp_str
MethodError: `endof` has no method matching endof(::Pair{Tuple{Any},JuMP.Variable})
 in cont_str at /home/travis/.julia/v0.4/JuMP/src/print.jl:378
 in cont_str at /home/travis/.julia/v0.4/JuMP/src/print.jl:456
 in print at /home/travis/.julia/v0.4/JuMP/src/print.jl:329
 in sprint at iostream.jl:210
 in sprint at iostream.jl:214
 in anonymous at /home/travis/.julia/v0.4/FactCheck/src/FactCheck.jl:142
 in do_fact at /home/travis/.julia/v0.4/FactCheck/src/FactCheck.jl:201
 in io_test at /home/travis/.julia/v0.4/JuMP/test/print.jl:142
 in anonymous at /home/travis/.julia/v0.4/JuMP/test/print.jl:85
 in context at /home/travis/.julia/v0.4/FactCheck/src/FactCheck.jl:341
 in anonymous at /home/travis/.julia/v0.4/JuMP/test/print.jl:67
 in facts at /home/travis/.julia/v0.4/FactCheck/src/FactCheck.jl:315
 in include at ./boot.jl:254
 in include_from_node1 at ./loading.jl:197
 in include at ./boot.jl:254
 in include_from_node1 at ./loading.jl:197
 in process_options at ./client.jl:308
 in _start at ./client.jl:411
