
alcotest's Introduction

MirageOS logo
Build Unikernels in OCaml



MirageOS is a library operating system that constructs secure, performant and resource-efficient unikernels.

About

MirageOS is a library operating system that constructs unikernels for secure, high-performance network applications across a variety of cloud computing and mobile platforms. Developers write code on a traditional OS such as Linux or macOS, then compile it into a fully-standalone, specialised unikernel that runs under the Xen or KVM hypervisors, or under lightweight hypervisors such as FreeBSD's bhyve and OpenBSD's VMM. These unikernels can be deployed on public clouds, like Amazon's Elastic Compute Cloud and Google Compute Engine, or in private deployments.

The most up-to-date documentation can be found at the homepage. The site is a self-hosted unikernel. Simpler skeleton applications are also available online, as are collections of MirageOS unikernel repositories.

This repository

This repository contains the mirage command-line tool to create and deploy applications with MirageOS. This tool wraps the specialised configuration and build steps required to build MirageOS on all the supported targets.

Local install

You will need the following:

  • a working OCaml compiler (4.08.0 or higher);
  • the opam source package manager (2.1.0 or higher);
  • an x86_64 or armel Linux host to compile Xen kernels, or FreeBSD, OpenBSD or macOS for the solo5 and user-level versions.

Then run:

$ opam install mirage
$ mirage --version

This should display at least version 4.0.0.

Using mirage

There are multiple stages to using mirage:

  • write a config.ml to describe the components of your application;
  • call mirage configure to generate the necessary code and metadata;
  • optionally call make depends to install external dependencies and download opam packages into the current dune workspace;
  • call dune build to build the unikernel.

You can find documentation, walkthroughs and tutorials over on the MirageOS website. The install instructions are a good place to begin!
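
For orientation, here is a minimal config.ml sketch in the style of the hello-world skeleton; the module name Unikernel.Hello and the exact Functoria DSL calls are illustrative, following the Mirage 4 API:

(* config.ml -- a minimal sketch, assuming the Mirage 4 Functoria DSL *)
open Mirage

(* The unikernel entry point is assumed to be the Hello module in unikernel.ml. *)
let main = main "Unikernel.Hello" (time @-> job)

(* Register the unikernel so that `mirage configure` can generate the build metadata. *)
let () = register "hello" [ main $ default_time ]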

alcotest's People

Contributors

aantron, andersfugmann, apeschar, avsm, craigfe, dinosaure, dsheets, edwintorok, gpetiot, gs0510, hannesm, julow, kkirstein, linse, mefyl, misterda, mjambon, nathanreb, pagingmatt, pqwy, psafont, rgrinberg, rizo, samoht, seliopou, talex5, thelortex, tmcgilchrist, yallop, zjhmale


alcotest's Issues

Example of testing record output

Is there any example on how to test record values with Alcotest?

For example testing something in the following way:
I'm using ReasonML and this is what I wrote

let parsePath0 () =>
  Alcotest.(check Path.pathObject)
  "Path.parse 0"
  (Path.(
    { 
      dir: Some "/home/user", 
      root: Some "/", 
      base: Some "dir", 
      name: Some "dir", 
      ext: None 
    })) (Path.parse "/home/user/dir");

which converts to the following in OCaml:

let parsePath0 () =
  (let open Alcotest in check Path.pathObject) "Path.parse 0"
    (let open Path in
       {
         dir = (Some "/home/user");
         root = (Some "/");
         base = (Some "dir");
         name = (Some "dir");
         ext = None
       }) (Path.parse "/home/user/dir")

However I get the following error:

Error: Unbound value Path.pathObject

from this line:

Alcotest.(check Path.pathObject)

pathObject is a record type, and it is definitely defined, since it has the same type as the record inside of the local open:

(Path.({ dir: Some "/home/user", root: Some "/", base: Some "dir", name: Some "dir", ext: None }))
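
The issue is that check expects a value of type 'a Alcotest.testable, not a type: a pathObject value has to be built with Alcotest.testable. A sketch, assuming Path exposes a pretty-printer pp (any Format-based printer will do) and that structural equality is adequate for the record:

(* Hypothetical: build the missing testable from a printer and an equality. *)
let pathObject : Path.t Alcotest.testable = Alcotest.testable Path.pp ( = )

let parsePath0 () =
  Alcotest.check pathObject "Path.parse 0"
    Path.{ dir = Some "/home/user"; root = Some "/"; base = Some "dir";
           name = Some "dir"; ext = None }
    (Path.parse "/home/user/dir")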

A more quiet output

Hi,

When using alcotest, I noticed that it is very verbose when there are no failures. When there are many tests and they are run in a tight loop, it's not immediately obvious if there are failures or not.

Ideally, what I'm after is a mode where only failures are displayed (or something very simple like one dot per passing test case).

I'm happy to provide a PR to add a flag if this is something you agree with.

Thanks!

Errors from Lwt.async not shown

The default async exception handler prints to stderr and exits. Since alcotest redirects stderr, the user doesn't see anything except that the tests stop with ... displayed.

As a workaround, I use this in my tests:

let fd_stderr = Unix.descr_of_out_channel stderr
let real_stderr = Unix.dup fd_stderr
let () =
  let old_hook = !Lwt.async_exception_hook in
  Lwt.async_exception_hook := (fun ex ->
    Unix.dup2 real_stderr fd_stderr;
    Printf.eprintf "\nasync_exception_hook:\n%!";
    old_hook ex
  )

I'm not sure what the correct fix is, but the current situation doesn't seem ideal.

Terminal mangling issues?

Trying to get an extremely simple test case working (pared it down for this issue) but am running into issues:

let strip_chr () =
  Alcotest.(check (list string)) "removed chrs when needed"
    ["1"; "2"; "2"; "MT"; "X"; "JH584295.1"; "GL456382.1"]
    ["1"; "2"; "2"; "MT"; "X"; "JH584295.1"; "GL456382.1"]

let utility_tests = [
  "Remove chr from chromosomes if necessary", `Quick, strip_chr;
]

let () =
  Alcotest.run "Test FASTA sorting" [
    "utils", utility_tests;
  ]

Running this test gives me:

[screenshot, 2016-04-21: test output with mangled alignment]

You can see the text is not quite right… it gets more confusing when there is a failure, e.g.:

let strip_chr () =
  Alcotest.(check (list string)) "removed chrs when needed"
    ["2"; "2"; "MT"; "X"; "JH584295.1"; "GL456382.1"]
    ["1"; "2"; "2"; "MT"; "X"; "JH584295.1"; "GL456382.1"]

[screenshot, 2016-04-21: failure output, also mangled]

opam info alcotest
             package: alcotest
             version: 0.4.8
          repository: default
        upstream-url: https://github.com/mirage/alcotest/archive/0.4.8.tar.gz
       upstream-kind: http
   upstream-checksum: e293617063cb379442d5f5b12a373b04
            homepage: https://github.com/mirage/alcotest/
         bug-reports: https://github.com/mirage/alcotest/issues/
            dev-repo: https://github.com/mirage/alcotest.git
              author: Thomas Gazagnaire
             license: ISC
             depends: ocamlfind & ocamlbuild & oasis & astring & result & cmdliner
   installed-version: 0.4.8 [4.02.3]
  available-versions: 0.1.0, 0.2.0, 0.3.0, 0.3.1, 0.3.2, 0.3.3, 0.4.0, 0.4.1, 0.4.2, 0.4.3, 0.4.4, 0.4.5, 0.4.6, 0.4.7, 0.4.8
         description: Alcotest is a lightweight and colourful test framework.

Add failf

It would be useful to have Alcotest.failf for format-string'd error.

I'll PR if you agree. Probably just Fmt.kstrf Alcotest.fail as the implementation?
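
A sketch of that implementation (Fmt.kstrf is the continuation-passing formatter from the Fmt library; newer Fmt releases spell it kstr):

(* Proposed helper: fail with a formatted message. *)
let failf fmt = Fmt.kstrf Alcotest.fail fmt

(* Usage: failf "expected %d items, got %d" 3 (List.length items) *)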

Missing comparison with other testing frameworks

It would be great if the README (or another good place in the documentation) could include a short comparison with other common testing frameworks, such as OUnit. What are the advantages? What are the drawbacks? When should one use alcotest, and when OUnit?

allow command-line arguments to be passed to tests?

We have tests which might generate some artifacts useful for correcting the program behavior that caused the test to fail. It would be great to be able to pass command-line arguments to the test code specifying whether these artifacts should be generated, and if so, where to store them, but currently we have no way to do this for our Alcotest tests.
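
For reference, Alcotest later grew run_with_args, which threads a Cmdliner term through to each test case; a sketch with a hypothetical --artifacts-dir option:

let artifacts_dir =
  Cmdliner.Arg.(value & opt (some string) None & info [ "artifacts-dir" ])

(* With run_with_args, each test case receives the parsed argument. *)
let test_something dir =
  (match dir with
   | Some d -> Printf.printf "writing artifacts to %s\n" d
   | None -> ());
  Alcotest.(check int) "trivial" 1 1

let () =
  Alcotest.run_with_args "suite" artifacts_dir
    [ ("main", [ ("something", `Quick, test_something) ]) ]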

Alcotest output requires verbose flag

I'm trying to use Alcotest to test some code with the following types and test function. However, unless I use the "--verbose" flag in argv when calling the Alcotest run function, I get no output, even if there are lots of test failures. Using the "--verbose" flag everything displays fine.

type test = { citation : string option
            ; caption  :  string option
            ; expected : Sexp.t option
            ; program  :  string} [@@deriving sexp]

type suite = { title : string
             ; reference : string option
             ; tests : test list } [@@deriving sexp]


(** Apply functions to the test programs in a suite *)
let test (s:suite) ~f =
  let open Alcotest in
  let sexp = testable Sexp.pp Sexp.equal in
  let test_set = List.mapi s.tests ~f:(fun i test ->
      let name = match test.caption with Some s -> s | None -> Int.to_string i in
      let exp = match test.expected with Some s -> s | None -> Sexp.Atom "" in
      let fn = fun () -> (check sexp) name exp (f test.program) in
      (name, `Quick, fn)
    ) in
  let argv = Array.of_list ["--verbose"] in
  run s.title ~argv [("Suite", test_set)]

I wonder if it's related to how I'm calling test, using Command from a separate main.ml file, where Model is the module containing the bulk of the project code:

let test =
  Command.basic_spec
    ~summary:"Run the test cases from the given file"
    Command.Spec.(
      empty
      +> anon ("filename" %: file)
      ++ common )
    ( fun filename debug verbose () ->
        let open Test in
        let suite = from_filename filename in
        let runner = (fun s -> Model.of_string s |> Model.to_sexp ) in
        test ~f:runner suite )

Review the slow/quick distinction

It could be useful to have something more powerful, with custom tags and a simple query language. Also, on simple projects nobody cares about this feature, so we should not make it mandatory.

"Alcotest is the only testing framework using colors!" please show a screenshot?

Hi!

Your project seems mature and very nice, it's awesome to see tools like this in OCaml!

I only have a small remark: in the subtitle and the description of the project you advertise it by saying "Alcotest is the only testing framework using colors!", and yet we don't see colors in any of the examples presented in the description!
Including code for examples is nice, but it doesn't show the colors.
I would suggest including (at least) a nice screenshot showing the input/output of an example, with the colors!

Regards 😃.

Making Alcotest more asynchronous

alcotest-lwt currently calls Lwt_main.run for each test, in order to convert a Lwt test that is executed asynchronously into a synchronous one that completes before the function returns.

We would like to use alcotest-lwt, or similar modules, in Lwt and elsewhere, but Lwt_main.run is not available on Node.js or in browsers – there is no way to convert asynchronous code to synchronous by waiting on something.

We basically would need Alcotest to have fully-asynchronous tests, and to push Lwt_main.run to one place as far to the end of the test runner as possible. Ideally, Alcotest.run itself would return a promise.

Something similar had to be done to the Lwt tester recently, and it is what is preventing us from replacing it with Alcotest: ocsigen/lwt@5a27275#diff-fa96e02582bba62b4c7d8bac70d3e8e7.

cc @kennetpostigo

Example

It would be great to have a set of examples to make getting started easier.

Maybe a bunch of files in an examples folder.

I have never used alcotest, but I am thinking of something like this:

(* Define some function to test *)
(* The code *)

(* Define the tests *)
(* The code *)

(* Run the tests *)
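
Filling that skeleton in, a minimal self-contained example file might look like this (all names are illustrative):

(* Define some function to test *)
let double x = 2 * x

(* Define the tests *)
let test_double () =
  Alcotest.(check int) "double 2 is 4" 4 (double 2)

(* Run the tests *)
let () =
  Alcotest.run "example"
    [ ("double", [ ("doubles its input", `Quick, test_double) ]) ]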

Review how tests are registered

A few users have expressed the fact that listing all the tests is error-prone (you could easily forget to add a test if you don't have errors-as-warnings plus an empty mli to guide you).

Maybe we could expose a combinator, as crowbar does, where tests are automatically added to a global list of things to test.

Support junit.xml format

I want to use CircleCI's statistical features, which can tell me what tests are most flaky, etc. However, this requires a junit.xml formatted output, so it would be great if alcotest supported one.

It would be useful, when/if this is implemented, if we could still get console output while writing junit.xml output to a file.

Create a fresh log sub-dir for every run

This would be useful to compare various runs.

The sub-dir should be a timestamp or a UUID, and it should be displayed at the beginning of the run (so that even if a user presses CTRL-C, the location of the logs is still available to read).

How to execute "slow" tests?

(from mirage/mirage-tcpip#405 (comment))
with the current integration story, all I have to do is execute dune runtest, which builds and executes the tests (unless there was no modification).

but only the `Quick tests are executed by the above command. How can the `Slow tests be executed? One way is to execute the test binary itself (_build/default/test/test.exe), but @samoht suggested "you need to create a new test alias for this". What I don't understand is that when executing the test executable directly, it runs all the tests, while dune runtest only runs the `Quick ones; where is this decision/check made?
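
A sketch of such an extra alias, in the same stanza style used elsewhere in this thread (the exact stanza shape varies by dune version; treat as illustrative). dune build @slowtest would then run the binary directly, which executes all the tests:

(alias
 (name slowtest)
 (deps test.exe)
 (action (run ./test.exe)))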

Documentation Down

It seems the documentation is down. Has it, by any chance, been moved elsewhere?

When running from jbuilder, Alcotest shows old results

I've noticed that when I do e.g.

jbuilder build --dev test/test.bc
./_build/default/test/test.bc test core

Alcotest often displays two errors. It looks like one is from the run I just did, and the second is from a previous run. Any idea what could cause that?

Is there any way to check None or Some?

For example, I write a function, which return the last element of list.

let rec last = function
    | [] -> None
    | [elem] -> Some elem
    | hd :: tl -> last tl

And I want to write something like this:

Alcotest.(check (option None)) "Last from empty list" None (last [])

Can I make it using Alcotest?
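
For reference, option is a combinator that takes the testable for the element type, so the check can be written like this (a sketch, for an int list):

let test_last () =
  Alcotest.(check (option int)) "last of empty list" None (last []);
  Alcotest.(check (option int)) "last of [1; 2; 3]" (Some 3) (last [1; 2; 3])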

The time reported to run tests is wrong

It happens at least for lwt tests:

$ time _build/default/foo.exe
Testing foo.
[OK]                all          0   one.
The full test results are available in `_build/_tests`.
Test Successful in 0.001s. 1 test run.

real    0m10.024s
user    0m0.004s
sys     0m0.008s

Consider comparison functions

Being able to check comparisons, and have pretty-printing for failed cases, would be nice - something like:

Alcotest.(check (greater_than int)) "verify that y > x" x y

I can do the check with a boolean comparison like this (below), but then I won't get the values of x and y:

Alcotest.(check bool) "verify that y > x" true (y > x)

This could be solved either by making Alcotest.TESTABLE require a compare instead of equal (compare is already what many other things use, like Set and Map), which breaks backwards compatibility, or by supplying convenience functions besides check, which could take an optional ?compare, defaulting to polymorphic comparison, and wrap the pp.
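
A sketch of the convenience-function route: a wrapper testable whose "equality" is the comparison, reusing the base printer so failures still show both values (greater_than is a hypothetical helper):

(* Hypothetical helper: passes when actual > expected. *)
let greater_than (t : 'a Alcotest.testable) : 'a Alcotest.testable =
  Alcotest.testable (Alcotest.pp t) (fun expected actual -> actual > expected)

(* Usage (inside a test case), printing x and y on failure:
   Alcotest.(check (greater_than int)) "verify that y > x" x y *)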

Make alcotest compile to js

Hey, I wanted to inquire if there is any interest in making alcotest compile to js? If there is, then some points to make this possible:

  • Include a bs-native bsconfig.json in the repo (similar to the jbuild files)

  • when compiling for js, you must remove cmdliner as a dep, as well as fmt (its cli module depends on cmdliner), and include a CLI written in node.js in order to print to the console. The reason for removing cmdliner is that it depends on the stdlib Arg module, which I don't believe can be compiled to js.

accents change alignment

If the name of a test case contains n accents, the OK/error message is moved n characters to the right, losing its alignment with the other lines.

open Alcotest
let names = ["test de add";"test de add après ààààremove";"test de add"]
let tc name = name,  `Quick, fun () -> ()
let t_world = "tests de toto", List.map tc names
let _ = run "pipo" [t_world]

GitLab compatibility

Hello, not sure, where to ask, so created an issue here.

So, I have such output from dune for runtest on local:

[screenshot, 2019-04-18: dune runtest output locally, with colors]

And here you can see lots of colors

But here what I see on GitLab CI from same command:

[screenshot, 2019-04-18: the same command on GitLab CI, without colors]

As you can see, the terminal is in general able to show ANSI colors, and I force it to have TERM=linux. However, alcotest doesn't render colors there, and doesn't clean up the intermediate ... lines; it just duplicates them in the additional final test output.

What could be wrong here? Is this a known issue, and is there any workaround (e.g. like --no-buffer -j 1 for dune) to let alcotest print colors?

test name output longer than a terminal line looks strange

[OK]                RepositoryReleases               2   not authorised.
[OK]                RepositoryReleases               3   bad release.
 ...                RepositoryReleases               4   name mismatch (releases and authorisation
[OK]                RepositoryReleases               4   name mismatch (releases and authorisation
).

The 3rd line looks well formatted (cut before the line wrapping), whereas the 4th line is printed as a new line and includes a line wrap (for the trailing ).).

Add a more generic check_raises

The current one uses strict equality, but in some cases you really want to pattern-match on the exception, to decide whether the function raised something in the right family of errors.
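
A sketch of a predicate-based variant (check_raises_pred is a hypothetical name):

let check_raises_pred msg (pred : exn -> bool) (f : unit -> unit) =
  match f () with
  | () -> Alcotest.fail (msg ^ ": no exception was raised")
  | exception e ->
    if not (pred e) then
      Alcotest.fail
        (Printf.sprintf "%s: unexpected exception %s" msg (Printexc.to_string e))

(* Usage: accept any ENOENT-family Unix error.
   check_raises_pred "missing file"
     (function Unix.Unix_error (Unix.ENOENT, _, _) -> true | _ -> false)
     (fun () -> ignore (Unix.stat "/no/such/file")) *)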

run not exiting

Hello,
It would be cool to be able to execute several run functions, or to execute something after the run. For this, run should not exit. Perhaps we could have two run functions: one that exits and one that doesn't.
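
For reference, Alcotest.run takes an optional ?and_exit flag that covers this; a minimal sketch:

let trivial = [ ("s", [ ("passes", `Quick, fun () -> ()) ]) ]

let () =
  Alcotest.run ~and_exit:false "first suite" trivial;
  (* Control returns here instead of the process exiting. *)
  Alcotest.run "second suite" trivial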

Test output missing line numbers in case of error

Hello,
I am learning OCaml and I am used to TDD; for me, fast navigation from the test output to the failing test is of utmost importance. I took the simple example from the README and modified it to make it fail:

let capit () =
  Alcotest.(check char) "same chars"  'X' (To_test.capit 'a')

let capit2 () =
  Alcotest.(check char) "same chars"  'X' (To_test.capit 'a')

let test_set = [
  "Capitalize" , `Quick, capit;
  "Capitalize" , `Quick, capit2;
  "Add entries", `Slow , plus ;
]

Here I duplicated capit and capit2 on purpose, to better show the usability problem I see.

The test output is:

./test_simple.native
Testing My first test.
[ERROR]             test_set          0   Capitalize.
[ERROR]             test_set          1   Capitalize.
[OK]                test_set          2   Add entries.
The full test results are available in `_build/_tests`.
2 errors! in 0.000s. 3 tests run.

How am I supposed to find the failing test? By searching for the test description string "Capitalize", hoping that the string is unique?

Also if I run with the -e flag, I get:

./test_simple.native -e
Testing My first test.
[ERROR]             test_set          0   Capitalize.
[ERROR]             test_set          1   Capitalize.
[OK]                test_set          2   Add entries.
-- test_set.000 [Capitalize.] Failed --
in _build/_tests/test_set.000.output:
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ASSERT same chars
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[failure] Error same chars: expecting
X, got
A.

-- test_set.001 [Capitalize.] Failed --
in _build/_tests/test_set.001.output:
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
ASSERT same chars
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[failure] Error same chars: expecting
X, got
A.

The full test results are available in `_build/_tests`.
2 errors! in 0.001s. 3 tests run.

Here I have the same problem: the most I can do is search for the string "same chars" and hope it is unique.

To make a comparison with, for example, the google test C++ Unit Test library:

  • Test failures always carry the line number, so it is very easy to integrate editor or IDE navigation to the failing test.
  • There is no need for test registration (what is done in Alcotest with the test_set). In general, test registration is not only useless boilerplate, it is also error-prone because you can forget to register a test you wrote.
  • There is no need for a textual description of the test; the function name is used as the test name in the output. This not only allows for better DRY, it also allows the compiler to catch duplicated names.

Or am I missing something?

Examples incorporating alcotest-lwt

I'm not exactly sure how to use this package to test Lwt code. I tried looking at two other repos that depend on alcotest-lwt, but I couldn't quite figure out how it should be used. Is there any chance there could be some simple examples of how best to incorporate alcotest-lwt into our tests?
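
A minimal sketch of the usual wiring (assuming the Alcotest_lwt API, where a test case receives an Lwt_switch.t and unit and returns a unit Lwt.t):

open Lwt.Infix

let test_sleep _switch () =
  Lwt_unix.sleep 0.01 >|= fun () ->
  Alcotest.(check bool) "woke up" true true

let () =
  Lwt_main.run
    (Alcotest_lwt.run "lwt suite"
       [ ("timing", [ Alcotest_lwt.test_case "sleep" `Quick test_sleep ]) ])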

Tests which raise exceptions have output with -e but nothing in result file

-- Link.000 Failed --
link.
./_tests/Link.000.output:

[exception] Unix.Unix_error(Unix.ENOENT, "chdir", "link")
Raised by primitive operation at file "test.ml", line 49, characters 4-18
Re-raised at file "test.ml", line 55, characters 10-11
Called from file "test.ml", line 60, characters 17-93
Called from file "test.ml", line 73, characters 31-43
Called from file "test.ml", line 98, characters 2-77
Called from file "lib/alcotest.ml", line 246, characters 8-12

but the result file, _tests/Link.000.output is 0 bytes.

Test failures not printing expected/actual values

When I run simple assertions with built-in checkers such as int, I see output that tells me a failure occurred -- but nothing about the values involved in the failure. It looks to me like the implementation of check should be doing this, but it's not in version 0.4.2 (perhaps it has been fixed since then?).

Here's sample output:

[screenshot, 2015-10-29: failure output without the values involved]

It seems to me that check is throwing the exception, but somehow the message string is empty:

[screenshot, 2015-10-29: check raising an exception with an empty message string]

Misalignment of lines showing failed tests

With Alcotest 0.4.8, lines were always neatly aligned, regardless of whether the test had succeeded or failed. This is no longer the case with versions 0.4.9 and 0.4.10: lines containing failed tests are misaligned.

Release?

Maybe we could have a release soon with the backtrace functionality?

test output directory is confusing

since #125 / #137 was merged, the test output is located at _build/default/test/_build/_tests/UUID/<testname.txt>. This is very hard to navigate to.

In order to read the output of a failing test, I now first have to figure out the UUID of this run, which is sometimes hidden pages above the failing test case. Then I have to navigate to the directory, remembering to switch to a different directory again on the next test run.

I still don't understand the motivation for outputting into a randomly-generated directory, but if it is useful for others, what about creating a symbolic link when a test run starts(!!), e.g. _build/default/test/current pointing to _build/_tests/UUID/? Then there would at least be a canonical way to read the output!?

Test case names are used directly for filesystem interactions

Hello! I'm taking Alcotest for a spin, and my very first test case happened to be en-/decoding of values.

So I wrote up:

let test_int () =
  Alcotest.(check int) "same int" 1234 (1234)

let tests =
  [
    "integer en/decoding",
      ["check that ints are good", `Slow, test_int ];
  ]

let () =
  Alcotest.run "my test suite" tests

and it gave me this error:

$ topkg te
Testing my test suite.
 ...                integer en/decoding          0   check that ints are good.alcotest_lib.native: internal error, uncaught exception:
                     Unix.Unix_error(Unix.ENOENT, "open", "_build/_tests/integer en/decoding.000.output")
                     
pkg.ml: [ERROR] Some tests failed.

This turned out to be caused by the slash in the test case name. Changing it to integer en- and decoding made it work. It was a little bit counter-intuitive to me as a new user of the library, but I guess it's a corner case - wanted to let you know either way. I like all the colors! :-)

Testing floats no longer works

In addition to the API changing in 0.8.0, it seems it is no longer possible to compare two floats exactly. Maybe have separate float and float_approx, or use <= rather than <?
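
For reference, the float testable takes an epsilon argument; with the <= semantics suggested here, a zero tolerance recovers exact comparison (a sketch against the post-0.8.0 API):

let exact = Alcotest.float 0.0    (* |a - b| <= 0.0, i.e. exact equality *)
let approx = Alcotest.float 1e-6  (* tolerate small rounding error *)

(* Usage (inside a test case):
   Alcotest.(check exact) "exact" 1.0 1.0;
   Alcotest.(check approx) "close enough" 1.0 (1.0 +. 1e-9) *)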

output dir broken on Windows/Cygwin

On Windows/Cygwin (mingw-toolchain) I get an error when running tests with the default output dir:

test_runner.exe: internal error, uncaught exception:
                 Unix.Unix_error(12, "mkdir", "./C:\\Users\\<..>\\_build\\default\\test\\_build")

Testing Benchmark.
This run has ID `A376A064-4F4C-4A56-91A7-0DFEE54FAEB7`.

It looks like there is a mix-up between relative and absolute paths, together with the (usual) Windows path separator \ issue. test_runner --help also reveals a broken path as the default for -o. Calling test_runner manually with -o and a correct path works as expected.

Since 0.8.3 works correctly, I suspect that 32690c6 has introduced this regression (most probably by calling Sys.getcwd). But instead of fixing it there, it might be better to make the mkdir_p function in alcotest.ml platform-dependent.

Expose `assert` functions

Would be great to not rely on OUnit at all and define a few assert functions. I have already redefined them anyway in most of my unit tests, see https://github.com/mirage/irmin/blob/master/lib_test/test_common.ml#L65

I quite like the following design:

module type Testable = sig
  type t
  val pp: Format.formatter -> t -> unit
  val (=): t -> t -> bool
end

type 'a testable = (module Testable with type t = 'a)  

val assert_equal: 'a testable -> string -> 'a -> 'a -> unit
val assert_....

Tips for running tests from CI?

I'm running tests in CI and I want

  • the job to fail if any tests fail
  • the logs of the failing test (and probably the previous tests) to be easily accessible

I'm currently running my_test_binary, which gives me a nice summary view. If a test fails then so does the CI job, but I don't get any logs.

I tried running test_binary -ev and I do see logs, but I can't tell which test generated which logs. Unfortunately many of my tests look the same :( The failures are in the output files, but I can't get those because the build failed (I think it's the case that a failed build means no artefact upload, at least on AppVeyor).

I might wrap my tests in a shell script which runs each test case separately with -ev, to keep the logs apart. I think a proper fix would be to explicitly log the name of a test when it starts (and probably when it finishes).

WDYT?

Passing file inputs for test generation

For example, if you have some binaries in the test/samples directory and need to pass that path to the tests, you might have code like:

let mytest filename =
  (fun () -> Alcotest.(check string) "some" "string1" "string2")

let testset path =
  List.map ~f:(fun l -> l, `Slow, mytest l)
    (Array.to_list (Sys.readdir path))

let get_tests_dir =
     let doc = "path to the tests directory" in
     Cmdliner.Arg.(required & opt (some string) None & info ["p"] ~doc ~docv:"PATH")

let () =
     Alcotest.run_with_args "my shiny tests" get_tests_dir [
        "testset1", testset;
     ]

But that wouldn't work, because Alcotest passes the argument to each checker function separately in the test list. Is it possible to work around this somehow, so that every file in test/samples would be a separate test, like filename, `Slow, mytest filename?
I am going to pass that path in the dune file:

(alias
 (name   runtest)
 (action (run ./tests.exe "test/samples")))

How do I make alcotest print oUnit failures in detail?

In oUnit, whenever there's a failure and a ~printer is provided, the expected and received values will both be printed.

The most I seem to get in alcotest is this not-so-helpful stack trace when I run the tests with -e:

-- Parse URI.  9 Failed --
empty.
./_tests/Parse URI.  9.output:

[exception] OUnitTest.OUnit_failure("parse_request_uri_empty unexpected request parse result")
Raised at file "src/oUnitAssert.ml", line 45, characters 8-27
Called from file "lib/alcotest.ml", line 250, characters 8-12

same_string failure is hard to read/debug

Hi,

When dealing with strings starting/ending with spaces or new lines, the output is hard to read. E.g.

[failure] Error same string: expecting    blablabla

blablabla

    , got      blablabla

blablabla

    
.

It would be great to change the output to something like:

[failure] Error same string:
expecting:
---
    blablabla

blablabla

    
---
got:
---
      blablabla

blablabla

    
---
