pgrange / bash_unit
bash unit testing enterprise edition framework for professionals
License: GNU General Public License v3.0
First of all, thank you for this awesome framework!
I would like to point out that faking some of the coreutils commands may result in missing output.
I was testing a bash script but I didn't want it to use the file system (at least not in all cases), so I faked some of the standard utils (within the test case) like this:
fake cat :
fake sed :
And it broke the output for that test entirely (it didn't show up). After looking at the code that started the assert with set -x enabled, my guess is that not all commands worked as intended afterwards. fake seemed so effective that even the assert didn't use the correct binaries. Here is a simple test case that reproduces the bug:
test_case_fake(){
fake cat :
fake sed :
assert "true"
}
You will not even see a mention of this test case.
I am not that familiar with Bash, nor with the code of the framework, but maybe we could alias the binaries that are used within the assert so they are not mistakenly faked within the test case? Or just warn in the docs that fake cannot be used with all commands?
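A possible workaround, sketched below under the assumption that fake installs a shell function named after the faked command (which is how bash_unit implements fakes): unset that function before calling any assertion, so bash_unit's own helpers see the real binaries again. The function names here are made up for the example.

```shell
# Sketch of a workaround (assumes fake defines a shell function named
# after the faked command, as bash_unit does).

cat() { echo "faked"; }            # stand-in for: fake cat "echo faked"

under_test() { printf 'real data\n' | cat; }

result=$(under_test)               # runs with the fake in place

unset -f cat                       # drop the fake before asserting,
                                   # so the real cat binary is used again

echo "$result"                     # prints: faked
printf 'ok\n' | cat                # prints: ok (real cat is back)
```

This keeps the fake scoped to the code under test, at the cost of remembering to unset it before the assertions.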
Cheers,
Jakub
This line:
find $tmp_dir -type f -name "bash_unit" | xargs cp -t $current_working_dir
doesn't work because on macOS cp doesn't support -t:
SYNOPSIS
cp [-R [-H | -L | -P]] [-fi | -n] [-apvX] source_file target_file
cp [-R [-H | -L | -P]] [-fi | -n] [-apvX] source_file ... target_directory
Why not just use the -exec option of find instead?
PR is on the way.
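The portable rewrite could look like this (same variable names as the snippet above; the temp directories are created here only to make the sketch self-contained):

```shell
# Portable alternative to `xargs cp -t` (which needs GNU cp): let find
# invoke cp once per file, with the destination given explicitly.
tmp_dir=$(mktemp -d)
current_working_dir=$(mktemp -d)
touch "$tmp_dir/bash_unit"          # dummy file for the sketch

find "$tmp_dir" -type f -name "bash_unit" \
     -exec cp {} "$current_working_dir" \;

ls "$current_working_dir"           # prints: bash_unit
```

find's -exec is specified by POSIX, so this works with both GNU and BSD/macOS userlands.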
It could be useful to be able to write pending tests.
Use cases:
Example of usage:
# written as
test_we_need_to_think_about() {
pending "This needs to be implemented"
}
test_is_not_working() {
assert "./script/going/wrong"
pending "Investigating"
}
# displayed as
Running test_we_need_to_think_about... PENDING
This needs to be implemented
Running test_is_not_working... PENDING
Investigating
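A minimal sketch of what such a pending helper could look like (hypothetical, not part of bash_unit; a real implementation would also need to short-circuit the remaining assertions and report a distinct status):

```shell
# Hypothetical pending() helper: report a test as pending without failing it.
pending() {
  echo "PENDING ${1:+- $1}"
}

test_we_need_to_think_about() {
  pending "This needs to be implemented"
  return 0   # a real implementation would skip the remaining assertions
}

test_we_need_to_think_about   # prints: PENDING - This needs to be implemented
```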
Add a getting started sample code.
I've just discovered bash_unit and it's great!
However, I have a problem: how do I manage stateful variables that keep their value across tests?
For instance, this isn't working:
echo "Code outside functions"
# Nor does this work: export CTR=1
function setup_suite () {
echo "All tests begin here"
CTR=1
}
function teardown_suite () {
echo "That's all, Folks!"
}
function setup () {
CTR=$(($CTR+1))
echo "--- Test Count $CTR"
}
function test_ctr () {
echo "CTR is $CTR"
assert_equals 2 $CTR "Wrong test counter!"
}
function test_ctr_1 () {
echo "CTR is $CTR"
assert_equals 3 $CTR "Wrong test counter!"
}
Results are:
$ bash_unit test_globals.sh
Running tests in test_globals.sh
Code outside functions
All tests begin here
--- Test Count 2
Running test_ctr... CTR is 2
SUCCESS
--- Test Count 2
Running test_ctr_1... CTR is 2
FAILURE
Wrong test counter!
expected [3] but was [2]
test_globals.sh:25:test_ctr_1()
That's all, Folks!
$
What I would like is to see CTR count the tests. I know interfering tests are bad practice, but there are cases like this (keeping a counter of things done, keeping a timestamp of the last operation, etc.) where it would be good to have them working somehow (some set_env function managed by bash_unit?). The only way I've found so far is to use temp files to store the values of variables like CTR, which is not exactly practical.
Note that this can be done with shunit2 (but I prefer bash_unit, mainly because it supports the TAP format).
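The temp-file workaround the author mentions can be sketched like this (each test runs in its own subshell, so plain variables don't survive between tests, but a file does; helper names are made up):

```shell
# Persist state in a file so it survives the subshell each test runs in.
STATE_FILE=$(mktemp)
echo 0 > "$STATE_FILE"

get_ctr() { cat "$STATE_FILE"; }
inc_ctr() { echo $(( $(get_ctr) + 1 )) > "$STATE_FILE"; }

inc_ctr   # e.g. called from setup()
inc_ctr
echo "CTR is $(get_ctr)"   # prints: CTR is 2
```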
# test_success_return_1.sh
function test_success_return_1() {
assert true
}
$ ./bash_unit test_success_return_1.sh ; echo $?
Running tests in test_success_return_1.sh
Running test_success_return_1... SUCCESS
**1**
A workaround is to add a teardown function.
I'm testing a timestamp that's been inserted in the db, but don't have a good way to "freeze time". So, I need to insert the record, get the current epoch time (presumably from date), and then compare the two, but the second may have changed between the creation and the acquisition of the new date.
So, I want the ability to assert within a delta.
There are, one presumes, many cases where you know a good approximation of a correct value but not the exact value. Just like the need for regexp when you don't know the exact string that will come back, but do know its form.
assert_within_delta <expected num> <actual num> <max delta>
Given an expectation of 5 and a delta of 2 this would match 3, 4, 5, 6, and 7
There should probably be positive and negative variants.
assert_within_positive_delta <expected_num> <actual_num> <max_delta>
Given an expectation of 5 and a delta of 2 this would match 5, 6, and 7.
assert_within_negative_delta <expected_num> <actual_num> <max_delta>
Given an expectation of 5 and a delta of 2 this would match 3, 4, and 5.
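A sketch of what the proposed assertion could look like (hypothetical; a real version would call bash_unit's fail instead of echoing and returning):

```shell
# Hypothetical assert_within_delta: pass when |expected - actual| <= max_delta.
assert_within_delta() {
  local expected=$1 actual=$2 max_delta=$3
  local delta=$(( expected - actual ))
  [ "$delta" -lt 0 ] && delta=$(( -delta ))
  if [ "$delta" -gt "$max_delta" ]; then
    echo "expected [$expected] +/- [$max_delta] but was [$actual]" >&2
    return 1
  fi
}

assert_within_delta 5 7 2 && echo "7 is within the delta"
assert_within_delta 5 8 2 || echo "8 is not"
```

The positive and negative variants would simply drop the absolute value and check the sign of the difference.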
cat >test_wont_fail <<EOF
test_should_fail() {
assert false
assert true
}
EOF
What I see:
./bash_unit test_wont_fail
Running tests in test_wont_fail
Running test_should_fail... FAILURE
test_wont_fail:2:test_should_fail()
SUCCESS
echo $?
0
What I want is:
./bash_unit test_wont_fail
Running tests in test_wont_fail
Running test_should_fail... FAILURE
test_wont_fail:2:test_should_fail()
echo $?
1
Is it possible for a new version to be released, based on the latest changes?
Hi!
Is it possible to add a mechanism (however it might look) to bash_unit to skip tests if a condition is not fulfilled?
Maybe something like this:
skip_if "test_name" "condition_function"
If condition_function returns a non-zero status, then test_name will be skipped and marked as skipped.
Reason why I need this feature:
I have tests that depend on the system environment. If I run the tests in a prepared system environment all is fine, but if somebody else wants to run the tests in their own system environment I want to skip critical tests to avoid false positives.
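In the meantime, such a skip mechanism can be approximated in user land; here is a sketch (all helper names are made up, none of this is bash_unit API):

```shell
# Hypothetical skip helper: run the test body only if the condition holds.
skip_unless() {
  "$@" && return 0
  echo "SKIPPED: $*"
  return 1
}

env_is_prepared() { [ -n "${PREPARED_ENV:-}" ]; }

test_needs_prepared_env() {
  skip_unless env_is_prepared || return 0   # reported skipped, not failed
  echo "running the real assertions"
}

test_needs_prepared_env            # prints: SKIPPED: env_is_prepared
PREPARED_ENV=1
test_needs_prepared_env            # prints: running the real assertions
```

The drawback is that the test still counts as a success in the report, which is why first-class support in bash_unit would be nicer.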
When setup_suite fails for some reason, bash_unit stops and teardown_suite is never run. This is unfortunate, because I expect teardown_suite to clean things up so that the tests can be launched again.
The TAP plan is missing at the beginning of the output
See: http://testanything.org/tap-specification.html#the-plan
If you agree with that I can work on a PR.
I need this because the Jenkins TAP plugin generates an exception and can't use the report.
I implemented this one locally. It may help other people writing regex-matching tests:
assert_matches() {
local expected=$1
local actual=$2
local message=${3:-}
[[ -z $message ]] || message="$message\n"
if [[ ! "${actual}" =~ ${expected} ]]; then
fail "$message expected regex [$expected] to match [$actual]"
fi
}
test_admin_can_execute_sudo_without_password(){
run_ansible
assert "grep 'admin ALL=(ALL) NOPASSWD: ALL' /etc/sudoers.d/admin" "admin is not sudoer"
}
The optional message in the assert method should print stderr on a new line:
admin is not sudoer grep: /etc/sudoers.d/admin: No such file or directory
instead of
admin is not sudoergrep: /etc/sudoers.d/admin: No such file or directory
I would like to not run all tests of a test file but only specific ones.
./bash_unit test_* --case test_my_test
bash_unit [--case ]* <test_file>*
It may be nice to have a little bit of documentation...
I myself tend to use the MIT Licence, but I think that your selection of GNUv3 makes this unusable in an enterprise environment. Can you reassure me?
K.
bash_unit currently outputs in its own human-readable format. It would be great if it could output in TAP http://testanything.org/ (a machine-readable but also reasonably human-readable test result format used by Perl, GLib, etc.), either as an option or all the time, so that "larger" testing systems like GNOME installed-tests or Perl's prove tool could test bash_unit scripts alongside ELF binaries, Perl scripts and so on.
The usual convention is for TAP test scripts to be executables that you run, outputting machine-readable results on stdout. If bash_unit had TAP output, I think this would work if you write the #! line like this:
#!/usr/bin/env bash_unit
test_some_stuff () {
fail "doesn't work"
}
I would like a quiet mode so I see only the test name + status, not detailed results.
When writing a test that needs to access some file, for instance the script file under test, it is hard to figure out a path that just works, wherever the test is run from.
Currently we can only rely on `dirname $0` but this is the path of bash_unit itself.
It would be nice if bash_unit changed the current working directory to match the directory of the currently running test file. This way all paths could be written relative to the test file directory.
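Until something like that exists, a common sketch is to derive the directory from BASH_SOURCE inside the test file itself (bash-specific; BASH_SOURCE[0] points at the sourced test file, while $0 is bash_unit):

```shell
# Resolve the directory of the file this code lives in (not of bash_unit).
test_dir=$(cd "$(dirname "${BASH_SOURCE[0]:-$0}")" && pwd)

echo "test dir: $test_dir"
# Paths in the tests can now be written relative to $test_dir.
ls "$test_dir" >/dev/null
```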
Propose a way to directly test output in a test.
There is not much documentation about setup, teardown, and todo tests.
Hi,
There is actually no tag for the current master branch version. It is said to be v1.8.0 in bash_unit itself but there is no such tag in your repo, and therefore no release.
I maintain the bash_unit package for Archlinux (on AUR) and cannot provide this new version, as I rely on tags to have matching versions between bash_unit and the package.
Could you please create a tag for this version ?
Thanks
pascal@pportable:~/bash_unit$ ./bash_unit tests/test_doc.sh
Running tests in tests/test_doc.sh
Running test_block_1... SUCCESS
Running test_block_10... SUCCESS
Running test_block_11... SUCCESS
Running test_block_12... SUCCESS
Running test_block_13... SUCCESS
Running test_block_14... SUCCESS
Running test_block_15... SUCCESS
Running test_block_16... SUCCESS
Running test_block_17... FAILURE
out> --- /tmp/19390/expected_output17 2018-10-19 08:40:34.894673983 +0200
out> +++ /tmp/19390/test_output17 2018-10-19 08:40:34.886673909 +0200
out> @@ -1 +1 @@
out> -bash: line 1: _ps: command not found
out> +environment: line 1: _ps: command not found
test_doc.sh:29:test_block_17()
Running test_block_18... SUCCESS
Running test_block_19... SUCCESS
Running test_block_2... SUCCESS
Running test_block_20... SUCCESS
Running test_block_21... SUCCESS
Running test_block_22... SUCCESS
Running test_block_23... SUCCESS
Running test_block_3... SUCCESS
Running test_block_4... SUCCESS
Running test_block_5... SUCCESS
Running test_block_6... SUCCESS
Running test_block_7... SUCCESS
Running test_block_8... SUCCESS
Running test_block_9... SUCCESS
Great work on bash_unit! I've been using it for about six months and have made a few minor tweaks. Patch attached for your consideration.
Summary of changes:
Hello!
First of all, thank you for creating bash_unit! I'm looking forward to learning how to use it. Presently I get the above error when I try to run any of the tests included in the repo. Maybe I am doing something wrong? I am on a Windows 10 laptop running Debian on WSL, so could that be the issue?
Regardless of whether I try to run one of the suites, test_core.sh, or all of the suites under tests/, I get the same error message.
Any advice would be appreciated. Thank you!
I am trying to write a test that makes the following assertions:
- run calls the function fetch_data with the expected argument (fetch_data is replaced using fake).
- The output of run is as expected.
Below is the script:
#!/usr/bin/env bash
fetch_data() {
cat "$1"
}
run() {
data=$(fetch_data "$1")
echo "data: >>>${data}<<<" >&2
amount=$(echo "$data" | jq length)
if [ "$amount" -gt 5 ]; then
echo "foo"
else
echo "bar"
fi
}
test_assertion_in_fake_affects_output() {
function _fetch_data() {
assert_equals "sample.json" "${FAKE_PARAMS[0]}" "Given wrong argument"
echo '[]'
}
export -f _fetch_data
fake fetch_data _fetch_data
assert_equals "bar" "$(run "wrong.json")" "Got wrong result"
}
When running bash_unit I get the following output:
$ bash_unit issue.test.sh
Running tests in issue.test.sh
Running test_assertion_in_fake_affects_output ... data: >>>FAILURE
Given wrong argument
expected [sample.json] but was [wrong.json]
issue.test.sh:20:_fetch_data()
issue.test.sh:8:run()
issue.test.sh:26:test_assertion_in_fake_affects_output()<<<
parse error: Invalid numeric literal at line 2, column 0
issue.test.sh: line 11: [: : integer expression expected
SUCCESS
Overall result: SUCCESS
For debugging purposes, the run function prints the returned data. As can be seen in the output above, it contains the output of the assertion inside the fake.
After the assertion fails, the script continues:
- jq length returns an empty result
- [ "$amount" -gt 5 ] fails (integer expression expected)
- bash_unit marks the test a success
I tried to make the script more strict by adding
set -o errexit
shopt -s inherit_errexit
but this didn't help.
I also tried putting the echo inside the fake function above the assertion and writing the output of the assertion to stderr:
function _fetch_data() {
echo '[]' # Above the assertion
assert_equals "sample.json" "${FAKE_PARAMS[0]}" "Given wrong argument" >&2 # Write to stderr
}
This gives better results:
$ bash_unit issue.test.sh
Running tests in issue.test.sh
Running test_assertion_in_fake_affects_output ... FAILURE
Given wrong argument
expected [sample.json] but was [wrong.json]
issue.test.sh:26:_fetch_data()
issue.test.sh:13:run()
issue.test.sh:31:test_assertion_in_fake_affects_output()
data: >>>[]<<<
SUCCESS
Overall result: SUCCESS
The echo above the assertion ensures the fake returns valid output. Writing the output of assert_equals to stderr ensures it is not part of the data variable. Also, bash_unit properly prints the failed assertion on the given argument. However, in the end the test is still a success and bash_unit exits with status code 0.
Is it possible for a test to fail immediately after an assertion inside a fake fails?
I am using bash_unit version v2.1.0.
When using quotes inside a here doc in the context of fake, the quote disappears from the output of the faked function. This is different from what is expected when using a here doc.
Here is an example:
test_should_succeed_but_will_fail() {
fake toto <<EOF
"test"
EOF
assert 'toto | grep \"' 'should have found quote but seems like fake swallowed it'
}
test_will_succeed() {
fake toto <<EOF
\"test\"
EOF
assert 'toto | grep \"'
}
Commands which contain hyphens, e.g. ssh-add, cannot be mocked with fake() because fake substitutes the command with a shell function, and function names are not allowed to contain this character.
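Until fake supports this, one workaround sketch is to shadow the hyphenated command with a stub script placed first on PATH (the stub's output here is made up for the example):

```shell
# Function names can't contain '-', but executable names can: mock ssh-add
# with a throwaway script on PATH instead of a shell function.
stub_dir=$(mktemp -d)
cat > "$stub_dir/ssh-add" <<'EOF'
#!/bin/sh
echo "fake ssh-add called with: $@"
EOF
chmod +x "$stub_dir/ssh-add"
PATH="$stub_dir:$PATH"

ssh-add -l   # prints: fake ssh-add called with: -l
```

Unlike fake, this only intercepts invocations that go through PATH lookup, not calls using an absolute path.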
Would it be possible to have setup and teardown functions associated with each test function?
AFAIK you currently have a setup function that is executed before each test and a teardown function executed after each test, but the setup and teardown functions are common to all tests.
This way you have to include the cleanup (or setup) for all tests in the same function, even when certain cleanup (or setup) is not needed for certain tests.
As an example, suppose I have two tests, one requiring a file to be created and then deleted, the other requiring nothing:
test_test1 () {
assert_equals 1 1
}
test_test2 () {
F=$(mktemp)
assert_status_code 0 "test -f $F"
rm $F
}
The problem with this pattern is that if the exit code of the assertion is not 0, the cleanup (here, the rm $F) is never executed.
The right way to do this is using setup and teardown functions, but since those functions are common to all tests you end up setting up or cleaning things not needed for certain tests:
setup () {
export F=$(mktemp)
}
teardown () {
[ -f $F ] && rm $F
}
But since setup is executed for all tests, it will create a file even for test 1, which doesn't need it, and teardown will delete a file that never needed to be created or deleted in the first place.
This way I'm forced to take into account all cases in setup and teardown. For example, if I add another test checking for directory existence, I would need a new variable, and each test would have to know which one to use:
setup () {
export F=$(mktemp)
export D=$(mktemp -d)
}
teardown () {
[ -f $F ] && rm $F
[ -d $D ] && rm -rf $D
}
test_test1 () {
assert_equals 1 1
}
test_test2 () {
assert_status_code 0 "test -f $F"
}
test_test3 () {
assert_status_code 0 "test -d $D"
}
And everything gets bloated quickly.
This would be solved if we had a setup and teardown function for each test function, maybe using a pattern such as "setup_$testname" and "teardown_$testname", for example:
setup_test2 () {
export F=$(mktemp)
}
setup_test3 () {
export F=$(mktemp -d)
}
teardown_test2 () {
[ -f $F ] && rm $F
}
teardown_test3 () {
[ -d $F ] && rm -rf $F
}
test_test1 () {
assert_equals 1 1
}
test_test2 () {
assert_status_code 0 "test -f $F"
}
test_test3 () {
assert_status_code 0 "test -d $F"
}
In short, what I am asking for is setup and teardown functions associated with each individual test, similar to @BeforeEach and @AfterEach in JUnit (the current setup and teardown would be similar to @BeforeAll and @AfterAll in JUnit).
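The naming convention above could be dispatched with a tiny lookup, sketched here (hypothetical; run_hook is not a bash_unit function):

```shell
# Call setup_<testname> if such a function is defined; otherwise do nothing.
run_hook() {
  local hook="$1_$2"
  if type "$hook" >/dev/null 2>&1; then
    "$hook"
  fi
}

setup_test2() { echo "setup for test2"; }

run_hook setup test1   # no setup_test1 defined: silently skipped
run_hook setup test2   # prints: setup for test2
```

bash_unit could call run_hook setup "$test" before each test and run_hook teardown "$test" after it, falling back to the existing global setup/teardown.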
The global variable FAKE_PARAMS stores all the parameters as a single string, so you cannot know how many parameters were passed to the original command/function if one of the parameters contains spaces.
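This is the classic "$*" vs "$@" distinction, illustrated below: flattening to one string loses argument boundaries, while an array-style expansion preserves them.

```shell
# "$*" flattens all arguments into one string; "$@" preserves boundaries.
show_args() {
  echo "count: $#"
  for a in "$@"; do
    echo "arg: [$a]"
  done
}

show_args "one two" three
# prints:
# count: 2
# arg: [one two]
# arg: [three]
```

If FAKE_PARAMS were populated as a bash array ("${FAKE_PARAMS[@]}"-style), fakes could recover the exact argument list.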
A user reporting:
I recently ran into some compatibility issues when trying to apply bash_unit on a system that only runs Bash 3 (on a SuSE 11 SP 4). I found the tagged revision v1.1.0 (without the Unicode stuff with \u prints, not supported before Bash 4) a good starting point. The behaviour of 'set -e' changed in Bash 4, causing tests to abruptly die on Bash 3 if a subshell exited with non-zero. I added a check for the Bash version before using 'set -e', and with this change bash_unit seems to run without problems on Bash 3 as well.
We could at least document that bash_unit currently only supports Bash >= 4.
What are the advantages of this library compared to the excellent https://github.com/arpinum/shebang_unit ?
:D
$ cat test_chelou
test_version_installed() {
version_found=0
[ -x /opt/logstash/bin/logstash ] && version_found=1 && assert_equals "$(/opt/logstash/bin/logstash --version)" "logstash 2.3.4"
[ -x /usr/share/logstash/bin/logstash ] && version_found=1 && assert_equals "$(/usr/share/logstash/bin/logstash --version)" "logstash 5.6.0"
[ "$version_found" -eq 0 ] && fail "no version found"
}
Logstash not installed, works as expected
$ ./bash_unit test_chelou
Running tests in test_chelou
Running test_version_installed... FAILURE
no version found
test_chelou:5:test_version_installed()
Fake logstash
$ sudo mkdir -p /usr/share/logstash/bin/
$ sudo bash -c 'echo "echo logstash 5.6.0" > /usr/share/logstash/bin/logstash'
$ sudo chmod +x /usr/share/logstash/bin/logstash
$ ./bash_unit test_chelou
Running tests in test_chelou
Running test_version_installed...
Expected result
$ ./bash_unit test_chelou
Running tests in test_chelou
Running test_version_installed... SUCCESS
I am running bash_unit with travis-ci and I inadvertently introduced a syntax error into one of my test scripts. The script failed to run but the build succeeded. The failure should have been caught.
In Travis:
script:
- make test
- ./bash_unit tests/test_*
Logs:
$ ./bash_unit tests/test_*
Running tests in tests/test_wait_for_job_success
test_wait_for_job_success: line 22: syntax error near unexpected token 'done'
test_wait_for_job_success: line 22: 'done'
The command "./bash_unit tests/test_*" exited with 0.
What is the licence of bash_unit? Can it be used in a company context without being forced to publish source code (of the tests or the scripts under test)?
Now that work is in progress to package bash_unit, it needs its corresponding man page.
Just curious whether this is possible with bash or not ...
Hi!
I noticed a small glitch when I use set -e in my code. For example:
#!/usr/bin/env bash
set -euo pipefail
function hello() {
echo "something" | grep "else" # this will exit at this point, because this line finishes with a non-zero exit status
echo "hello"
}
# Make sure that I can use source in the test
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
hello
fi
I have the following test:
function test_1() {
...
}
function test_fail_due_to_e() {
source hello.sh
result=$(hello)
assert_equals 0 $?
assert_equals "hello" "${result}"
}
function test_3() {
...
}
Executing these tests fails with an error code, which is expected, but the report is a bit scrambled as there's no status indicator for the test that fails due to -e:
Running test_1... SUCCESS
Running test_fail_due_to_e... Running test_3... SUCCESS
Also, if I extract the sourcing from the test_fail_due_to_e function, like:
source hello.sh
function test_1() {
...
}
function test_fail_due_to_e() {
result=$(hello)
assert_equals 0 $?
assert_equals "hello" "${result}"
}
function test_3() {
...
}
Then the whole test file will be missing from the report.
Would it be possible to include such failures in the report?
Thanks,
David
If you want to say something nice about bash_unit but can not think of a better place right now, just add a comment to this issue.
If you want to blame bash_unit for any reason, just open another issue to describe your... issue.
In readme.adoc:
assert_fail
assert_fail [message]
Asserts that assertion fails. This is the opposite of assert.
assertion fails if its evaluation returns a status code different from 0.
It should rather read:
assertion fails if its evaluation returns a status code equal to 0.
Can you confirm this behavior?
From apt update
:
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://pgrange.github.io/bash-unit_deb/debian unstable/ Release: The following signatures were invalid: EXPKEYSIG 5EB2C3CEC9B473E3 bash-unit (bash-unit signing key) <[email protected]>
W: Failed to fetch https://pgrange.github.io/bash-unit_deb/debian/unstable/Release.gpg The following signatures were invalid: EXPKEYSIG 5EB2C3CEC9B473E3 bash-unit (bash-unit signing key) <[email protected]>
And from reading the key from https://pgrange.github.io/bash-unit_deb/keys.asc
pub rsa4096 2017-09-22 [SC]
DAE44E9F6F027E87768CBF985DDCF7B36B76543D
uid bash-unit (bash-unit signing key) <[email protected]>
sub rsa4096 2017-09-22 [E]
sub rsa4096 2017-09-25 [S] [expired: 2019-10-09]
All releases of bash_unit (up to 1.3.0) display "snapshot" instead of the appropriate version number.
bash_unit-1.3.0$ ./bash_unit -v
bash_unit snapshot
Assert function names should either be infinitive+infinitive (e.g. assert_fail) or noun+conjugated verb (e.g. assert_fails). It is currently mixed (assert_fail, but assert_equals), which leads to errors. As we cannot settle on one or the other without breaking backwards compatibility, I propose to have both.
sudo apt-get update
...
Ign:9 https://pgrange.github.io/bash-unit_deb/debian unstable/ InRelease
...
Reading package lists... Done
W: https://pgrange.github.io/bash-unit_deb/debian/unstable/Release.gpg: Signature by key 04F0EDE844608BF21561DEC95EB2C3CEC9B473E3 uses weak digest algorithm (SHA1)
Not sure whether this is a bug or a misuse (aka "feature"):
Let's consider a function returning a string containing space(s): "hello world".
I'd like to test this function and make sure I actually receive the full function output.
However, what's received by the testing facility is the string up to the space character ("hello"), so the test fails.
You can reproduce this with
./bash_unit tests/test_spaces.sh
from https://github.com/Httqm/bash_unit/tree/spaces
Maybe we should test the test framework.
Due to the exit 1 in the fail() function, tests are aborted when an assertion fails. In such a case, the teardown() function is not called, which potentially leaves a mess behind. I would expect teardown() to always be called, independently of the test's success.
I made a quick attempt to change exit 1 to return 1 but that did not work, so I have not investigated further for the moment.
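One way this could be guaranteed is with a trap, sketched below: an EXIT trap set inside the subshell that runs the test fires even when fail() calls exit 1 (function names mimic the report, but this is not bash_unit's actual code).

```shell
# Sketch: run the test in a subshell with an EXIT trap, so teardown runs
# whether the test returns normally or dies via fail's `exit 1`.
fail() { echo "FAILURE"; exit 1; }
teardown() { echo "teardown ran"; }

out=$(
  trap teardown EXIT
  fail           # aborts the subshell, but the trap still fires
)
echo "$out"
# prints:
# FAILURE
# teardown ran
```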