Comments (6)
> Consequently, if you have a "long-running" process (where long-running means `min_time / min_runs = 300 ms` by default), it will always run exactly `min_runs` times.
That is true, but just because there is a way to do it doesn't mean there shouldn't be a better/easier way. Doing it like this means that someone who wants to run a benchmark exactly n times (which I would say is a very common scenario) has to learn a few things. Even if those things are documented, they will have to:

1. look up the documentation to learn that the default number of iterations is `max(3.0 / T_1, 10)`
2. realize that if their program takes more than `3.0 / 10` seconds per run, it will run exactly `min_runs` times
3. use this knowledge to come up with a somewhat hacky way of running exactly n times by passing `--min-runs <n>`

These may all be simple logical steps, but it still makes it unreasonably harder than it needs to be to achieve this simple, common task.
There is also another argument: currently, the default number of iterations is always `max(3.0 / T_1, 10)`. I would argue that in the future it would make a lot of sense to also have other stop criteria. For instance, I would really like to be able to say "please run this until the standard deviation is below x% of the mean time" (or something similar). In that case `--runs` would not make sense, but `--max-runs` would be useful to stop the benchmark if it is not able to get the stddev below the requested value.
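Such a stop criterion could look roughly like the following. This is a hypothetical sketch, not hyperfine's API: `run_until_stable`, `measure`, and all parameter names are made up for illustration.

```rust
// Hypothetical sketch of the proposed stop criterion: keep running until
// the relative standard deviation drops below a threshold, but never
// exceed max_runs. `measure` stands in for one benchmark run.
fn run_until_stable(
    mut measure: impl FnMut() -> f64,
    rel_stddev_target: f64, // e.g. 0.02 for "stddev below 2% of the mean"
    min_runs: usize,
    max_runs: usize,
) -> Vec<f64> {
    let mut times = Vec::new();
    while times.len() < max_runs {
        times.push(measure());
        if times.len() >= min_runs {
            let n = times.len() as f64;
            let mean = times.iter().sum::<f64>() / n;
            let var = times.iter().map(|t| (t - mean).powi(2)).sum::<f64>() / n;
            if var.sqrt() / mean < rel_stddev_target {
                break; // measurements are stable enough
            }
        }
    }
    times
}

fn main() {
    // Constant measurements: stddev is zero, so we stop right at min_runs.
    let times = run_until_stable(|| 1.0, 0.02, 10, 100);
    println!("stopped after {} runs", times.len());
}
```

Under this scheme `--max-runs` acts as a safety cap for noisy workloads that never reach the stddev target, which is exactly the role described above.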
from hyperfine.
Released in v1.3.0.
Thank you for the feedback!
This sounds reasonable, but I would like to know what the use-cases for these options are.
I think it is a fundamental option to be able to say "run this benchmark exactly n times".
To give a concrete example: this is useful if you want to run benchmarks that take a long time. For instance, recently I was benchmarking rustc by compiling itself. Each run would take ~50 minutes. I want to be able to say "run exactly 5 times" (or maybe "run at most 5 times").
So the way this currently works is the following:

- Hyperfine performs one initial benchmark run (say the time was `T_1`).
- Hyperfine computes `runs_in_min_time = min_time / T_1` to determine how many runs would be performed in `min_time` (which is currently hard-coded to 3 seconds).
- The number of runs is then determined to be `num_runs = max(runs_in_min_time, min_runs)`, where `min_runs` can be set via the `--min-runs` option.
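The steps above can be sketched as follows. This is a simplified illustration based on this comment, not hyperfine's actual source; `num_runs` and the parameter names are my own.

```rust
// Sketch of the run-count logic described above: how many runs of
// duration t_1 fit into min_time (3 s by default), floored at min_runs
// (10 by default).
fn num_runs(t_1: f64, min_time: f64, min_runs: u64) -> u64 {
    let runs_in_min_time = (min_time / t_1) as u64;
    runs_in_min_time.max(min_runs)
}

fn main() {
    // Fast command: 10 ms per run -> 300 runs fit in 3 s.
    println!("{}", num_runs(0.010, 3.0, 10)); // 300
    // "Long-running" command: 50 min per run -> exactly min_runs.
    println!("{}", num_runs(50.0 * 60.0, 3.0, 10)); // 10
}
```

The second call illustrates the point below: once a single run exceeds `min_time / min_runs`, the floor always wins and the benchmark runs exactly `min_runs` times.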
My initial idea behind implementing it this way was the following: any number of runs below `min_runs` leads to very poor statistical results. The default value for `min_runs` is 10, but users can choose to increase it (or decrease it, if they really want). On the other hand, if a process is very fast, we don't want the user to wait too long for the benchmark results, so in this case we limit the benchmarking time to `min_time` (maybe we should make this configurable?).
Consequently, if you have a "long-running" process (where long-running means a single run takes more than `min_time / min_runs = 300 ms` by default), it will always run exactly `min_runs` times.
I realize now that this should have been documented somewhere, but given these constraints, I don't quite see the need for a `--max-runs` and a `--runs` option. The `--max-runs` option would only be useful for fast-running processes where you want your benchmark to finish in under 3 seconds. And the `--runs` option would basically have the same effect as `--min-runs`, except when you have a really fast-running process.

I'm open to new ideas about the general way we handle (the computation of) the number of runs, but I'm not convinced that `--max-runs` and `--runs` are the right way to go. What do you think?
Thank you for the feedback, you have some good points! I'll look into your PR soon.