Comments (10)
> the way I see it, the solution is

If the benchmark is not even an optimization problem, like the comparison of sparse recovery methods of @TheoGuyard, and you use a solver like OMP which does not solve an optimization problem, then it's not super clear what "solution" would refer to.
from benchopt.
Yes, this could be a sub-optimal solution. But I am more in favor of `result`, as this is less specific to optimization than `solution`.
The automated call to `get_result` is indeed a nice improvement that could be made at the same time, as well as letting `evaluate_result` take an expanded dictionary as input.
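A hypothetical illustration of the "expanded dictionary" idea (the names are the ones under discussion, not a released benchopt API): the framework would call `evaluate_result(**result)` so that each entry of the solver's result dict becomes a named argument.

```python
# Hypothetical sketch: the framework calls ``evaluate_result(**result)``,
# expanding the solver's result dict into keyword arguments, so the
# objective's signature documents what it expects from solvers.

def evaluate_result(beta, intercept=0.0):
    # Named parameters make the expected result entries explicit.
    return dict(norm=sum(b * b for b in beta), intercept=intercept)

result = dict(beta=[1.0, 2.0])       # what ``get_result`` would return
metrics = evaluate_result(**result)  # expanded into keyword arguments
print(metrics)
```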
@francois-rozet what bothers me with `get_solution` is that it's not a solution to the optimization problem. And we're heading for settings in which it's just the result of a method, not a solution. What do you think of `get_result`/`evaluate_result`?
`evaluate_result` is good as well. However, I am not sure I understand why `get_result` does not return a solution. Semantically, a "solver" returns a "solution".
Several people stated that it was confusing during the sprint, especially compared to `get/set_dataset` and `get/set_objective`. IMO it's not too late to change, and it makes onboarding easier, as with #588.
This is not only cosmetic: it helps with documentation and API consistency (many said during the sprint that this was a confusing point). Thus we would like to change `compute` to match the `get_result` method. The question is what is the right wording. We also want to consider the `get_one_solution` that is going to enter this function, and the name of the object passed in the `callback`, to make all of this consistent.
- `solution` is more natural with `Solver`, but refers to optimization and hurts the semantics of @mathurinm 😆
- `result` is more general (thinking of ML benchmarks), and the nice thing is we don't change `get_result`.

I am in favor of this solution, with `get_result`/`get_one_result`/`evaluate_result`/`callback(**result)`. This will be more consistent.

What do you think @mathurinm, @agramfort, @francois-rozet?
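To make the proposal concrete, here is a minimal self-contained sketch of how the proposed naming could fit together; these are illustrative stand-ins, not the actual benchopt base classes, and the least-squares solver is only an example.

```python
import numpy as np

# Sketch of the proposed naming (assumptions, not the released API):
# the solver exposes ``get_result`` and the objective scores it with
# ``evaluate_result(**result)``.

class Objective:
    """Least-squares objective; ``evaluate_result`` replaces ``compute``."""

    def set_data(self, X, y):
        self.X, self.y = X, y

    def evaluate_result(self, beta):
        # Receives the expanded result dictionary as keyword arguments.
        residual = self.y - self.X @ beta
        return dict(value=0.5 * float(residual @ residual))


class Solver:
    """Plain gradient descent; ``get_result`` replaces ``get_solution``."""

    def set_objective(self, X, y):
        self.X, self.y = X, y
        self.beta = np.zeros(X.shape[1])

    def run(self, n_iter, step=0.01):
        for _ in range(n_iter):
            grad = self.X.T @ (self.X @ self.beta - self.y)
            self.beta -= step * grad

    def get_result(self):
        # The result need not be an optimal "solution", just the
        # method's output, hence the more general name.
        return dict(beta=self.beta)


rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))
y = X @ np.array([1.0, -2.0, 0.5])

objective, solver = Objective(), Solver()
objective.set_data(X, y)
solver.set_objective(X, y)
solver.run(n_iter=200)

result = solver.get_result()
metrics = objective.evaluate_result(**result)  # expanded dict as input
print(metrics["value"])
```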
Both `solution` and `result` are fine to me, with a preference for `solution` to be consistent with `Solver` (btw a sub-optimal solution remains a solution, same for an intermediate solution). The real improvement is changing `compute` to `evaluate_solution` or `evaluate_result`.

P.S. If one should always call the `callback` with the result, maybe it is not necessary to force the user to give arguments to the callback. I currently write `while cb(self.get_result()):` in all my solvers, but `while cb():` should be enough. It would also allow not counting the "result creation" time as part of the run.
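A minimal sketch of the argument-free convention mentioned above (illustrative names, not benchopt's actual runner): when the callback takes no arguments, it can stop the clock first and only then pull the result from the solver, so "result creation" time is excluded from the measured run.

```python
import time

# Illustrative sketch, not benchopt internals: with ``while cb():`` the
# callback fetches the result itself, outside the timed section.

class Callback:
    def __init__(self, solver, max_calls=5):
        self.solver = solver
        self.max_calls = max_calls
        self.calls = 0
        self.history = []

    def __call__(self):
        self.calls += 1
        # The (possibly costly) result creation happens here, and its
        # duration could simply be subtracted from the measured run time.
        t0 = time.perf_counter()
        self.history.append(self.solver.get_result())
        self.overhead = time.perf_counter() - t0
        return self.calls < self.max_calls


class Solver:
    def __init__(self):
        self.x = 0

    def get_result(self):
        return dict(x=self.x)

    def run(self, cb):
        # Proposed convention: ``while cb():`` with no arguments,
        # instead of ``while cb(self.get_result()):``.
        while cb():
            self.x += 1


solver = Solver()
cb = Callback(solver, max_calls=5)
solver.run(cb)
print(len(cb.history), solver.x)
```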