Comments (7)
Hello @vanquanTRAN,
could you describe your problem in more detail?
Do you want to print this information in each iteration?
Do you really need the information at runtime? You could just get the search data after the run has finished via hyper.results(model).
The more information you give about your problem and what you want to do, the better I can help you.
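If the search data after the run is enough, here is a minimal sketch of that approach. It assumes an objective function model and a search space search_space are already defined (like in the example further down in this thread), and that hyper.results(model) returns the collected search data in a table-like form:

from hyperactive import Hyperactive

hyper = Hyperactive()
hyper.add_search(model, search_space, n_iter=20)
hyper.run()

# search data collected during the run (assumed table-like, one entry per evaluated parameter set)
search_data = hyper.results(model)
print(search_data)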
Hello Simon, thank you for your enthusiasm. In fact, your code helps us verify the best score at each iteration.
But perhaps due to a mistake on my side, or because the optimization algorithms are coded in different ways: shouldn't the best score (such as R2) of iteration n+1 be greater than or equal to that of iteration n, as for example in the attached figure for PSO? Could you explain?
Iteration 0: Best Cost = 0.7615630337611912
Iteration 1: Best Cost = 0.761734744272929
Iteration 2: Best Cost = 0.7665962344898513
Iteration 3: Best Cost = 0.7665962344898513
Iteration 4: Best Cost = 0.7665962344898513
Iteration 5: Best Cost = 0.7665962344898513
Iteration 6: Best Cost = 0.7665962344898513
Iteration 7: Best Cost = 0.7665962344898513
Iteration 8: Best Cost = 0.7665962344898513
Iteration 9: Best Cost = 0.7665962344898513
Iteration 10: Best Cost = 0.7667057693160213
Iteration 11: Best Cost = 0.7670653200716335
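For context: if the printed "Best Cost" is the best score found so far (which the non-decreasing values above suggest), it behaves like a running maximum and cannot drop between iterations, even when an individual iteration scores worse. A purely illustrative sketch with made-up scores, not library code:

# hypothetical per-iteration scores; the reported "Best Cost" is assumed to be the running maximum
iteration_scores = [0.7615, 0.7617, 0.7666, 0.7601, 0.7667]

best = float("-inf")
for i, score in enumerate(iteration_scores):
    best = max(best, score)
    print(f"Iteration {i}: Best Cost = {best}")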
Hello @vanquanTRAN,
thank you for going into so much detail! I am currently working on a new feature that should solve this problem.
I will give you an update (probably tomorrow) on how you can output this data during the optimization run.
Hello @vanquanTRAN,
I looked into a solution for your problem. I had to make some small additions to the optimization backend, so you should update it via: pip install gradient_free_optimizers==0.3.2
After the update you can run the example code below:
from sklearn.datasets import load_boston
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

from hyperactive import Hyperactive

# load the dataset
data = load_boston()
X, y = data.data, data.target


def model(para):
    # build the model from the current parameter set
    gbr = GradientBoostingRegressor(
        n_estimators=para["n_estimators"],
        max_depth=para["max_depth"],
        min_samples_split=para["min_samples_split"],
    )
    scores = cross_val_score(gbr, X, y, cv=3)

    # print the current iteration and the best score found so far
    print(
        "Iteration:", para.optimizer.nth_iter, " Best score", para.optimizer.best_score
    )

    return scores.mean()


search_space = {
    "n_estimators": list(range(10, 150, 5)),
    "max_depth": list(range(2, 12)),
    "min_samples_split": list(range(2, 22)),
}

hyper = Hyperactive()
hyper.add_search(model, search_space, n_iter=20)
hyper.run()
Please let me know if this solution works for you.
Thank you Simon, your solution works well for me now. Your work will be cited in my paper if it is published.
Thank you
Very good @vanquanTRAN! I would appreciate a citation in your paper.
I will leave this issue open for now, because I have an additional new feature that will be released within the next week.
Since v3.2.0, a Streamlit-based dashboard for visualizing the search data (automatically collected during the optimization run) has been added: Example
This should give enough flexibility to display information while the optimization is in progress.
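Below is a rough sketch of how such a dashboard could be attached to a search. The import path hyperactive.dashboards.ProgressBoard and the progress_board parameter are assumptions about the v3.2.0 API and are not confirmed by this thread; model and search_space are the objective function and search space from the example above:

from hyperactive import Hyperactive
from hyperactive.dashboards import ProgressBoard  # assumed import path

progress_board = ProgressBoard()  # dashboard object for the search data (behavior assumed)

hyper = Hyperactive()
# the progress_board keyword is an assumption about the v3.2.0 API
hyper.add_search(model, search_space, n_iter=50, progress_board=progress_board)
hyper.run()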