automl / hpbandster


License: BSD 3-Clause "New" or "Revised" License

Language: Python 100.00%
Topics: bayesian-optimization, hyperparameter-optimization, neural-architecture-search, automated-machine-learning, automl

hpbandster's Introduction

HpBandSter

a distributed Hyperband implementation on Steroids

News: Not Maintained Anymore!

Please note that we no longer maintain this repository. We also cannot guarantee that we will reply to issues in the issue tracker or look into PRs.

We offer two successor packages, which showed superior performance in our HPOBench paper:

  1. SMAC3 is a versatile HPO package with different HPO strategies. It also implements the main idea of BOHB, but uses a random forest (or a GP) as a predictive model instead of a KDE.
  2. DEHB is an HPO package that combines differential evolution and Hyperband.

In particular, SMAC3 has an active group of developers working on and maintaining it, so we strongly recommend using one of these two packages instead of HpBandSter.

Overview

This Python 3 package is a framework for distributed hyperparameter optimization. It started out as a simple implementation of Hyperband (Li et al. 2017) and contains an implementation of BOHB (Falkner et al. 2018).

How to install

We try to keep the package on PyPI up to date, so you should be able to install it via:

pip install hpbandster

If you want to develop on the code, you can install it via:

python3 setup.py develop --user

Documentation

The documentation is hosted on GitHub Pages: https://automl.github.io/HpBandSter/ It contains a quickstart guide with worked examples to get you started in different circumstances. Check it out if you are interested in applying one of the implemented optimizers to your problem.

We have also written a blogpost showcasing the results from our ICML paper.
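For orientation, a minimal single-machine run looks roughly like the sketch below (the toy worker and the one-parameter search space are made up for illustration; see the worked examples in the documentation for complete, authoritative versions):

import ConfigSpace as CS
import hpbandster.core.nameserver as hpns
from hpbandster.core.worker import Worker
from hpbandster.optimizers import BOHB


class MyWorker(Worker):
    def compute(self, config, budget, **kwargs):
        # toy objective: a quadratic in x; a real worker would train a model for `budget` units
        return {'loss': (config['x'] - 0.5) ** 2, 'info': {'budget': budget}}


cs = CS.ConfigurationSpace()
cs.add_hyperparameter(CS.UniformFloatHyperparameter('x', lower=0, upper=1))

NS = hpns.NameServer(run_id='example', host='127.0.0.1', port=None)
NS.start()

w = MyWorker(run_id='example', nameserver='127.0.0.1')
w.run(background=True)

bohb = BOHB(configspace=cs, run_id='example', nameserver='127.0.0.1',
            min_budget=1, max_budget=9)
res = bohb.run(n_iterations=4)

bohb.shutdown(shutdown_workers=True)
NS.shutdown()

# best configuration found
print(res.get_id2config_mapping()[res.get_incumbent_id()]['config'])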

hpbandster's People

Contributors

aaronkl, keggensperger, mfeurer, mlindauer, phmueller, separius, sfalkner, shukon


hpbandster's Issues

RoBo Dependency is missing

A simple
python setup.py install --user
should do. It should install the dependencies automatically.

Unfortunately, the RoBo dependency is missing -- at least to run your example_lcnet.py example.

None of the sampled configurations was model_based

Hi,

I found your library very interesting and I decided to give it a try. After the first successful run I inspected the configs.json written by the json_result_logger class and found that none of the tested configurations was sampled from the model (all configs had "model_based_pick": false). Does this mean that all configurations were sampled randomly? Why is that so? I would expect that model-based config picks should result in better configurations.

I would greatly appreciate your help.

No "model_based_picks"

Hi,
This issue seems much like #20, but I was not able to figure it out based on what I read there.

I am using BOHB with the following settings:

min_budget = 15
max_budget = 300  
n_iterations = 10
eta = 2

The optimisation progresses for 128 runs.
However, when I look into the resulting configs.json file, no configuration is marked as a model-based pick. My config space has only 8 parameters, so in theory there should be model-based configurations after the number of samples has surpassed the number of parameters. See

configs.json

[[0, 0, 0], {"alpha": 0.524941863685754, "min_cases_in_past_periods": 17, "past_period_cutoff": 19, "power_transform": "none", "reweight": false, "trend": false, "window_half_size": 3, "years_back": 4}, {"model_based_pick": false}]
[[0, 0, 1], {"alpha": 0.014050776687360433, "min_cases_in_past_periods": 18, "past_period_cutoff": 4, "power_transform": "2/3", "reweight": true, "trend": true, "window_half_size": 8, "years_back": 1}, {"model_based_pick": false}]
[[0, 0, 2], {"alpha": 0.6823946565390872, "min_cases_in_past_periods": 20, "past_period_cutoff": 13, "power_transform": "1/2", "reweight": true, "trend": true, "window_half_size": 6, "years_back": 1}, {"model_based_pick": false}]
[[0, 0, 3], {"alpha": 0.33185521346027425, "min_cases_in_past_periods": 4, "past_period_cutoff": 17, "power_transform": "1/2", "reweight": true, "trend": false, "window_half_size": 7, "years_back": 2}, {"model_based_pick": false}]
[[0, 0, 4], {"alpha": 0.0770872141301081, "min_cases_in_past_periods": 20, "past_period_cutoff": 9, "power_transform": "2/3", "reweight": false, "trend": true, "window_half_size": 7, "years_back": 1}, {"model_based_pick": false}]
[[0, 0, 5], {"alpha": 0.0016518925802035955, "min_cases_in_past_periods": 11, "past_period_cutoff": 10, "power_transform": "1/2", "reweight": false, "trend": true, "window_half_size": 10, "years_back": 4}, {"model_based_pick": false}]
[[0, 0, 6], {"alpha": 0.1310824459173171, "min_cases_in_past_periods": 12, "past_period_cutoff": 1, "power_transform": "1/2", "reweight": false, "trend": true, "window_half_size": 8, "years_back": 1}, {"model_based_pick": false}]
[[0, 0, 7], {"alpha": 0.9400413364619, "min_cases_in_past_periods": 10, "past_period_cutoff": 5, "power_transform": "1/2", "reweight": false, "trend": true, "window_half_size": 2, "years_back": 4}, {"model_based_pick": false}]
[[0, 0, 8], {"alpha": 0.6752626901036773, "min_cases_in_past_periods": 1, "past_period_cutoff": 20, "power_transform": "1/2", "reweight": true, "trend": false, "window_half_size": 4, "years_back": 6}, {"model_based_pick": false}]
[[0, 0, 9], {"alpha": 0.047950183624705045, "min_cases_in_past_periods": 1, "past_period_cutoff": 8, "power_transform": "2/3", "reweight": true, "trend": false, "window_half_size": 1, "years_back": 2}, {"model_based_pick": false}]
[[0, 0, 10], {"alpha": 0.8396727695314329, "min_cases_in_past_periods": 12, "past_period_cutoff": 18, "power_transform": "2/3", "reweight": true, "trend": false, "window_half_size": 4, "years_back": 6}, {"model_based_pick": false}]
[[0, 0, 11], {"alpha": 0.5806941681640714, "min_cases_in_past_periods": 7, "past_period_cutoff": 6, "power_transform": "2/3", "reweight": false, "trend": false, "window_half_size": 8, "years_back": 3}, {"model_based_pick": false}]
[[0, 0, 12], {"alpha": 0.5626021980712306, "min_cases_in_past_periods": 10, "past_period_cutoff": 17, "power_transform": "2/3", "reweight": false, "trend": true, "window_half_size": 1, "years_back": 1}, {"model_based_pick": false}]
[[0, 0, 13], {"alpha": 0.8680047790128608, "min_cases_in_past_periods": 6, "past_period_cutoff": 1, "power_transform": "none", "reweight": false, "trend": false, "window_half_size": 4, "years_back": 6}, {"model_based_pick": false}]
[[0, 0, 14], {"alpha": 0.08211357957474463, "min_cases_in_past_periods": 3, "past_period_cutoff": 8, "power_transform": "1/2", "reweight": false, "trend": true, "window_half_size": 8, "years_back": 4}, {"model_based_pick": false}]
[[0, 0, 15], {"alpha": 0.16343877381177196, "min_cases_in_past_periods": 2, "past_period_cutoff": 17, "power_transform": "1/2", "reweight": true, "trend": false, "window_half_size": 9, "years_back": 3}, {"model_based_pick": false}]
[[1, 0, 0], {"alpha": 0.608581498475672, "min_cases_in_past_periods": 12, "past_period_cutoff": 10, "power_transform": "2/3", "reweight": true, "trend": true, "window_half_size": 9, "years_back": 2}, {"model_based_pick": false}]
[[1, 0, 1], {"alpha": 0.7805692551168152, "min_cases_in_past_periods": 13, "past_period_cutoff": 13, "power_transform": "2/3", "reweight": false, "trend": false, "window_half_size": 9, "years_back": 6}, {"model_based_pick": false}]
[[1, 0, 2], {"alpha": 0.7571335479905326, "min_cases_in_past_periods": 15, "past_period_cutoff": 4, "power_transform": "none", "reweight": false, "trend": false, "window_half_size": 1, "years_back": 4}, {"model_based_pick": false}]
[[1, 0, 3], {"alpha": 0.8241348058011784, "min_cases_in_past_periods": 6, "past_period_cutoff": 6, "power_transform": "2/3", "reweight": true, "trend": false, "window_half_size": 7, "years_back": 3}, {"model_based_pick": false}]
[[1, 0, 4], {"alpha": 0.9650298955804265, "min_cases_in_past_periods": 11, "past_period_cutoff": 8, "power_transform": "none", "reweight": false, "trend": false, "window_half_size": 3, "years_back": 6}, {"model_based_pick": false}]
[[1, 0, 5], {"alpha": 0.5075081024370277, "min_cases_in_past_periods": 10, "past_period_cutoff": 5, "power_transform": "none", "reweight": false, "trend": true, "window_half_size": 4, "years_back": 3}, {"model_based_pick": false}]
[[1, 0, 6], {"alpha": 0.9804635670593818, "min_cases_in_past_periods": 3, "past_period_cutoff": 15, "power_transform": "none", "reweight": true, "trend": false, "window_half_size": 9, "years_back": 2}, {"model_based_pick": false}]
[[1, 0, 7], {"alpha": 0.6311988898626136, "min_cases_in_past_periods": 12, "past_period_cutoff": 1, "power_transform": "1/2", "reweight": false, "trend": false, "window_half_size": 9, "years_back": 2}, {"model_based_pick": false}]
[[2, 0, 0], {"alpha": 0.32342707499369594, "min_cases_in_past_periods": 13, "past_period_cutoff": 14, "power_transform": "1/2", "reweight": false, "trend": false, "window_half_size": 10, "years_back": 5}, {"model_based_pick": false}]
[[2, 0, 1], {"alpha": 0.4583586440011973, "min_cases_in_past_periods": 5, "past_period_cutoff": 13, "power_transform": "none", "reweight": false, "trend": false, "window_half_size": 10, "years_back": 3}, {"model_based_pick": false}]
[[2, 0, 2], {"alpha": 0.7156338748367207, "min_cases_in_past_periods": 6, "past_period_cutoff": 5, "power_transform": "1/2", "reweight": false, "trend": true, "window_half_size": 2, "years_back": 2}, {"model_based_pick": false}]
[[2, 0, 3], {"alpha": 0.5657098806075379, "min_cases_in_past_periods": 0, "past_period_cutoff": 14, "power_transform": "2/3", "reweight": true, "trend": false, "window_half_size": 10, "years_back": 5}, {"model_based_pick": false}]
[[3, 0, 0], {"alpha": 0.3363127940737737, "min_cases_in_past_periods": 8, "past_period_cutoff": 20, "power_transform": "2/3", "reweight": false, "trend": true, "window_half_size": 5, "years_back": 2}, {"model_based_pick": false}]
[[3, 0, 1], {"alpha": 0.5242762917835478, "min_cases_in_past_periods": 3, "past_period_cutoff": 4, "power_transform": "2/3", "reweight": true, "trend": false, "window_half_size": 3, "years_back": 5}, {"model_based_pick": false}]
[[3, 0, 2], {"alpha": 0.5016393195652192, "min_cases_in_past_periods": 3, "past_period_cutoff": 4, "power_transform": "2/3", "reweight": false, "trend": false, "window_half_size": 3, "years_back": 5}, {"model_based_pick": false}]
[[3, 0, 3], {"alpha": 0.39914393660289926, "min_cases_in_past_periods": 12, "past_period_cutoff": 2, "power_transform": "1/2", "reweight": false, "trend": false, "window_half_size": 5, "years_back": 3}, {"model_based_pick": false}]
[[4, 0, 0], {"alpha": 0.05534493517982253, "min_cases_in_past_periods": 9, "past_period_cutoff": 15, "power_transform": "2/3", "reweight": true, "trend": false, "window_half_size": 6, "years_back": 4}, {"model_based_pick": false}]
[[4, 0, 1], {"alpha": 0.13622099670892762, "min_cases_in_past_periods": 19, "past_period_cutoff": 6, "power_transform": "none", "reweight": false, "trend": false, "window_half_size": 5, "years_back": 2}, {"model_based_pick": false}]
[[4, 0, 2], {"alpha": 0.12018820945777231, "min_cases_in_past_periods": 20, "past_period_cutoff": 2, "power_transform": "1/2", "reweight": true, "trend": true, "window_half_size": 8, "years_back": 1}, {"model_based_pick": false}]
[[4, 0, 3], {"alpha": 0.7712294660119288, "min_cases_in_past_periods": 18, "past_period_cutoff": 2, "power_transform": "1/2", "reweight": false, "trend": true, "window_half_size": 4, "years_back": 5}, {"model_based_pick": false}]
[[4, 0, 4], {"alpha": 0.17085267135509474, "min_cases_in_past_periods": 2, "past_period_cutoff": 19, "power_transform": "1/2", "reweight": true, "trend": false, "window_half_size": 4, "years_back": 3}, {"model_based_pick": false}]
[[5, 0, 0], {"alpha": 0.6406725809981577, "min_cases_in_past_periods": 7, "past_period_cutoff": 6, "power_transform": "none", "reweight": true, "trend": false, "window_half_size": 1, "years_back": 1}, {"model_based_pick": false}]
[[5, 0, 1], {"alpha": 0.07893077231480505, "min_cases_in_past_periods": 0, "past_period_cutoff": 3, "power_transform": "none", "reweight": false, "trend": true, "window_half_size": 7, "years_back": 5}, {"model_based_pick": false}]
[[5, 0, 2], {"alpha": 0.6481148805587776, "min_cases_in_past_periods": 5, "past_period_cutoff": 10, "power_transform": "2/3", "reweight": false, "trend": false, "window_half_size": 10, "years_back": 2}, {"model_based_pick": false}]
[[5, 0, 3], {"alpha": 0.4689859978145161, "min_cases_in_past_periods": 3, "past_period_cutoff": 16, "power_transform": "none", "reweight": false, "trend": false, "window_half_size": 2, "years_back": 2}, {"model_based_pick": false}]
[[5, 0, 4], {"alpha": 0.9444550057906, "min_cases_in_past_periods": 17, "past_period_cutoff": 16, "power_transform": "none", "reweight": true, "trend": false, "window_half_size": 1, "years_back": 3}, {"model_based_pick": false}]
[[5, 0, 5], {"alpha": 0.525204077370409, "min_cases_in_past_periods": 4, "past_period_cutoff": 15, "power_transform": "2/3", "reweight": false, "trend": true, "window_half_size": 7, "years_back": 3}, {"model_based_pick": false}]
[[5, 0, 6], {"alpha": 0.15001031989028013, "min_cases_in_past_periods": 1, "past_period_cutoff": 12, "power_transform": "2/3", "reweight": false, "trend": false, "window_half_size": 5, "years_back": 1}, {"model_based_pick": false}]
[[5, 0, 7], {"alpha": 0.4807526236742411, "min_cases_in_past_periods": 20, "past_period_cutoff": 16, "power_transform": "1/2", "reweight": false, "trend": true, "window_half_size": 3, "years_back": 6}, {"model_based_pick": false}]
[[5, 0, 8], {"alpha": 0.16851233413293298, "min_cases_in_past_periods": 20, "past_period_cutoff": 13, "power_transform": "2/3", "reweight": false, "trend": true, "window_half_size": 5, "years_back": 2}, {"model_based_pick": false}]
[[5, 0, 9], {"alpha": 0.4679955457055063, "min_cases_in_past_periods": 3, "past_period_cutoff": 13, "power_transform": "1/2", "reweight": true, "trend": true, "window_half_size": 10, "years_back": 2}, {"model_based_pick": false}]
[[5, 0, 10], {"alpha": 0.2290977267246188, "min_cases_in_past_periods": 0, "past_period_cutoff": 6, "power_transform": "2/3", "reweight": false, "trend": false, "window_half_size": 4, "years_back": 1}, {"model_based_pick": false}]
[[5, 0, 11], {"alpha": 0.4584081335897201, "min_cases_in_past_periods": 17, "past_period_cutoff": 15, "power_transform": "1/2", "reweight": false, "trend": true, "window_half_size": 1, "years_back": 4}, {"model_based_pick": false}]
[[5, 0, 12], {"alpha": 0.17442366724373015, "min_cases_in_past_periods": 9, "past_period_cutoff": 19, "power_transform": "2/3", "reweight": false, "trend": false, "window_half_size": 8, "years_back": 1}, {"model_based_pick": false}]
[[5, 0, 13], {"alpha": 0.6437935704153152, "min_cases_in_past_periods": 17, "past_period_cutoff": 13, "power_transform": "1/2", "reweight": false, "trend": false, "window_half_size": 10, "years_back": 3}, {"model_based_pick": false}]
[[5, 0, 14], {"alpha": 0.45499124225140597, "min_cases_in_past_periods": 10, "past_period_cutoff": 20, "power_transform": "none", "reweight": false, "trend": false, "window_half_size": 5, "years_back": 1}, {"model_based_pick": false}]
[[5, 0, 15], {"alpha": 0.6781770477977368, "min_cases_in_past_periods": 19, "past_period_cutoff": 15, "power_transform": "2/3", "reweight": true, "trend": true, "window_half_size": 9, "years_back": 1}, {"model_based_pick": false}]
[[6, 0, 0], {"alpha": 0.49315023488468834, "min_cases_in_past_periods": 1, "past_period_cutoff": 16, "power_transform": "2/3", "reweight": false, "trend": false, "window_half_size": 8, "years_back": 5}, {"model_based_pick": false}]
[[6, 0, 1], {"alpha": 0.6688362696998511, "min_cases_in_past_periods": 15, "past_period_cutoff": 6, "power_transform": "none", "reweight": false, "trend": false, "window_half_size": 3, "years_back": 4}, {"model_based_pick": false}]
[[6, 0, 2], {"alpha": 0.3817554102886793, "min_cases_in_past_periods": 2, "past_period_cutoff": 9, "power_transform": "1/2", "reweight": true, "trend": true, "window_half_size": 4, "years_back": 6}, {"model_based_pick": false}]
[[6, 0, 3], {"alpha": 0.7222560188343627, "min_cases_in_past_periods": 11, "past_period_cutoff": 1, "power_transform": "1/2", "reweight": true, "trend": true, "window_half_size": 4, "years_back": 4}, {"model_based_pick": false}]
[[6, 0, 4], {"alpha": 0.7218150088590636, "min_cases_in_past_periods": 8, "past_period_cutoff": 12, "power_transform": "none", "reweight": true, "trend": false, "window_half_size": 8, "years_back": 3}, {"model_based_pick": false}]
[[6, 0, 5], {"alpha": 0.6110155597689844, "min_cases_in_past_periods": 18, "past_period_cutoff": 19, "power_transform": "2/3", "reweight": true, "trend": true, "window_half_size": 4, "years_back": 4}, {"model_based_pick": false}]
[[6, 0, 6], {"alpha": 0.0030345783702061535, "min_cases_in_past_periods": 18, "past_period_cutoff": 14, "power_transform": "1/2", "reweight": true, "trend": true, "window_half_size": 7, "years_back": 5}, {"model_based_pick": false}]
[[6, 0, 7], {"alpha": 0.1524861965827966, "min_cases_in_past_periods": 3, "past_period_cutoff": 7, "power_transform": "1/2", "reweight": false, "trend": true, "window_half_size": 5, "years_back": 3}, {"model_based_pick": false}]
[[7, 0, 0], {"alpha": 0.9582654803943511, "min_cases_in_past_periods": 15, "past_period_cutoff": 10, "power_transform": "none", "reweight": false, "trend": true, "window_half_size": 2, "years_back": 2}, {"model_based_pick": false}]
[[7, 0, 1], {"alpha": 0.8477660382632409, "min_cases_in_past_periods": 20, "past_period_cutoff": 19, "power_transform": "1/2", "reweight": true, "trend": true, "window_half_size": 1, "years_back": 6}, {"model_based_pick": false}]
[[7, 0, 2], {"alpha": 0.0625928283257986, "min_cases_in_past_periods": 20, "past_period_cutoff": 10, "power_transform": "1/2", "reweight": false, "trend": false, "window_half_size": 9, "years_back": 4}, {"model_based_pick": false}]
[[7, 0, 3], {"alpha": 0.6036484764104649, "min_cases_in_past_periods": 5, "past_period_cutoff": 8, "power_transform": "1/2", "reweight": true, "trend": true, "window_half_size": 5, "years_back": 2}, {"model_based_pick": false}]
[[8, 0, 0], {"alpha": 0.7673416452088619, "min_cases_in_past_periods": 19, "past_period_cutoff": 10, "power_transform": "none", "reweight": true, "trend": false, "window_half_size": 4, "years_back": 6}, {"model_based_pick": false}]
[[8, 0, 1], {"alpha": 0.6104863905502328, "min_cases_in_past_periods": 0, "past_period_cutoff": 7, "power_transform": "1/2", "reweight": true, "trend": true, "window_half_size": 1, "years_back": 1}, {"model_based_pick": false}]
[[8, 0, 2], {"alpha": 0.42347651180735446, "min_cases_in_past_periods": 9, "past_period_cutoff": 10, "power_transform": "1/2", "reweight": true, "trend": true, "window_half_size": 7, "years_back": 3}, {"model_based_pick": false}]
[[8, 0, 3], {"alpha": 0.2852098935148425, "min_cases_in_past_periods": 12, "past_period_cutoff": 13, "power_transform": "1/2", "reweight": false, "trend": true, "window_half_size": 7, "years_back": 3}, {"model_based_pick": false}]
[[9, 0, 0], {"alpha": 0.3703694721375741, "min_cases_in_past_periods": 19, "past_period_cutoff": 18, "power_transform": "1/2", "reweight": true, "trend": false, "window_half_size": 10, "years_back": 5}, {"model_based_pick": false}]
[[9, 0, 1], {"alpha": 0.6381241720517308, "min_cases_in_past_periods": 20, "past_period_cutoff": 5, "power_transform": "none", "reweight": true, "trend": false, "window_half_size": 6, "years_back": 4}, {"model_based_pick": false}]
[[9, 0, 2], {"alpha": 0.09780255550621686, "min_cases_in_past_periods": 19, "past_period_cutoff": 20, "power_transform": "2/3", "reweight": false, "trend": true, "window_half_size": 2, "years_back": 1}, {"model_based_pick": false}]
[[9, 0, 3], {"alpha": 0.7338647016940335, "min_cases_in_past_periods": 3, "past_period_cutoff": 10, "power_transform": "2/3", "reweight": true, "trend": false, "window_half_size": 9, "years_back": 4}, {"model_based_pick": false}]
[[9, 0, 4], {"alpha": 0.46763338062020066, "min_cases_in_past_periods": 11, "past_period_cutoff": 18, "power_transform": "2/3", "reweight": false, "trend": true, "window_half_size": 3, "years_back": 6}, {"model_based_pick": false}]

configspace

def get_configspace():
    cs = CS.ConfigurationSpace()
    cs.add_hyperparameters([
        CSH.UniformIntegerHyperparameter('years_back', lower=1, upper=6),
        CSH.UniformIntegerHyperparameter('window_half_size', lower=1, upper=10),
        CSH.CategoricalHyperparameter('reweight', choices=[True, False]),
        CSH.UniformFloatHyperparameter('alpha', lower=0, upper=1),
        CSH.CategoricalHyperparameter('trend', choices=[True, False]),
        CSH.UniformIntegerHyperparameter('past_period_cutoff', lower=1, upper=20),
        CSH.UniformIntegerHyperparameter('min_cases_in_past_periods', lower=0, upper=20),
        CSH.CategoricalHyperparameter('power_transform', choices=['2/3', '1/2', 'none']),
    ])

    return cs

Also, I would like to know if it is possible to increase the total number of runs. Should one just increase the number of iterations? What exactly comprises an iteration? Is it one complete round of SuccessiveHalving on all budgets?

Any help is greatly appreciated.
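As a side note on the question above (added here, not part of the original issue): with min_budget=15, max_budget=300 and eta=2 there are five budget levels, and an "iteration" is one SuccessiveHalving run starting from one of them. A rough cross-check of the counts reported above, using Hyperband-style bookkeeping (the rounding rules below are an approximation for illustration, not copied from the library's code):

import math

min_budget, max_budget, eta, n_iterations = 15, 300, 2, 10

s_max = int(math.floor(math.log(max_budget / min_budget, eta)))      # 4 -> five budget levels
budgets = [max_budget * eta ** (-i) for i in range(s_max, -1, -1)]   # [18.75, 37.5, 75.0, 150.0, 300.0]

total_runs, total_configs = 0, 0
for it in range(n_iterations):
    s = s_max - (it % (s_max + 1))                 # rung this iteration starts on
    n0 = int((s_max + 1) / (s + 1)) * eta ** s     # configurations sampled for this iteration
    total_configs += n0
    total_runs += sum(max(n0 // eta ** r, 1) for r in range(s + 1))

print(total_configs, total_runs)                   # 74 sampled configurations, 128 runs in total

This matches the 74 entries in the configs.json above and the 128 runs reported.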

Specify random seed

How do I specify a random seed for BOHB?
Looking at the source code it seems that this is not possible.

`n_good` and `n_bad` in BOHB configuration sampler

In the paper, Eq. (3) says that the number of good and bad configurations should be:

Nb_l = max(N_min, q * Nb)
Nb_g = max(N_min, Nb - Nb_l)

which means that when the number of observations is less than 2 * N_min, the good and bad observations overlap. However, in the code, we have (essentially)

train_data_good = train_configs[idx[:n_good]]
train_data_bad  = train_configs[idx[n_good:n_good+n_bad]]

which means
a) configs never overlap
b) sometimes len(train_data_bad) is less than min_points_in_model.

Does that seem right? A fix would be to change the lines in question to

train_data_good = train_configs[idx[:n_good]]

bad_start       = min(train_configs.shape[0] - n_bad, n_good)
bad_end         = bad_start + n_bad
train_data_bad  = train_configs[idx[bad_start:bad_end]]

But maybe this doesn't matter? Is there some suite of experiments that we could use to make sure we still get good performance from this kind of change?

~ Ben

EDIT: Fixed the code suggestion.
EDIT 2: Actually, on second thought -- I'm not sure I understand the intention in the paper. It says choose the n_good and n_bad "best and worst configurations, respectively." Can someone maybe elaborate?
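A toy illustration of points (a) and (b) above, with made-up numbers rather than the library's actual variables (added here for clarity, not part of the original report):

N_min, q, Nb = 5, 0.15, 8            # hypothetical: fewer than 2 * N_min observations

n_good = max(N_min, int(q * Nb))     # 5, following Eq. (3)
n_bad  = max(N_min, Nb - n_good)     # 5, following Eq. (3); 5 + 5 > 8, so the paper's sets must overlap

idx = list(range(Nb))                # stand-in for the indices of configs sorted from best to worst
good = idx[:n_good]                  # 5 points
bad  = idx[n_good:n_good + n_bad]    # the slice is clipped to the 3 remaining points
print(len(good), len(bad))           # 5 3 -> disjoint sets, and len(bad) < N_min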

small error in example 5 - Pytorch

it should be

if __name__ == "__main__": worker = PyTorchWorker(run_id='0')

instead of

if __name__ == "__main__": worker = KerasWorker(run_id='0')

in the example.

ICML experiments w/ varying number of workers

In the ICML paper, you have plots that show the performance of BOHB w/ varying number of workers.

I see the code to replicate the single-threaded experiments in the icml_2018 branch -- but is there a code or an example that shows how we'd expect the system to scale w/ an increasing number of workers?

Thanks!

Sampling based optimization of the acquisition function

Could you please explain the method by which samples are drawn in your library? I see you have:

vector.append(sps.truncnorm.rvs(-m/bw,(1-m)/bw, loc=m, scale=bw))

I'm confused about two things. First, in the paper you say that the kernels for the KDE are Gaussian, but here you are sampling from a truncated Gaussian. Second, I don't understand the truncation limits. It seems that, the way you have it, you will only ever sample points less than m. I apologize if I'm misunderstanding something.

I appreciate your help!
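A note added here, not part of the original question: scipy's truncnorm interprets its truncation bounds a and b in units of the standard deviation relative to loc, so a = -m/bw and b = (1-m)/bw map back to 0 and 1 in the original space, i.e. the sampling is truncated to [0, 1] rather than to values below m. A quick check:

import scipy.stats as sps

m, bw = 0.3, 0.1                      # hypothetical kernel mean and bandwidth
samples = sps.truncnorm.rvs(-m / bw, (1 - m) / bw, loc=m, scale=bw, size=10000)
print(samples.min() >= 0.0, samples.max() <= 1.0)   # True True -> samples live in [0, 1]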

The bash file from Sphinx documentation of hpbandster used in example 4 does not work properly.

# submit via qsub -t 1-4 -q test_core.q example_4_cluster_submit_me.sh

#$ -cwd
#$ -o $JOB_ID-$TASK_ID.o
#$ -e $JOB_ID-$TASK_ID.e

# enter the virtual environment
source ~sfalkner/virtualenvs/HpBandSter_tests/bin/activate


if [ $SGE_TASK_ID -eq 1]
   then python3 example_4_cluster.py --run_id $JOB_ID --nic_name eth0 --working_dir .
else
   python3 example_4_cluster.py --run_id $JOB_ID --nic_name eth0  --working_dir . --worker
fi
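A note added here, not part of the original report: the most likely culprit is the missing space before the closing bracket in the test. The condition should read `if [ $SGE_TASK_ID -eq 1 ]` rather than `if [ $SGE_TASK_ID -eq 1]`, since `1]` is otherwise parsed as a single argument and bash reports a syntax error.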

Race condition for super cheap computations

When we run a super cheap surrogate benchmark, some runs do not terminate, probably due to a race condition between the master and the dispatcher during the result registration process.

Greetings
Stefan

Scipy.misc.factorial issue

scipy recently moved the factorial function from scipy.misc to scipy.special. Would you do a quick fix on the HpBandSter code to update it as well? Right now I can't use the package because of this issue.
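A sketch of the kind of compatibility shim such a fix usually amounts to (which HpBandSter files need the change is not stated here; scipy.special.factorial is the function's current location):

try:
    from scipy.special import factorial
except ImportError:
    # very old scipy versions only provide it in scipy.misc
    from scipy.misc import factorial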

Invalid configuration result

Hi, really interesting hyperparameter optimization framework, congrats! I read through the paper and went through the code, but I am still struggling to understand what happens when an invalid configuration is sampled. Is the error from the worker caught and that configuration is assigned infinite loss and the worker advances to a new iteration? If so, that means that for a given number of iterations, it can happen that the initial number of configurations evaluated on a given budget can be less than what was supposed to be, right?

Also, if I set the minimum and maximum budgets to the same value, can I switch to normal BO (without Hyperband)?

Thanks!

Cannot warmstart when some configurations raised exceptions

Consider warmstarting from the evaluations present in some results.json file, where some of the sampled evaluations raised an error during optimization (e.g. an OutOfMemory error). In that case a line like the following is appended to results.json for that configuration, and an np.inf loss is given to it to mark it as a bad one:

[[10, 0, 0], 38.0, {"finished": 1544192253.4073746, "started": 1544191887.3157759, "submitted": 1544191887.3152575}, null, "Traceback (most recent call last):\n File \"/home/zelaa/HpBandSter/hpbandster/core/worker.py\", line 206, in start_computation\n result = {'result': self.compute(*args, config_id=id, **kwargs),\n File \"/home/zelaa/Thesis/bohb-darts/cond/workers/darts_worker.py\", line 35, in compute\n darts_source=self.darts_path)\n File \"/home/zelaa/Thesis/bohb-darts/cond/workers/helper.py\", line 81, in darts_cifar10\n subprocess.check_call(\" \".join(bash_strings), shell=True)\n File \"/home/zelaa/anaconda3/envs/pytorch-0.3.1-cu8/lib/python3.5/subprocess.py\", line 271, in check_call\n raise CalledProcessError(retcode, cmd)\nsubprocess.CalledProcessError: Command 'cd /home/zelaa/Thesis/bohb-darts/cond/workers/lib/darts_space; python train.py --cutout --auxiliary --save /home/zelaa/Thesis/bohb-darts/cond/data/BOHB/run_4420/10_0_0 --epochs 38 --edge_normal_0 sep_conv_5x5' returned non-zero exit status 1\n"]

When trying to warmstart, line 133 in core/base_iteration.py will raise a TypeError since result['loss'] = None for the aforementioned configuration. Adding (not result['loss'] is None) to the condition in line 133 and catching the exception at line 292 in optimizers/config_generators/bohb.py as follows:

try:
    loss = job.result["loss"] if np.isfinite(job.result["loss"]) else np.inf
except TypeError:
    loss = np.inf

will remove the issue and BOHB will warmstart successfully.

Learning phase error with batchnormalization

Hello All,

I am working on using this framework to optimize deep learning architectures. I am running into issues when I add batch normalization and use the BOHB optimizer. To make it as simple as possible, I reproduced the problem in example 5 by adding batch normalization.

When running the models individually using a random selection they always work. However, when I run example_5_mnist using the Keras worker that includes batch normalization with the BOHB optimizer, I frequently get this error in the results output for models:
"
c_api.TF_GetCode(self.status.status))\ntensorflow.python.framework.errors_impl.InvalidArgumentError: Tensor dropout_1/keras_learning_phase:0, specified in either feed_devices or fetch_devices was not found in the Graph\n
"
If I remove batch normalization from the example_5_keras_worker, then I do not have such errors.

I have attached the new worker file in case that helps (as a txt file for compatibility). You can see the batch normalization additions in lines 86-87, 94-95, 102-103, 172-174, and 195-199.
example_5_keras_worker_BN.txt

I have yet to track down the problem, but thought I'd post here in case someone else has run into the same issue. If no one posts an answer and I find out what is going on, I'll be sure to post the solution.

Thanks all!

optimizer for tasks with no meaningful budgets

Hello!
I'm trying to use HpBandSter to do classical BO in a distributed fashion.
What's the best way to do optimization for tasks for which no such thing as a budget can be defined? I see there is a RandomSearch optimizer, but is there a way to use a model-based one? Of course one could use BOHB with dummy min_budget and max_budget values, but it looks like that way several models will be trained (for different budgets), which is probably suboptimal.

[Feature Request] Add callbacks option

First off, great job on the library!

Secondly, I think it would be a good idea to have an option to define callbacks.
For example, I would like to log the progress live to the Neptune tracking tool.

I made this example experiment public so you can go there if you want. Full script is available here.


Anyhow, what I ended up doing was:

class NeptuneLogger:
    def new_config(self, *args, **kwargs):
        pass
    
    def __call__(self, job):
        neptune.send_metric('run_score', job.result['loss'])
        neptune.send_text('run_parameters', str(job.kwargs['config']))

...

optim = BOHB(configspace = worker.get_configspace(),
                 run_id = RUN_ID,
                 nameserver=ns_host, 
                 nameserver_port=ns_port,
                 result_logger=NeptuneLogger())

It gets the job done but having to define this new_config method is weird.
Also, if I had more callbacks that I wanted to do, it would be tricky.

I think a good approach would be either to have a separate callbacks argument that accepts a list of callbacks (callables on the job) or to have an option to register them on an instantiated optimizer:

def neptune_callback(job):
     neptune.send_metric('run_score', job.result['loss'])
     neptune.send_text('run_parameters', str(job.kwargs['config']))
...
optim.register_callbacks([neptune_callback])

What do you think?

Running in AWS

I tried running this codebase in AWS but encountered an issue with Pyro not working across several AWS instances (i.e. one instance running the dispatcher and many instances acting as workers). It seems there is a communication issue. Have you ever encountered anything like this? If so, do you have any suggestions?

I really appreciate all of your help!

bug in get_pandas_dataframe() function

For some runs with more than one config space, get_pandas_dataframe() will carry over the hyperparameters of the last one for all three (see the screenshot for an example).

In the screen shot, the run id (0,0,12) has 3 runs with budget 74, 222, 666

But when I get the data using get_pandas_dataframe(), it only gives the information of the last config space with budget 666 as you can see in the fourth column.


CloudMl

Hello,

So I have been trying to run a Google Cloud ML training job using hpbandster over the last couple of days with no success.

The main problem is that whenever the instance offloads work onto the GPU, hpbandster just hangs.
If I comment out the "session" part of my neural network implementation (tensorflow) and just return zeros, everything works fine.
Do you have any idea how I can work around this issue?

BOHB may try duplicate hyper-parameters in the same SH round with the same budget

When I use BOHB for hyperparameter tuning, I found that in the same SH round with the same budget, it may sample the same hyper-parameter configuration more than once.

For example, here are the configurations chosen by my experiment using BOHB:

{"optimizer": "Adagrad", "model": "mobilenet", "lr": 0.0001, "budget": 81},
{"optimizer": "Adagrad", "model": "mobilenet", "lr": 0.0001, "budget": 81},
{"optimizer": "Adagrad", "model": "dpn92", "lr": 0.001, "budget": 81},
{"optimizer": "Adagrad", "model": "mobilenet", "lr": 0.0001, "budget": 81},
{"optimizer": "Adadelta", "model": "dpn92", "lr": 0.0001, "budget": 81},

Actually, {"optimizer": "Adagrad", "model": "mobilenet", "lr": 0.0001} doesn't performing well, therefore, repeated selection wastes lots of computing resources, is there any ways to help me avoid this situation?

Thanks so much!

mnist example crashes

Running example_5_mnist.py results in the following error message:

TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("conv2d_1_input:0", shape=(?, 28, 28, 1), dtype=float32) is not an element of this graph.

Example 3 - local parallel processes - stuck in a loop?

When I attempt to run the program hpbandster/examples/example_3_local_parallel_processes.py on Ubuntu, it seems stuck printing the following messages:

INFO:hpbandster:DISPATCHER: started the 'discover_worker' thread
INFO:hpbandster:DISPATCHER: started the 'job_runner' thread
INFO:hpbandster:DISPATCHER: Pyro daemon running on localhost:42327
INFO:hpbandster:DISPATCHER: A new worker triggered discover_worker
INFO:hpbandster:DISPATCHER: A new worker triggered discover_worker
INFO:hpbandster:DISPATCHER: A new worker triggered discover_worker
INFO:hpbandster:DISPATCHER: A new worker triggered discover_worker
INFO:hpbandster:DISPATCHER: A new worker triggered discover_worker

The last line INFO:hpbandster:... keeps repeating (I counted ~250 times before I killed the code). Am I missing a step somewhere?

Multiple optimizations on different GPUs

Hello and thank you for this very interesting tool!

I have been trying to run several optimizations in parallel using different GPUs, i.e. tune several NN models, each one using a different GPU.
I tried to do this similarly to example_3, by assigning a new worker to each model I want to tune.
However, I need to give each worker the same run_id that I gave to the master, otherwise the worker is stuck listening for jobs.
I am afraid, however, that this way each model will influence the others because of the shared run_id in the optimizer (BOHB).

Is there another (correct) way to do this?

Thank you very much in advance

Confusion about successive halving implementation

I am having trouble understanding one aspect of this codebase. The software is built upon the successive halving algorithm, where only a subset of a given set of configurations is kept running at each iteration. I am confused about how this is managed. Which class/function is responsible for storing the state of a given model for a given configuration, such that when the worker terminates and the optimizer chooses that model/configuration to continue training, the parameters can be restored?

Running multiple Jobs on HPC using Slurm

Hello,

I have already tried to run several jobs on a cluster. The jobs are running on the server, but the output files are always empty. I would be grateful if you could help me.

Thank you in advance.

TSC.txt

Checkpointing best practices

I think it would be really helpful to have a page describing best practices for model/result checkpointing, to avoid recomputing the early parts of training when a configuration is evaluated at a higher budget. It was not obvious to me that this is something that should probably be done until after reading the Hyperband paper more carefully and fully understanding the algorithm.

For example:

  • If budget is related to epochs of training for a neural network, it would be good to keep checkpoints of models that have been trained up to the highest budget requested of that config so far.
  • If budget is related to number of cross-validation folds, it would be good to keep track of average validation score for folds already computed at highest budget requested of that config so far.

However, I am not sure of the best way to accomplish this in a multi-node setup (reading/writing pickled objects over the network to a central storage location?). Also, it seems like the SuccessiveHalving subroutine would be able to identify when a configuration has been eliminated from further consideration (won't be asked to evaluate at a higher budget), at which point the checkpoint could be deleted.

I think ultimately it would be really nice if the Worker.compute method received checkpoint data (if available) as part of its input, the Worker returned checkpoint data as part of compute's return value, and the rest (storing the highest-budget relevant checkpoints, deleting unneeded checkpoints) was handled by the Master. The Worker would just be responsible for taking the checkpoint data, using it to warm start its compute process for the higher budget, and then returning updated checkpoint data after computing the higher budget (if less than max_budget).
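For concreteness, a minimal sketch of the filesystem variant described above (the checkpoint_dir attribute and the train_further helper are illustrative assumptions, not part of HpBandSter's API; keying by the config_id triple relies on config_id being passed to compute as a keyword argument, as the traceback quoted in the warmstart issue above shows):

import os
import pickle

from hpbandster.core.worker import Worker


class CheckpointingWorker(Worker):
    def __init__(self, checkpoint_dir, **kwargs):
        super().__init__(**kwargs)
        self.checkpoint_dir = checkpoint_dir

    def compute(self, config, budget, config_id=None, **kwargs):
        # one checkpoint file per configuration, keyed by its id triple
        path = os.path.join(self.checkpoint_dir, 'ckpt_%s.pkl' % '_'.join(map(str, config_id)))

        state, trained_budget = None, 0
        if os.path.exists(path):
            with open(path, 'rb') as fh:
                state, trained_budget = pickle.load(fh)

        # train_further is a hypothetical user function that resumes from `state`
        # and only spends the remaining budget - trained_budget units of work
        state, loss = train_further(config, state, budget - trained_budget)

        with open(path, 'wb') as fh:
            pickle.dump((state, budget), fh)

        return {'loss': loss, 'info': {'resumed_from_budget': trained_budget}}

In a multi-node setup, checkpoint_dir would have to live on shared storage (e.g. the NFS share already used for the nameserver credentials in example 4), and deleting checkpoints of eliminated configurations would still have to happen elsewhere, as suggested above.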

Failure handling

How does BOHB handle failures currently? AFAIK, they are just considered bad configurations.
It would be nice if specific exceptions (e.g. IO or network connectivity errors) could be handled with a minimum number of retries.

Debug difficulty

I was passing an unexpected output to BOHB (the worker part), and it stopped progressing without reporting an error. The job was still running.

When I would like to find good configurations for log distributional variables

Hi, I would like to optimize a configuration that includes some variables which should be sampled on a log scale.

I checked your code and found that I could choose 4 types of variables: "Continuous", "Ordinal", "Integer", "U" (categorical). I would like to clarify the following point.

In hyperopt, or in Optuna offered by PFN, I can choose distributions such as "Uniform", "Discrete", "LogUniform", and "Categorical". In HpBandSter, how can I handle these kinds of variables?

I'm looking forward to hearing from you.

shuhei
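Not part of the original question, but for reference: HpBandSter takes its search spaces from the ConfigSpace package, whose float and integer hyperparameters accept a log flag, which gives the log-uniform behaviour asked about above. A minimal sketch:

import ConfigSpace as CS
import ConfigSpace.hyperparameters as CSH

cs = CS.ConfigurationSpace()
# sampled uniformly in log space between 1e-5 and 1e-1
cs.add_hyperparameter(CSH.UniformFloatHyperparameter('lr', lower=1e-5, upper=1e-1, log=True))
# integer hyperparameter, also on a log scale
cs.add_hyperparameter(CSH.UniformIntegerHyperparameter('batch_size', lower=8, upper=256, log=True))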

Cython dependency

Missing Cython dependency in requirements while running:
python setup.py install --user

example 4 won't finish

I have made bare-minimum changes to example 4 so that it can run on my SGE setup, but the scripts never finish running! I am a bit lost here. Could you please help me?

Here is the output that I am getting (log of all 4 array jobs on SGE):
logs.txt

Here is the bash file that I used:

# submit via qsub -t 1-4 -q test_core.q example_4_cluster_submit_me.sh

#$ -cwd
#$ -o $JOB_ID-$TASK_ID.o
#$ -e $JOB_ID-$TASK_ID.e


echo $SGE_TASK_ID
if [ $SGE_TASK_ID -eq "1" ]
   then /idiap/home/amohammadi/git/deep/bin/python \
    /idiap/home/amohammadi/git/deep/src/bob.thesis.amohammadi/bob/thesis/amohammadi/hpbandster/example_4_cluster.py \
    --run_id $JOB_ID --nic_name eth0 --shared_directory ./hpbandster
else
   /idiap/home/amohammadi/git/deep/bin/python \
    /idiap/home/amohammadi/git/deep/src/bob.thesis.amohammadi/bob/thesis/amohammadi/hpbandster/example_4_cluster.py \
    --run_id $JOB_ID --nic_name eth0  --shared_directory ./hpbandster --worker
fi

and here is the python file:

import logging
logging.basicConfig(level=logging.DEBUG)
mpl_logger = logging.getLogger('matplotlib')
mpl_logger.setLevel(logging.WARNING)
import argparse
import os
import pickle
import time

import hpbandster.core.nameserver as hpns
import hpbandster.core.result as hpres

from hpbandster.optimizers import BOHB as BOHB
from hpbandster.examples.commons import MyWorker



parser = argparse.ArgumentParser(description='Example 1 - sequential and local execution.')
parser.add_argument('--min_budget',   type=float, help='Minimum budget used during the optimization.',    default=9)
parser.add_argument('--max_budget',   type=float, help='Maximum budget used during the optimization.',    default=243)
parser.add_argument('--n_iterations', type=int,   help='Number of iterations performed by the optimizer', default=4)
parser.add_argument('--n_workers', type=int,   help='Number of workers to run in parallel.', default=2)
parser.add_argument('--worker', help='Flag to turn this into a worker process', action='store_true')
parser.add_argument('--run_id', type=str, help='A unique run id for this optimization run. An easy option is to use the job id of the cluster scheduler.')
parser.add_argument('--nic_name',type=str, help='Which network interface to use for communication.')
parser.add_argument('--shared_directory',type=str, help='A directory that is accessible for all processes, e.g. a NFS share.')


args=parser.parse_args()

# Every process has to lookup the hostname
host = hpns.nic_name_to_host(args.nic_name)


if args.worker:
    time.sleep(5)   # short artificial delay to make sure the nameserver is already running
    w = MyWorker(sleep_interval = 0.5,run_id=args.run_id, host=host)
    w.load_nameserver_credentials(working_directory=args.shared_directory)
    w.run(background=False)
    exit(0)

# Start a nameserver:
# We now start the nameserver with the host name from above and a random open port (by setting the port to 0)
NS = hpns.NameServer(run_id=args.run_id, host=host, port=0, working_directory=args.shared_directory)
ns_host, ns_port = NS.start()

# Most optimizers are so computationally inexpensive that we can afford to run a
# worker in parallel to it. Note that this one has to run in the background to
# not block!
w = MyWorker(sleep_interval = 0.5,run_id=args.run_id, host=host, nameserver=ns_host, nameserver_port=ns_port)
w.run(background=True)

# Run an optimizer
# We now have to specify the host, and the nameserver information
bohb = BOHB(  configspace = MyWorker.get_configspace(),
                      run_id = args.run_id,
                      host=host,
                      nameserver=ns_host,
                      nameserver_port=ns_port,
                      min_budget=args.min_budget, max_budget=args.max_budget
               )
res = bohb.run(n_iterations=args.n_iterations, min_n_workers=args.n_workers)


# In a cluster environment, you usually want to store the results for later analysis.
# One option is to simply pickle the Result object
with open(os.path.join(args.shared_directory, 'results.pkl'), 'wb') as fh:
    pickle.dump(res, fh)


# Step 4: Shutdown
# After the optimizer run, we must shutdown the master and the nameserver.
bohb.shutdown(shutdown_workers=True)
NS.shutdown()

example 2 won't finish

Hi,
thanks for the code, looks very interesting.

I am trying to run a variant of example 2. I also use logging with
result_logger = hpres.json_result_logger(directory=working_directory, overwrite=True)
that I pass to BOHB(...result_logger=result_logger,...)

There are 2 problems:

  1. The logger produces configs.json and results.json, but the latter typically misses results for all but one of the initial [0, num_workers) configurations, while it does show results for the [num_workers, n_iterations) configurations.
  2. After the optimizer goes over all configurations it just hangs, waiting for workers:

DEBUG:hpbandster:HBMASTER: Trying to run another job!
DEBUG:hpbandster:job_callback for (19, 0, 0) finished
DEBUG:hpbandster:DISPATCHER: Starting worker discovery
DEBUG:hpbandster:DISPATCHER: Found 8 potential workers, 8 currently in the pool.
DEBUG:hpbandster:DISPATCHER: Finished worker discovery
DEBUG:hpbandster:DISPATCHER: Starting worker discovery
DEBUG:hpbandster:DISPATCHER: Found 8 potential workers, 8 currently in the pool.
DEBUG:hpbandster:DISPATCHER: Finished worker discovery
DEBUG:hpbandster:DISPATCHER: Starting worker discovery
DEBUG:hpbandster:DISPATCHER: Found 8 potential workers, 8 currently in the pool.
DEBUG:hpbandster:DISPATCHER: Finished worker discovery
DEBUG:hpbandster:DISPATCHER: Starting worker discovery
DEBUG:hpbandster:DISPATCHER: Found 8 potential workers, 8 currently in the pool.
DEBUG:hpbandster:DISPATCHER: Finished worker discovery
DEBUG:hpbandster:DISPATCHER: Starting worker discovery
DEBUG:hpbandster:DISPATCHER: Found 8 potential workers, 8 currently in the pool.
DEBUG:hpbandster:DISPATCHER: Finished worker discovery
DEBUG:hpbandster:DISPATCHER: Starting worker discovery
............

and so on until I kill the processes.

Am I missing something? Is there a way to exit gracefully?

Thanks!

Saving the state?

I am surveying different packages for hyperparameter optimization, and HpBandSter seems promising, especially because of its support for distributed training. But one thing I haven't figured out is how the master handles interruption. Typically training a model takes a long time, so the master should be alive for even longer (it must outlive all workers combined). But what happens when the master crashes or is preempted?

random search docstring

Why does hpbandster.optimizers.RandomSearch need a min_budget and max_budget? Also, the docstring doesn't match the actual parameters of the constructor.

regarding the installation

Hello
I am trying to install your package, but I am receiving this error. Can you please help me?

Could not find a version that satisfies the requirement hpbandster (from versions: )
No matching distribution found for hpbandster

I have the latest version of Python and PyTorch in my environment.

Clarification on algorithm

I'm looking at the example_4_rnn_20_newsgroups example, and it looks like you train N models for 9 epochs, then take the top N/2 models and retrain them from scratch for 27 epochs. Is there an algorithmic reason why you wouldn't want to "hot start" the 27 epoch models, so you'd only have to train them for 18 epochs? Or is this something we should be doing in general?

~ Ben

[1, 0, 1] meaning

Hi, I am struggling to find the meaning of the lists of the form [x, y, z] (e.g. [1, 0, 1]) in the keys of the optimization results (results.json). I understand that this is a unique configuration id, but what would be an interpretation? Where can I read about it?

Thanks!

No model-based picks with too many workers

I found that I do not get any model-based picks when I run with too many workers. For example, if I run the optimizer with

min_n_workers = 8

I get model based picks when I run with 8 workers in total, but not when I run with 80 workers. Is this behavior known? Are there any recommendations for choosing the number of workers?

sporadic test failures

Some tests seem to fail from time to time.
Running the test suite 100 times in a loop resulted in 29 failures.

File "test_kde.py", line 56, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[0], self.hp_kde_full.bandwidths[0], delta=2e-3)
AssertionError: 0.04618833879866881 != 0.02237181160117373 within 0.002 delta
--
  File "test_kde.py", line 138, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[d], self.hp_kde_full.bandwidths[d], delta=5e-2)
AssertionError: 0.7197090445659962 != 0.8075606853605798 within 0.05 delta
--
  File "test_kde.py", line 56, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[0], self.hp_kde_full.bandwidths[0], delta=2e-3)
AssertionError: 0.31178329664604787 != 0.322800611255376 within 0.002 delta
--
  File "test_kde.py", line 138, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[d], self.hp_kde_full.bandwidths[d], delta=5e-2)
AssertionError: 0.4535001745647519 != 0.5052498165606699 within 0.05 delta
--
  File "test_kde.py", line 56, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[0], self.hp_kde_full.bandwidths[0], delta=2e-3)
AssertionError: 0.9454515348826193 != 0.9392693580115746 within 0.002 delta
--
  File "test_kde.py", line 56, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[0], self.hp_kde_full.bandwidths[0], delta=2e-3)
AssertionError: 0.9456782206454661 != 0.9395671936311828 within 0.002 delta
--
  File "test_kde.py", line 138, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[d], self.hp_kde_full.bandwidths[d], delta=5e-2)
AssertionError: 0.48232806419410734 != 0.5840429875163532 within 0.05 delta
--
  File "test_kde.py", line 138, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[d], self.hp_kde_full.bandwidths[d], delta=5e-2)
AssertionError: 0.7477113291187305 != 0.6346358437160987 within 0.05 delta
--
  File "test_kde.py", line 138, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[d], self.hp_kde_full.bandwidths[d], delta=5e-2)
AssertionError: 0.8583091576429538 != 0.45815843050124433 within 0.05 delta
--
  File "test_kde.py", line 138, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[d], self.hp_kde_full.bandwidths[d], delta=5e-2)
AssertionError: 0.5670394840846824 != 0.671034866569728 within 0.05 delta
--
  File "test_kde.py", line 138, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[d], self.hp_kde_full.bandwidths[d], delta=5e-2)
AssertionError: 0.6716586627109459 != 0.8018535493921695 within 0.05 delta
--
  File "test_kde.py", line 56, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[0], self.hp_kde_full.bandwidths[0], delta=2e-3)
AssertionError: 0.8673847294651753 != 0.8698048648046033 within 0.002 delta
--
  File "test_kde.py", line 56, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[0], self.hp_kde_full.bandwidths[0], delta=2e-3)
AssertionError: 0.02903496614585073 != 0.01370083621999389 within 0.002 delta
--
  File "test_kde.py", line 56, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[0], self.hp_kde_full.bandwidths[0], delta=2e-3)
AssertionError: 0.9454515348826193 != 0.9392661921656029 within 0.002 delta
--
  File "test_kde.py", line 56, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[0], self.hp_kde_full.bandwidths[0], delta=2e-3)
AssertionError: 0.4818229179852173 != 0.4779503016808494 within 0.002 delta
--
  File "test_kde.py", line 138, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[d], self.hp_kde_full.bandwidths[d], delta=5e-2)
AssertionError: 0.16742765643102592 != 0.04232944931290015 within 0.05 delta
--
  File "test_kde.py", line 56, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[0], self.hp_kde_full.bandwidths[0], delta=2e-3)
AssertionError: 0.04782158321091984 != 0.015909283029143306 within 0.002 delta
--
  File "test_kde.py", line 56, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[0], self.hp_kde_full.bandwidths[0], delta=2e-3)
AssertionError: 0.7681037710087533 != 0.764247084344105 within 0.002 delta
--
  File "test_kde.py", line 138, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[d], self.hp_kde_full.bandwidths[d], delta=5e-2)
AssertionError: 0.7119693664556175 != 0.6480948411529247 within 0.05 delta
--
  File "test_kde.py", line 56, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[0], self.hp_kde_full.bandwidths[0], delta=2e-3)
AssertionError: 0.03594243113107432 != 0.01755869747711552 within 0.002 delta
--
  File "test_kde.py", line 138, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[d], self.hp_kde_full.bandwidths[d], delta=5e-2)
AssertionError: 0.8659473488844843 != 0.7647468119143158 within 0.05 delta
--
  File "test_kde.py", line 138, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[d], self.hp_kde_full.bandwidths[d], delta=5e-2)
AssertionError: 0.8355435118154853 != 0.7670194994489465 within 0.05 delta
--
  File "test_kde.py", line 56, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[0], self.hp_kde_full.bandwidths[0], delta=2e-3)
AssertionError: 0.31178329664604787 != 0.322800611255376 within 0.002 delta
--
  File "test_kde.py", line 138, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[d], self.hp_kde_full.bandwidths[d], delta=5e-2)
AssertionError: 0.6910363012632923 != 0.7735924043200492 within 0.05 delta
--
  File "test_kde.py", line 56, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[0], self.hp_kde_full.bandwidths[0], delta=2e-3)
AssertionError: 0.35864261907085565 != 0.3467105134963595 within 0.002 delta
--
  File "test_kde.py", line 138, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[d], self.hp_kde_full.bandwidths[d], delta=5e-2)
AssertionError: 0.7412952221617459 != 0.8044927743114529 within 0.05 delta
--
  File "test_kde.py", line 56, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[0], self.hp_kde_full.bandwidths[0], delta=2e-3)
AssertionError: 0.7677941199500147 != 0.7645337096977642 within 0.002 delta
--
  File "test_kde.py", line 138, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[d], self.hp_kde_full.bandwidths[d], delta=5e-2)
AssertionError: 0.0653788261754362 != 0.12545497668012273 within 0.05 delta
--
  File "test_kde.py", line 56, in test_bandwidths_estimation
    self.assertAlmostEqual(self.sm_kde.bw[0], self.hp_kde_full.bandwidths[0], delta=2e-3)
AssertionError: 0.5517231143430665 != 0.5542477840534359 within 0.002 delta

Cooperation with UniOpt

Hello. I am also working on a framework for plugging in different optimizers. IMHO its architecture is better than the one used in this project. So it may make sense to rewrite your config generators on top of my framework (and maybe merge some parts of it, namely everything containing the word "Spec" in the file name, with ConfigSpace).

Local runs without a server

It may be problematic to embed this into a library if it requires a separate process and a port to listen on.

Workers die randomly

I'm experiencing some strange behavior with workers dying randomly during optimization. Is there some timeout parameter that needs to be modified somewhere?

Link in readme to Arxiv paper

In the readme, please add a mention of and a link to the corresponding paper, which I think is https://arxiv.org/abs/1807.01774.

Does this implementation actually combine Bayesian and bandit methods? If so, the readme doesn't say so. The readme gives the impression that it's merely a distributed Hyperband. The phrase "on steroids" doesn't mean anything specific. Thanks.
