
huawei-noah / HEBO


Bayesian optimisation & Reinforcement Learning library developed by Huawei Noah's Ark Lab

Python 16.18% Makefile 0.01% Jupyter Notebook 83.21% Shell 0.31% Tcl 0.01% C++ 0.02% Dockerfile 0.01% Batchfile 0.01% Jinja 0.27%

hebo's People

Contributors

aivarsoo · ajikmr · alaya-in-matrix · alexmaraval · antgro · dennissoemers · eltociear · juliuszziomek · kaimo455 · kygguo · mdasifkhan · pauldaoudi · paulvstrashnov · petarsteinberg · sanket-kamthe · yanglin-jason


hebo's Issues

Support for `numpy>=1.25`

Right now it's pinned to numpy<1.25,>=1.16, which is blocking us from upgrading numpy to the latest version 1.26.

abc_release_path.txt

"I want to know what path should be filled in the file named 'abc_release_path.txt'. Initially, I filled in 'abc' as the path after compiling with 'make'. However, I encountered an error."

logging module does not work when using HEBO

[screenshot]
When I use the HEBO package to run a single-objective optimization and use Python's logging module to record some details, log messages below the ERROR level stop working: they no longer print to the console as usual.

However, when I do it as in the screenshot below, it works. Can somebody figure this out?
[screenshot]

[Quick Fix] Licence in the Setup?

Hi, can you add licence info to the setup file?

Something like:

setup(
    ...,
    classifiers=[
        'License :: OSI Approved :: MIT License',
    ],
)

This will make the licence flow through PyPI correctly.

HEBO class for constrained optimization

Hi HEBO team. According to the online documentation, the GeneralBO class can be used for constrained BO, but I want to use the HEBO class for constrained Bayesian optimization. How can I achieve this?

Issues using aig_optimization task in MCBO

Hello,

I have been struggling to use the 'aig_optimization' task included in the mcbo package. When I run:

task = task_factory(task_name='aig_optimization')
search_space = task.get_search_space()
x = search_space.sample()
y = task(x)

it throws the error:

File ".../HEBO/MCBO/mcbo/tasks/eda_seq_opt/utils/utils_design_groups.py", line 72, in get_designs_path
for design_id in group:
UnboundLocalError: local variable 'group' referenced before assignment

I tried to correct this by specifying 'designs_group_id':

task = task_factory(task_name='aig_optimization', **{'designs_group_id': 'epfl_arithmetic'})
search_space = task.get_search_space()
x = search_space.sample()
y = task(x)

But then it throws an error saying that \filepath\abc does not exist, so I went to MCBO/mcbo/tasks/eda_seq_opt/abc_release_path.txt and changed the file path. Then, I tried again, and it threw the error:

File ".../.../lib/python3.9/subprocess.py", line 528, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['../libs/EDA/abc', '-c', 'read ./mcbo/tasks/data/epfl_benchmark/arithmetic/hyp.blif; balance; rewrite; refactor; balance; rewrite; rewrite -z; balance; refactor -z; rewrite -z; balance; if -K 6;print_stats; ']' returned non-zero exit status 127.

From here I am really not sure what else to try. Have I done something wrong? Is there a bug in the aig_optimization task?

'float' object cannot be interpreted as an integer

I get the error TypeError: 'float' object cannot be interpreted as an integer when attempting to search with an XGB model.

space_cfg = [
    {'name': 'max_depth',         'type': 'int', 'lb': 1,    'ub': 10},
    {'name': 'min_child_weight',  'type': 'int', 'lb': 1,    'ub': 100},
    {'name': 'n_estimators',      'type': 'int', 'lb': 1,    'ub': 10000},
    {'name': 'alpha',             'type': 'num', 'lb': 0,    'ub': 100},
    {'name': 'lambda',            'type': 'num', 'lb': 0,    'ub': 100},
    {'name': 'gamma',             'type': 'num', 'lb': 0,    'ub': 100},
    {'name': 'eta',               'type': 'pow', 'lb': 1e-5, 'ub': 1},
    {'name': 'colsample_bytree',  'type': 'num', 'lb': 1/3,  'ub': 1},
    {'name': 'colsample_bylevel', 'type': 'num', 'lb': 1/3,  'ub': 1},
    {'name': 'colsample_bynode',  'type': 'num', 'lb': 1/3,  'ub': 1},
    {'name': 'subsample',         'type': 'num', 'lb': 1/27, 'ub': 100},
]
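
A common workaround until this is fixed upstream (a sketch, not an official fix) is to cast the integer-typed columns of HEBO's suggestion back to int before handing them to XGBoost, since pandas' .iloc can silently promote mixed-type rows to float:

import xgboost as xgb

# `opt` is assumed to be a HEBO optimizer built from the space above
rec = opt.suggest(n_suggestions=1)
params = rec.iloc[0].to_dict()

# .iloc promotes mixed-type rows to float, so restore the integer parameters
for name in ('max_depth', 'min_child_weight', 'n_estimators'):
    params[name] = int(params[name])

model = xgb.XGBRegressor(**params)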

Import error


ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input> in <module>
      8
      9
---> 10 from hebo.optimizers.hebo import HEBO
     11 from hebo.optimizers.bo import BO
     12

2 frames
/usr/local/lib/python3.7/dist-packages/hebo/acq_optimizers/evolution_optimizer.py in <module>
     13 from torch.quasirandom import SobolEngine
     14 from pymoo.factory import get_problem, get_mutation, get_crossover, get_algorithm
---> 15 from pymoo.operators.mixed_variable_operator import MixedVariableMutation, MixedVariableCrossover
     16 from pymoo.optimize import minimize
     17 from pymoo.core.problem import Problem

ModuleNotFoundError: No module named 'pymoo.operators.mixed_variable_operator'

ValueError: The value argument must be within the support

2021-11-01 08:02:44.622 ERROR Traceback (most recent call last):
File "/home/ma-user/work/automl-1.8_EI/vega/core/pipeline/pipeline.py", line 79, in run
pipestep.do()
File "/home/ma-user/work/automl-1.8_EI/vega/core/pipeline/search_pipe_step.py", line 55, in do
self._dispatch_trainer(res)
File "/home/ma-user/work/automl-1.8_EI/vega/core/pipeline/search_pipe_step.py", line 73, in _dispatch_trainer
self.master.run(trainer, evaluator)
File "/home/ma-user/work/automl-1.8_EI/vega/core/scheduler/local_master.py", line 63, in run
self._update(step_name, worker_id)
File "/home/ma-user/work/automl-1.8_EI/vega/core/scheduler/local_master.py", line 71, in _update
self.update_func(step_name, worker_id)
File "/home/ma-user/work/automl-1.8_EI/vega/core/pipeline/generator.py", line 131, in update
self.search_alg.update(record.serialize())
File "/home/ma-user/work/automl-1.8_EI/vega/algorithms/hpo/hpo_base.py", line 84, in update
self.hpo.add_score(config_id, int(rung_id), rewards)
File "/home/ma-user/work/automl-1.8_EI/vega/algorithms/hpo/sha_base/boss.py", line 230, in add_score
self._set_next_ssa()
File "/home/ma-user/work/automl-1.8_EI/vega/algorithms/hpo/sha_base/boss.py", line 159, in _set_next_ssa
configs = self.tuner.propose(self.iter_list[iter])
File "/home/ma-user/work/automl-1.8_EI/vega/algorithms/hpo/sha_base/hebo_adaptor.py", line 70, in propose
suggestions = self.hebo.suggest(n_suggestions=num)
File "/home/ma-user/work/model_zoo/HEBO-master/HEBO/hebo/optimizers/hebo.py", line 126, in suggest
rec = opt.optimize(initial_suggest = best_x, fix_input = fix_input).drop_duplicates()
File "/home/ma-user/work/model_zoo/HEBO-master/HEBO/hebo/acq_optimizers/evolution_optimizer.py", line 122, in optimize
print("optimize: ", prob, algo,self.iter)
File "/home/ma-user/miniconda3/envs/MindSpore-python3.7-aarch64/lib/python3.7/site-packages/pymoo/model/problem.py", line 448, in str
s += "# f(xl): %s\n" % self.evaluate(self.xl)[0]
File "/home/ma-user/miniconda3/envs/MindSpore-python3.7-aarch64/lib/python3.7/site-packages/pymoo/model/problem.py", line 267, in evaluate
out = self._evaluate_batch(X, calc_gradient, out, *args, **kwargs)
File "/home/ma-user/miniconda3/envs/MindSpore-python3.7-aarch64/lib/python3.7/site-packages/pymoo/model/problem.py", line 335, in _evaluate_batch
self._evaluate(X, out, *args, **kwargs)
File "/home/ma-user/work/model_zoo/HEBO-master/HEBO/hebo/acq_optimizers/evolution_optimizer.py", line 50, in _evaluate
acq_eval = self.acq(xcont, xenum).numpy().reshape(num_x, self.acq.num_obj + self.acq.num_constr)
File "/home/ma-user/work/model_zoo/HEBO-master/HEBO/hebo/acquisitions/acq.py", line 39, in call
return self.eval(x, xe)
File "/home/ma-user/work/model_zoo/HEBO-master/HEBO/hebo/acquisitions/acq.py", line 157, in eval
log_phi = dist.log_prob(normed)
File "/home/ma-user/miniconda3/envs/MindSpore-python3.7-aarch64/lib/python3.7/site-packages/torch/distributions/normal.py", line 73, in log_prob
self._validate_sample(value)
File "/home/ma-user/miniconda3/envs/MindSpore-python3.7-aarch64/lib/python3.7/site-packages/torch/distributions/distribution.py", line 277, in _validate_sample
raise ValueError('The value argument must be within the support')
ValueError: The value argument must be within the support

ValueError in HEBO

HEBO/hebo/models/gp/gp.py, line 154:

pred = torch.distributions.normal.Normal(torch.zeros(len(Xc)), torch.eye(len(Xc)))

Normal expects a per-element scale as its second argument, not a covariance matrix, so this should be written as:

pred = torch.distributions.MultivariateNormal(torch.zeros(len(Xc)), torch.eye(len(Xc))).sample()

[BUG] Demo example fails to run

Hi,

I pulled the repo and installed it via

cd HEBO
pip install -e .

then I ran the Demo example, but it failed to execute with the following error:

Traceback (most recent call last):
  File "/HEBO/HEBO/benchmarks/default-bench.py", line 17, in <module>
    opt.observe(rec, obj(rec))
  File "/HEBO/HEBO/hebo/optimizers/hebo.py", line 206, in observe
    self.X   = self.X.append(XX, ignore_index = True)
  File "/HEBO/HEBO/venv/lib/python3.9/site-packages/pandas/core/generic.py", line 5989, in __getattr__
    return object.__getattribute__(self, name)
AttributeError: 'DataFrame' object has no attribute 'append'

My guess is that some data conversion is going wrong. Could anyone fix this?

Thanks!
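
For context, DataFrame.append was removed in pandas 2.0, which matches this traceback. A minimal sketch of the replacement pattern (assuming hebo.py still calls append internally):

import pandas as pd

df = pd.DataFrame({'x': [1.0, 2.0]})
new_rows = pd.DataFrame({'x': [3.0]})

# DataFrame.append was removed in pandas 2.0; pd.concat is the replacement
df = pd.concat([df, new_rows], ignore_index=True)
print(df)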

Comparison of HEBO models

Hi! HEBO seems to support a great variety of models but in your paper you only mention Gaussian Processes. Is this the recommended type of model to use? Is there a comparison somewhere?

Issues with Cholesky decompositions on simple benchmark

I am running into some issues with Cholesky decompositions. Here's how to reproduce:

  1. Go to: https://colab.research.google.com/drive/1XftMKU7-tWj0cdWjH7XsfiDPBIKXAWZk#scrollTo=OuypIJ7do1qi
  2. Run all cells.
  3. Then, either we get the exception:
NanError: cholesky_cpu: 3716 of 3721 elements of the torch.Size([61, 61]) tensor are NaN.

or

NotPSDError: Matrix not positive definite after repeatedly adding jitter up to 1.0e-04.

What matrix is being decomposed here? Can I influence or change this matrix using some hyperparameters? What can I do?

Thank you!

asap7.lib not Found

Hello,
When I run the command:
python ./core/algos/bo/boils/main_multi_boils.py --designs_group_id log2 --n_parallel $n_parallel 1 \
    --seq_length 20 --mapping fpga --action_space_id extended --ref_abc_seq resyn2 \
    --n_total_evals 200 --n_initial 20 --device 0 --lut_inputs 4 --use_yosys 1 \
    --standardise --ard --acq ei --kernel_type ssk --length_init_discrete_factor .666 \
    --failtol 40 --objective lut --seed 0
It shows 'asap7.lib' not found. Can you share it?
Thank you.

HEBO takes up too many CPUs

The main HEBO process takes up 1600% CPU, which makes other parts of the program run very slowly. Is there any way to reduce HEBO's CPU usage? Thanks!
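
One mitigation worth trying (an assumption, not an official recommendation): HEBO's model fitting runs on PyTorch, which by default uses one intra-op thread per core, so capping its thread pool before running HEBO usually reduces the CPU footprint:

import torch

# limit PyTorch's intra-op parallelism so HEBO does not saturate every core
torch.set_num_threads(4)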

mixed_variable_operator was removed from pymoo.operators in v0.6.0.

Since release v0.6.0 of pymoo yesterday (https://github.com/anyoptimization/pymoo/releases/tag/0.6.0), HEBO's acq_optimizers fail to import because of an ImportError during the module's own imports:

  tests/unittests/algo/long/hebo/test_hebo.py:9: in <module>
      from hebo.models.model_factory import model_dict
  .tox/algo/lib/python3.7/site-packages/hebo/__init__.py:10: in <module>
      from . import acq_optimizers
  .tox/algo/lib/python3.7/site-packages/hebo/acq_optimizers/__init__.py:10: in <module>
      from . import evolution_optimizer
  .tox/algo/lib/python3.7/site-packages/hebo/acq_optimizers/evolution_optimizer.py:15: in <module>
      from pymoo.operators.mixed_variable_operator import MixedVariableMutation, MixedVariableCrossover
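
Until HEBO's imports are updated for the new API, pinning pymoo below 0.6 (e.g. pip install "pymoo<0.6") is a plausible stopgap, though this is an assumption rather than a documented fix.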

CUDA out of memory for BOiLS

X1_full = torch.repeat_interleave(X1.unsqueeze(0), len(indicies), dim=0)
RuntimeError: CUDA out of memory. Tried to allocate 3.89 GiB (GPU 0; 11.93 GiB total capacity; 3.49 GiB already allocated; 3.65 GiB free; 7.67 GiB reserved in total by PyTorch)

BOiLS takes up too much GPU memory. Do you have any suggestions?

Issue with hebo/acq_optimizers/evolution_optimizer.py

When the eval() function of an acquisition-function class involves a backward pass, the gradient cannot be propagated because line 102 of hebo/acq_optimizers/evolution_optimizer.py wraps the evaluation in `with torch.no_grad():`. Deleting this line solves the problem. Why was this line added? Will deleting it affect the final output?

Background: I am using a genetic algorithm to find good hyperparameters for neural-network training in PyTorch.
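
To illustrate why that line blocks backpropagation, a minimal standalone example (not HEBO code):

import torch

x = torch.tensor([1.0], requires_grad=True)

with torch.no_grad():
    y = x * 2  # computed without recording a graph

print(y.requires_grad)  # False: no graph was built
# y.backward()          # would raise: y does not require grad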

The inverse_transform does not work as expected.

Original code:

def inverse_transform(self, x):
    return (self.base ** x).astype(int)

As I understand it, inverse_transform() should take arbitrary input(s) as the exponent, but we cannot guarantee the input(s) have an int dtype. In that case, consider code like the following:

def suggest(self, n_suggestions = None, fix_input : dict = None):
    self.pop = self.algo.ask()
    pop_x = torch.from_numpy(self.pop.get('X').astype(float)).float()
    x  = pop_x[:, :self.space.num_numeric]
    xe = pop_x[:, self.space.num_numeric:].round().long()
    rec = self.space.inverse_transform(x, xe)
    if fix_input is not None:
        for k, v in fix_input.items():
            rec[k] = v
    x, xe = self.space.transform(rec)
    x_cat = torch.cat([x, xe.float()], dim = 1).numpy()
    self.pop.set('X', x_cat)
    return rec

Here x can be an arbitrary number. Assuming x = np.array([5.4]), the toy code below demonstrates the problem:

import numpy as np
from hebo.design_space.int_exponent_param import IntExponentPara

p = IntExponentPara({'name': 'p', 'base': 2, 'lb': 32, 'ub': 256})  # expected parameter values: [32, 64, 128, 256]
x = np.array([5.4])
print(p.inverse_transform(x))
# >>> [42]
# 2**5.4 = 42.22, which astype(int) truncates to 42 -- not among the valid values [32, 64, 128, 256]

That is to say, the suggested parameter(s) produced by optimizer.evolution.suggest() can violate the parameter's definition. To resolve this issue, I have opened a pull request:
#33
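
One plausible fix (a sketch; the actual pull request may differ) is to round the exponent to the nearest integer before exponentiating, so every output stays on the power-of-base grid:

import numpy as np

class IntExponentParaSketch:
    """Hypothetical stand-in for IntExponentPara's inverse transform."""
    def __init__(self, base):
        self.base = base

    def inverse_transform(self, x):
        # round the exponent first, so 5.4 maps to 2**5 = 32 rather than int(2**5.4) = 42
        return (self.base ** np.round(x)).astype(int)

print(IntExponentParaSketch(2).inverse_transform(np.array([5.4])))  # [32]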

help on optimization restart

Sorry, I know this problem belongs to GPy, but I'm new to GPs and I can't really solve it myself. I get the following error:
I get following error

Training is completed. Best valid loss:2.801e-01
Warning - optimization restart 2/10 failed
Warning - optimization restart 3/10 failed
Warning - optimization restart 4/10 failed
Warning - optimization restart 5/10 failed
Warning - optimization restart 6/10 failed
Warning - optimization restart 7/10 failed
Warning - optimization restart 8/10 failed
Warning - optimization restart 9/10 failed
Warning - optimization restart 10/10 failed

My search space is following:
hps = []
hps.append({'name': 'num_layers', 'type': 'int', 'lb': 3, 'ub': 20})
hps.append({'name': 'num_nodes', 'type': 'int', 'lb': 8, 'ub': 1024})
hps.append({'name': 'learning_rate', 'type': 'pow', 'lb': 1e-4, 'ub': 1e-1, 'base': 10})

Basically, I am searching for the best MLP width, depth, and learning rate.
Any possible solution?

Thank you in advance.

Update PyPI package to version 0.3.5

The newest version of HEBO fixes a lot of dependency issues, for instance with pymoo and pandas, as mentioned in #26 and #42. Running pip install git+https://github.com/huawei-noah/HEBO.git#subdirectory=HEBO fixes the issue; however, I believe a better solution would be to update the PyPI package from 0.3.2 to 0.3.5.

Superfluous LightGBM Dependency Apple Silicon Incompatible

This package requires lightgbm, which I can find no instance of being used in the code. This breaks installation on Apple silicon, where there is no simple way to install LightGBM on Mac M1/M2/M3.

If this unneeded dependency can be removed, I can simply pip install HEBO in my code once more!
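
(For what it's worth, installing the libomp runtime first, e.g. via brew install libomp, often lets pip install lightgbm succeed on Apple silicon, though removing the unused dependency would still be the cleaner fix.)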

Installation issue for Agent on java branch

@AntGro I tried installing Agent from the java branch, but ran into this error:

$ conda create -n agent --file conda-linux-64.lock
Retrieving notices: ...working... done

CondaFileIOError: 'conda-linux-64.lock'. [Errno 2] No such file or directory: 'conda-linux-64.lock'

Switching to master resolved it.

HEBO demo codes fail

Hi,

I installed the HEBO library and tried to run the demo code here:

  1. https://github.com/huawei-noah/HEBO/tree/master/HEBO#demo
  2. https://github.com/huawei-noah/HEBO/tree/master/HEBO#auto-tuning-via-sklearn-estimator

Both begin executing but eventually fail with the error:
TypeError: __init__() got an unexpected keyword argument 'prob_per_variable'

The complete stacktrace for the second demo code (auto-tuning sklearn estimator) is:

Iter 0, best metric: 0.398791  
Iter 1, best metric: 0.492467
Iter 2, best metric: 0.658477
Iter 3, best metric: 0.658477
Iter 4, best metric: 0.658477
Iter 5, best metric: 0.658477
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/redacted_path_1/HEBO-master/HEBO/hebo/sklearn_tuner.py", line 74, in sklearn_tuner
    rec     = opt.suggest()
  File "/redacted_path_1/HEBO-master/HEBO/hebo/optimizers/hebo.py", line 153, in suggest
    rec = opt.optimize(initial_suggest = best_x, fix_input = fix_input).drop_duplicates()
  File "/redacted_path_1/HEBO-master/HEBO/hebo/acq_optimizers/evolution_optimizer.py", line 126, in optimize
    algo      = get_algorithm(self.es, pop_size = self.pop, sampling = init_pop, mutation = mutation, crossover = crossover, repair = self.repair)
  File "/redacted_path_2/anaconda3/lib/python3.7/site-packages/pymoo-0.6.0.dev0-py3.7-linux-x86_64.egg/pymoo/factory.py", line 85, in get_algorithm
    return get_from_list(get_algorithm_options(), name, args, {**d, **kwargs})
  File "/redacted_path_2/anaconda3/lib/python3.7/site-packages/pymoo-0.6.0.dev0-py3.7-linux-x86_64.egg/pymoo/factory.py", line 49, in get_algorithm_options
    from pymoo.algorithms.moo.ctaea import CTAEA
  File "/redacted_path_2/anaconda3/lib/python3.7/site-packages/pymoo-0.6.0.dev0-py3.7-linux-x86_64.egg/pymoo/algorithms/moo/ctaea.py", line 223, in <module>
    class CTAEA(GeneticAlgorithm):
  File "/redacted_path_2/anaconda3/lib/python3.7/site-packages/pymoo-0.6.0.dev0-py3.7-linux-x86_64.egg/pymoo/algorithms/moo/ctaea.py", line 230, in CTAEA
    mutation=PM(eta=20, prob_per_variable=None),
  File "/redacted_path_2/anaconda3/lib/python3.7/site-packages/pymoo-0.6.0.dev0-py3.7-linux-x86_64.egg/pymoo/operators/mutation/pm.py", line 77, in __init__
    super().__init__(prob=prob, **kwargs)
  File "/redacted_path_2/anaconda3/lib/python3.7/site-packages/pymoo-0.6.0.dev0-py3.7-linux-x86_64.egg/pymoo/core/mutation.py", line 29, in __init__
    super().__init__(**kwargs)
  File "/redacted_path_2/anaconda3/lib/python3.7/site-packages/pymoo-0.6.0.dev0-py3.7-linux-x86_64.egg/pymoo/core/mutation.py", line 10, in __init__
    super().__init__(**kwargs)
TypeError: __init__() got an unexpected keyword argument 'prob_per_variable'

Can you provide pointers for fixing this? Thanks!

Issues running the test_nap_hpo.py

a. Please see a screenshot of the changes, where I modified the output of get_hpo_specs, the saved_model_path, and the T value in env_spec. I also changed "param_iter" to 1999, as I finished training with 2000 iterations and I see the file weight_1999 in the directory.

[screenshot]

I saw an error thrown by the check below (7200 % 295 != 0), and I changed T to make it pass:
assert self.params["max_steps"] % gym.spec(self.params['env_id']).max_episode_steps == 0
b. Errors thrown for self.dispatch_seeds[0] being out of index are fixed by:

[screenshot]

Typo in about: Developped should be Developed

In the repository About section:
Bayesian optimisation & Reinforcement Learning library developped by Huawei Noah's Ark Lab
should be
Bayesian optimisation & Reinforcement Learning library developed by Huawei Noah's Ark Lab

Typo in hebo/optimizers/hebo.py: Incorrect Parameter axsi instead of axis

Description

There is a typo in the hebo/optimizers/hebo.py file on line 172 where the parameter axsi is used instead of axis.

Location

File : hebo/optimizers/hebo.py
Line : 172

Current Code

if rec.shape[0] < n_suggestions:
    rand_rec = self.quasi_sample(n_suggestions - rec.shape[0], fix_input)
    rec = pd.concat([rec, rand_rec], axsi=0, ignore_index=True)

Issue

The parameter should be axis, not axsi.

Suggested Fix

Replace axsi with axis:

if rec.shape[0] < n_suggestions:
    rand_rec = self.quasi_sample(n_suggestions - rec.shape[0], fix_input)
    rec = pd.concat([rec, rand_rec], axis=0, ignore_index=True)

Impact

This typo may cause runtime errors when this code path is executed, as pd.concat does not recognize axsi as a valid parameter.

An unexpected bug

When I use hebo.sklearn_tuner to optimize the hyperparameters of XGBoost, I get the error "TypeError: 'float' object cannot be interpreted as an integer". I set a breakpoint inside the function to find the reason: the DataFrame method iloc transforms the data type from int to float, which causes the error.

mcbo test_aig_task.py error

When I try to run the test_aig_task.py file inside the test folder of mcbo, I first encountered a group error. I looked up the corresponding groups and selected one, setting designs_group_id in task_kwargs to 'open_abc_orig'. Are there any other options or alternative solutions?

Secondly, when I ran it again, I encountered a TypeError: __init__() missing 3 required positional arguments: 'obj_dims', 'out_constr_dims', and 'out_upper_constr_vals'. Currently, I'm trying to solve this by providing values for these three arguments. Is this a reasonable approach? If so, how do I know what values are most appropriate?

Lastly, I encountered a PermissionError: [Errno 13] Permission denied: '/root/autodl-tmp/MCBO2/abc'. What path should be filled in abc_release_path.txt?

Problem with pest synthetic problem

I have been trying to use the pest synthetic problem in the MCBO package. If I try to run

from HEBO.MCBO.mcbo import task_factory
from HEBO.MCBO.mcbo.optimizers.bo_builder import BO_ALGOS

task = task_factory(task_name="pest")
search_space = task.get_search_space()

optimizer = BO_ALGOS['CoCaBO'].build_bo(search_space=search_space, n_init=10)

for i in range(100):
    x = optimizer.suggest(1)
    y = task(x)
    optimizer.observe(x, y)
    print(f'Iteration {i + 1:3d}/{100:3d} - f(x) = {y[0, 0]:.3f} - f(x*) = {optimizer.best_y:.3f}')

then it leads to an assertion error on line 101 in pest.py: assert x.ndim == 1 and len(x) == self._n_stages

I can get around the error by manually setting the number of stages with task._n_stages = 24. I noticed that line 108 of pest.py says for i in range(1, n_stages):, which might have something to do with it.

Am I doing something wrong or is there potentially a bug in the pest problem?

Applying FIFO to concurrent trials through Ray Tune

The max_concurrent feature is extremely helpful for speeding up the optimization process through parallelization. The problem is that, to start the next set of trials, it has to wait until ALL the concurrent trials (defined by max_concurrent) finish. This is counterintuitive in my use case, because some hyperparameter combinations take far longer to finish than others, which leaves most of the CPUs idle while one slow trial runs. Does HEBO need all concurrently running trials to finish before suggesting the next combination, or is there a way to make it suggest the next combination as soon as any running trial finishes, so no CPU stays idle?
BTW, I'm using Ray Tune to deploy HEBO.

can only concatenate str (not "NoneType") to str

Dear authors,

When I run the command ./weighted_retraining/scripts/robust_opt/robust_opt_chem.sh, I get the following error. Do you know how to fix it? Thanks.

python ./weighted_retraining/weighted_retraining/robust_opt_scripts/robust_opt_chem.py --seed=3 --gpu --query_budget=500 --retraining_frequency=50 --pretrained_model_file=./weighted_retraining/assets/pretrained_models/chem_vanilla/chem.ckpt --pretrained_model_id vanilla --batch_size 128 --lso_strategy=opt --train_path=weighted_retraining/data/chem/zinc/orig_model/tensors_train --val_path=weighted_retraining/data/chem/zinc/orig_model/tensors_val --vocab_file=weighted_retraining/data/chem/zinc/orig_model/vocab.txt --property_file=weighted_retraining/data/chem/zinc/orig_model/pen_logP_all.pkl --n_retrain_epochs=0.1 --latent_dim 56 --beta_target_pred_loss 10 --target_predictor_hdims [128,128] --metric_loss triplet --metric_loss_kw {'threshold':.1} --beta_metric_loss 1 --beta_final 0.001 --n_init_retrain_epochs=1 --n_best_points=2000 --n_rand_points=8000 --n_inducing_points=500 --samples_per_model 0 --weight_type=rank --rank_weight_k=1e-3 --acq-func-id ExpectedImprovement --acq-func-kwargs {} --acq-func-opt-kwargs {'batch_limit':25} --use_pretrained
r=50 k=1e-3 seed=4
Traceback (most recent call last):
File "./weighted_retraining/weighted_retraining/robust_opt_scripts/robust_opt_chem.py", line 1115, in
main()
File "./weighted_retraining/weighted_retraining/robust_opt_scripts/robust_opt_chem.py", line 525, in main
acq_func_kwargs=args.acq_func_kwargs,
File "./weighted_retraining/weighted_retraining/robust_opt_scripts/robust_opt_chem.py", line 351, in get_path
pretrained_model_id=pretrained_model_id
File "./weighted_retraining/weighted_retraining/robust_opt_scripts/robust_opt_chem.py", line 250, in get_root_path
exp_spec += '-' + METRIC_LOSSES[metric_loss]['exp_metric_id']
TypeError: can only concatenate str (not "NoneType") to str

T-LBO conda environment PackageNotFound errors (Windows)

(base) C:\Users\sterg>git clone https://github.com/huawei-noah/HEBO.git
Cloning into 'HEBO'...
remote: Enumerating objects: 845, done.
remote: Counting objects: 100% (27/27), done.
remote: Compressing objects: 100% (17/17), done.
remote: Total 845 (delta 13), reused 10 (delta 10), pack-reused 818
Receiving objects: 100% (845/845), 10.13 MiB | 3.98 MiB/s, done.
Resolving deltas: 100% (317/317), done.

(base) C:\Users\sterg>cd HEBO

(base) C:\Users\sterg\HEBO>conda env create -f lsbo_metric_env.yml

EnvironmentFileNotFound: 'C:\Users\sterg\HEBO\lsbo_metric_env.yml' file not found


(base) C:\Users\sterg\HEBO>cd T-LBO

(base) C:\Users\sterg\HEBO\T-LBO>conda env create -f lsbo_metric_env.yml
Collecting package metadata (repodata.json): done
Solving environment: failed

ResolvePackageNotFound:
  - certifi==2020.12.5=py37h06a4308_0
  - libpng==1.6.37=hbc83047_0
  - tensorflow-gpu==2.2.0=h0d30ee6_0
  - libgfortran-ng==7.3.0=hdf63c60_0
  - numpy-base==1.19.1=py37hfa32c7d_0
  - expat==2.2.10=he6710b0_2
  - tensorflow==2.2.0=gpu_py37h1a511ff_0
  - setuptools==50.3.0=py37hb0f4dca_1
  - sqlite==3.33.0=h62c20be_0
  - libprotobuf==3.13.0.1=hd408876_0
  - lcms2==2.11=h396b838_0
  - ninja==1.10.1=py37hfd86e86_0
  - pandas==1.1.3=py37he6710b0_0
  - bzip2==1.0.8=h7b6447c_0
  - wrapt==1.12.1=py37h7b6447c_1
  - yaml==0.2.5=h7b6447c_0
  - grpcio==1.31.0=py37hf8bcb03_0
  - pyyaml==5.3.1=py37h7b6447c_1
  - cairo==1.14.12=h8948797_3
  - fontconfig==2.13.0=h9420a91_0
  - cudatoolkit==10.1.243=h6bb024c_0
  - multidict==4.7.6=py37h7b6447c_1
  - sip==4.19.8=py37hf484d3e_0
  - mkl_fft==1.2.0=py37h23d657b_0
  - zstd==1.4.5=h9ceee32_0
  - kiwisolver==1.2.0=py37hfd86e86_0
  - pillow==8.0.0=py37h9a89aac_0
  - c-ares==1.16.1=h7b6447c_0
  - pytorch==1.7.1=py3.7_cuda10.1.243_cudnn7.6.3_0
  - mkl_random==1.1.1=py37h0573a6f_0
  - python==3.7.9=h7579374_0
  - libxml2==2.9.10=he19cac6_1
  - protobuf==3.13.0.1=py37he6710b0_1
  - tensorflow-base==2.2.0=gpu_py37h8a81be8_0
  - openssl==1.1.1k=h27cfd23_0
  - xz==5.2.5=h7b6447c_0
  - tornado==6.0.4=py37h7b6447c_1
  - rdkit==2020.03.3.0=py37hc20afe1_1
  - regex==2020.10.15=py37h7b6447c_0
  - libuuid==1.0.3=h1bed415_2
  - lz4-c==1.9.2=heb0550a_3
  - libxcb==1.14=h7b6447c_0
  - libboost==1.67.0=h46d08c1_4
  - brotlipy==0.7.0=py37h7b6447c_1000
  - cupti==10.1.168=0
  - mkl-service==2.3.0=py37he904b0f_0
  - dbus==1.13.18=hb2f20db_0
  - scikit-learn==0.23.2=py37h0573a6f_0
  - freetype==2.10.3=h5ab3b9f_0
  - libedit==3.1.20191231=h14c3975_1
  - libtiff==4.1.0=h2733197_1
  - pyqt==5.9.2=py37h05f1152_2
  - matplotlib-base==3.3.1=py37h817c723_0
  - ca-certificates==2021.4.13=h06a4308_1
  - ncurses==6.2=he6710b0_1
  - libstdcxx-ng==9.1.0=hdf63c60_0
  - libgcc-ng==9.1.0=hdf63c60_0
  - zlib==1.2.11=h7b6447c_3
  - glib==2.66.1=h92f7085_0
  - hdf5==1.10.6=hb1b8bf9_0
  - cffi==1.14.3=py37he30daa8_0
  - qt==5.9.7=h5867ecd_1
  - icu==58.2=he6710b0_3
  - gstreamer==1.14.0=hb31296c_0
  - h5py==2.10.0=py37hd6299e0_1
  - tk==8.6.10=hbc83047_0
  - jpeg==9b=h024ee3a_2
  - pixman==0.40.0=h7b6447c_0
  - cryptography==3.1.1=py37h1ba5d50_0
  - libffi==3.3=he6710b0_2
  - ld_impl_linux-64==2.33.1=h53a641e_7
  - libuv==1.40.0=h7b6447c_0
  - readline==8.0=h7b6447c_0
  - py-boost==1.67.0=py37h04863e7_4
  - numpy==1.19.1=py37hbc911f0_0
  - pcre==8.44=he6710b0_0
  - gst-plugins-base==1.14.0=hbbd80ab_1
  - aiohttp==3.6.3=py37h7b6447c_0

Independent features for hebo: "EmbeddingAlignmentCells(EACs)"

Impressive repo:) Thanks to all the contributors!!

I notice that EACs only support tree-structured features. But if I have some features that are independent of the stages, the property EACs.subspaces will ignore those independent features. Maybe the EACs object needs some modifications to handle these independent features.

Looking forward to your response.

Support for fixed parameters

Is it possible to have HEBO search for the optimum over a design space where a parameter is defined in the regular fashion (e.g., as a real or integer), but also temporarily (i.e., during a single optimization session) constrain the search to a single value of that parameter, searching only the subspace defined by the remaining unconstrained parameters?

I.e., something similar to Optuna's PartialFixedSampler

Why this is useful

Context

Generally, it is useful to be able to run several optimization sessions and use the results of trials from previous sessions to improve convergence in the current session. As far as I can understand from the docs, with HEBO one can run a bunch of trials in one or more sessions and then use observe API calls to feed the previous results to the Bayesian model before exploring further with suggest in the current session.

Scenario no. 1 - Accelerate optimization by reducing the search space

Now, as explained in the Optuna docs linked above and in the related issue, after running a bunch of trials one might decide upon a particular value of a parameter, and want to prevent the optimizer from investing more time exploring values of that parameter that are clearly not going to yield better results, while still taking into account the results obtained with other values of that parameter (along the dimensions of the other unbound params, the trained cost predictor might still provide valuable insights).

Scenario no. 2 - Transfer Learning

If this is implemented, one could do transfer learning, in a fashion similar to the one proposed, e.g., by SMAC3 (they call this feature 'Optimization across Instances') or OpenBox, i.e., to reuse knowledge about high yield sub-regions from one instance (dataset/task) to another.

Potential Solution

I'm thinking that one could simply change the bounds of that parameter every time a new optimization session is started. However, I'm not quite sure that the observe call will accept values that are out of bounds, nor, even if it doesn't crash, that the underlying Bayesian model is trained correctly.
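
Worth noting: the tracebacks elsewhere on this page show that HEBO's suggest() already accepts a fix_input dict. A sketch of how that might cover this use case, assuming fix_input pins the given parameters during suggestion:

from hebo.design_space.design_space import DesignSpace
from hebo.optimizers.hebo import HEBO

space = DesignSpace().parse([
    {'name': 'lr',     'type': 'num', 'lb': 1e-4, 'ub': 1e-1},
    {'name': 'layers', 'type': 'int', 'lb': 1,    'ub': 8},
])
opt = HEBO(space)

# pin 'layers' to 4 for this session; only 'lr' is effectively searched
rec = opt.suggest(n_suggestions=1, fix_input={'layers': 4})

Whether previously observed points with other values of the pinned parameter still inform the model is exactly the open question here.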

pymoo has no module 'algorithms.so_genetic_algorithm'

When running the example code for sklearn tuner, I get the following error message

ModuleNotFoundError: No module named 'pymoo.algorithms.so_genetic_algorithm'

The line of code in question is from pymoo.algorithms.so_genetic_algorithm import GA within evolution_optimizer.py

I can reproduce it standalone as well (i.e. installing pymoo and trying to run that import), and I don't see this algorithm in pymoo's API reference.

Looks like this comes from a breaking change in pymoo 0.5.0, described in their release notes: "The package structure has been modified to distinguish between single- and multi-objective optimization more clearly."

Based on their updated API for version 0.5.0, I believe (but am not sure) that the genetic-algorithm import needs to be changed to:

from pymoo.algorithms.soo.nonconvex.ga import GA

sklearn_tuner.py example failing with pymoo version 0.5.0

Using the latest version of pymoo==0.5.0 the sklearn_tuner.py example fails with the error:

ModuleNotFoundError: No module named 'pymoo.algorithms.so_genetic_algorithm'

It would be great if we could either make the example compatible with pymoo==0.5.0 or update the requirements and install instructions to specify pymoo==0.4.2 as the required version.

The example runs with pymoo==0.4.2, although it breaks with a different error after 5 iterations:

/Users/~/opt/miniconda3/envs/hebo/bin/python /Users/ryan_rhys/ml_physics/HEBO/HEBO/hebo/sklearn_tuner.py
Iter 0, best metric: 0.389011
Iter 1, best metric: 0.496764
Iter 2, best metric: 0.649803
Iter 3, best metric: 0.649803
Iter 4, best metric: 0.649803
Iter 5, best metric: 0.649803
Traceback (most recent call last):
  File "/Users/~/opt/miniconda3/envs/hebo/lib/python3.8/site-packages/hebo/acq_optimizers/evolution_optimizer.py", line 119, in optimize
    res   = minimize(prob, algo, ('n_gen', self.iter), verbose = self.verbose)
  File "/Users/~/opt/miniconda3/envs/hebo/lib/python3.8/site-packages/pymoo/optimize.py", line 85, in minimize
    res = algorithm.solve()
  File "/Users/~/opt/miniconda3/envs/hebo/lib/python3.8/site-packages/pymoo/model/algorithm.py", line 226, in solve
    self._solve(self.problem)
  File "/Users/~/opt/miniconda3/envs/hebo/lib/python3.8/site-packages/pymoo/model/algorithm.py", line 321, in _solve
    self.next()
  File "/Users/~/opt/miniconda3/envs/hebo/lib/python3.8/site-packages/pymoo/model/algorithm.py", line 246, in next
    self._next()
  File "/Users/~/opt/miniconda3/envs/hebo/lib/python3.8/site-packages/pymoo/algorithms/genetic_algorithm.py", line 93, in _next
    self.off = self.mating.do(self.problem, self.pop, self.n_offsprings, algorithm=self)
  File "/Users/~/opt/miniconda3/envs/hebo/lib/python3.8/site-packages/pymoo/model/infill.py", line 40, in do
    _off = self.eliminate_duplicates.do(_off, pop, off)
  File "/Users/~/opt/miniconda3/envs/hebo/lib/python3.8/site-packages/pymoo/model/duplicate.py", line 26, in do
    pop = pop[~self._do(pop, None, np.full(len(pop), False))]
  File "/Users/~/opt/miniconda3/envs/hebo/lib/python3.8/site-packages/pymoo/model/duplicate.py", line 75, in _do
    D = self.calc_dist(pop, other)
  File "/Users/~/opt/miniconda3/envs/hebo/lib/python3.8/site-packages/pymoo/model/duplicate.py", line 66, in calc_dist
    D = cdist(X, X)
  File "/Users/~/opt/miniconda3/envs/hebo/lib/python3.8/site-packages/pymoo/util/misc.py", line 90, in cdist
    return scipy.spatial.distance.cdist(A, B, **kwargs)
  File "/Users/~/opt/miniconda3/envs/hebo/lib/python3.8/site-packages/scipy/spatial/distance.py", line 2954, in cdist
    return cdist_fn(XA, XB, out=out, **kwargs)
ValueError: Unsupported dtype object

Support for conditional/hierarchical design spaces

Can one specify a conditional/hierarchical search space to HEBO? I.e., something similar to SMAC3?

E.g., sample the number of convolutional filters in the second layer of a neural net only if we decide (via another parameter) to have a network with at least two layers.

I'm thinking this can be somewhat circumvented by returning a high cost value for infeasible combinations, but I imagine this is a suboptimal approach that might hurt optimization performance: some optima might lie on the edge of feasible regions, and a cost estimator with some sort of smoothness prior (which they usually have) has a chance of assigning unfaithfully high cost values near the infeasible configurations, at least initially (the a priori probability of a kink in the error surface is generally lower).

Some optimizers address this by training a separate model, a feasibility predictor (which has the advantage of being able to work with unknown feasibility constraints).

So how should one deal with this in HEBO?

ValueError: NaN in distribution

Hi, thanks for this repository! So far it works quite well, but now I suddenly encountered a weird error after 11 optimization steps of non-batched HEBO:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/tmp/ipykernel_2773121/4102601230.py in <module>
     35 
     36 for i in range(opt_steps):
---> 37     rec = opt.suggest()
     38     if "bs" in rec:
     39         rec["bs"] = 2 ** rec["bs"]

~/.local/lib/python3.8/site-packages/hebo/optimizers/hebo.py in suggest(self, n_suggestions, fix_input)
    151             sig = Sigma(model, linear_a = -1.)
    152             opt = EvolutionOpt(self.space, acq, pop = 100, iters = 100, verbose = False, es=self.es)
--> 153             rec = opt.optimize(initial_suggest = best_x, fix_input = fix_input).drop_duplicates()
    154             rec = rec[self.check_unique(rec)]
    155 

~/.local/lib/python3.8/site-packages/hebo/acq_optimizers/evolution_optimizer.py in optimize(self, initial_suggest, fix_input, return_pop)
    125         crossover = self.get_crossover()
    126         algo      = get_algorithm(self.es, pop_size = self.pop, sampling = init_pop, mutation = mutation, crossover = crossover, repair = self.repair)
--> 127         res       = minimize(prob, algo, ('n_gen', self.iter), verbose = self.verbose)
    128         if res.X is not None and not return_pop:
    129             opt_x = res.X.reshape(-1, len(lb)).astype(float)

~/.local/lib/python3.8/site-packages/pymoo/optimize.py in minimize(problem, algorithm, termination, copy_algorithm, copy_termination, **kwargs)
     81 
     82     # actually execute the algorithm
---> 83     res = algorithm.run()
     84 
     85     # store the deep copied algorithm in the result object

~/.local/lib/python3.8/site-packages/pymoo/core/algorithm.py in run(self)
    211         # while termination criterion not fulfilled
    212         while self.has_next():
--> 213             self.next()
    214 
    215         # create the result object to be returned

~/.local/lib/python3.8/site-packages/pymoo/core/algorithm.py in next(self)
    231         # call the advance with them after evaluation
    232         if infills is not None:
--> 233             self.evaluator.eval(self.problem, infills, algorithm=self)
    234             self.advance(infills=infills)
    235 

~/.local/lib/python3.8/site-packages/pymoo/core/evaluator.py in eval(self, problem, pop, skip_already_evaluated, evaluate_values_of, count_evals, **kwargs)
     93         # actually evaluate all solutions using the function that can be overwritten
     94         if len(I) > 0:
---> 95             self._eval(problem, pop[I], evaluate_values_of=evaluate_values_of, **kwargs)
     96 
     97             # set the feasibility attribute if cv exists

~/.local/lib/python3.8/site-packages/pymoo/core/evaluator.py in _eval(self, problem, pop, evaluate_values_of, **kwargs)
    110         evaluate_values_of = self.evaluate_values_of if evaluate_values_of is None else evaluate_values_of
    111 
--> 112         out = problem.evaluate(pop.get("X"),
    113                                return_values_of=evaluate_values_of,
    114                                return_as_dictionary=True,

~/.local/lib/python3.8/site-packages/pymoo/core/problem.py in evaluate(self, X, return_values_of, return_as_dictionary, *args, **kwargs)
    122 
    123         # do the actual evaluation for the given problem - calls in _evaluate method internally
--> 124         self.do(X, out, *args, **kwargs)
    125 
    126         # make sure the array is 2d before doing the shape check

~/.local/lib/python3.8/site-packages/pymoo/core/problem.py in do(self, X, out, *args, **kwargs)
    160 
    161     def do(self, X, out, *args, **kwargs):
--> 162         self._evaluate(X, out, *args, **kwargs)
    163         out_to_2d_ndarray(out)
    164 

~/.local/lib/python3.8/site-packages/hebo/acq_optimizers/evolution_optimizer.py in _evaluate(self, x, out, *args, **kwargs)
     46 
     47         with torch.no_grad():
---> 48             acq_eval = self.acq(xcont, xenum).numpy().reshape(num_x, self.acq.num_obj + self.acq.num_constr)
     49             out['F'] = acq_eval[:, :self.acq.num_obj]
     50 

~/.local/lib/python3.8/site-packages/hebo/acquisitions/acq.py in __call__(self, x, xe)
     37 
     38     def __call__(self, x : Tensor,  xe : Tensor):
---> 39         return self.eval(x, xe)
     40 
     41 class SingleObjectiveAcq(Acquisition):

~/.local/lib/python3.8/site-packages/hebo/acquisitions/acq.py in eval(self, x, xe)
    155             normed    = ((self.tau - self.eps - py - noise * torch.randn(py.shape)) / ps)
    156             dist      = Normal(0., 1.)
--> 157             log_phi   = dist.log_prob(normed)
    158             Phi       = dist.cdf(normed)
    159             PI        = Phi

~/.local/lib/python3.8/site-packages/torch/distributions/normal.py in log_prob(self, value)
     71     def log_prob(self, value):
     72         if self._validate_args:
---> 73             self._validate_sample(value)
     74         # compute the variance
     75         var = (self.scale ** 2)

~/.local/lib/python3.8/site-packages/torch/distributions/distribution.py in _validate_sample(self, value)
    286         valid = support.check(value)
    287         if not valid.all():
--> 288             raise ValueError(
    289                 "Expected value argument "
    290                 f"({type(value).__name__} of shape {tuple(value.shape)}) "

ValueError: Expected value argument (Tensor of shape (100, 1)) to be within the support (Real()) of the distribution Normal(loc: 0.0, scale: 1.0), but found invalid values:
tensor([[ -1.1836],
        [ -1.2862],
        [-11.6360],
        [-11.3412],
        [  0.3811],
        [ -2.0235],
        [ -1.7288],
        [ -8.3472],
        [-10.1714],
        [ -2.6084],
        [ -0.8098],
        [ -0.9687],
        [ -9.0626],
        [ -2.2273],
        [ -9.0942],
        [ -1.6956],
        [ -6.6197],
        [ -9.3882],
        [ -6.1594],
        [ -9.2895],
        [ -1.7074],
        [  0.8382],
        [-14.6693],
        [ -0.8303],
        [-10.2741],
        [  0.2808],
        [ -9.3681],
        [ -0.6729],
        [ -2.0288],
        [ -1.4389],
        [ -7.1975],
        [-11.5732],
        [-10.2751],
        [ -1.3800],
        [ -1.9773],
        [ -1.4668],
        [ -9.7166],
        [ -8.3093],
        [-15.5914],
        [ -0.0808],
        [  0.3732],
        [-16.2714],
        [ -2.3120],
        [ -8.7503],
        [ -1.6276],
        [     nan],
        [-15.3692],
        [ -9.1615],
        [ -9.8093],
        [ -2.0716],
        [ -1.9259],
        [  0.9543],
        [ -8.1521],
        [ -2.5709],
        [ -1.6153],
        [-10.7236],
        [ -0.0763],
        [  0.0543],
        [ -7.2755],
        [-10.6411],
        [ -7.9253],
        [-19.4996],
        [ -2.0001],
        [-11.7616],
        [-11.0187],
        [-12.0727],
        [ -1.3243],
        [-11.2528],
        [ -1.5527],
        [ -0.9219],
        [ -1.0130],
        [-10.1825],
        [-18.3420],
        [-11.1005],
        [ -8.5818],
        [-11.1588],
        [ -8.8115],
        [ -1.0410],
        [-15.2722],
        [ -1.8399],
        [ -1.0827],
        [ -1.0277],
        [ -6.4768],
        [ -8.3902],
        [ -0.9513],
        [ -1.3429],
        [ -1.0889],
        [ -7.2952],
        [ -7.8548],
        [ -0.0231],
        [ -7.1898],
        [-20.4194],
        [ -1.2503],
        [-19.6157],
        [ -0.3398],
        [-15.7221],
        [-10.3210],
        [ -9.5764],
        [ -0.2335],
        [ -0.3788]])

Seems like there is a NaN in some distribution of HEBO. But my input parameters (opt.X) and losses (opt.y) are never NaN.
This is the design space I'm using:

space = DesignSpace().parse([{'name': 'lr', 'type' : 'num', 'lb' : 0.00005, 'ub' : 0.1},
                                 {'name': 'n_estimators', 'type' : 'int', 'lb' : 1, 'ub' : 20},  # multiplied by 10
                                 {'name': 'max_depth', 'type' : 'int', 'lb' : 1, 'ub' : 10},
                                 {'name': 'subsample', 'type' : 'num', 'lb' : 0.5, 'ub' : 0.99},
                                 {'name': 'colsample_bytree', 'type' : 'num', 'lb' : 0.5, 'ub' : 0.99},
                                 {'name': 'gamma', 'type' : 'num', 'lb' : 0.01, 'ub' : 10.0},
                                 {'name': 'min_child_weight', 'type' : 'int', 'lb' : 1, 'ub' : 10},
                                 
                                 {'name': 'fill_type', 'type' : 'cat', 'categories' : ['median', 'pat_median','pat_ema']},
                                 {'name': 'flat_block_size', 'type' : 'int', 'lb' : 1, 'ub' : 1}
                                ])
    
opt = HEBO(space)

I already commented out flat_block_size as I thought that maybe it is a problem if lb == ub, but it still crashes.

Any ideas on how I can debug this?
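
A first sanity check worth trying (a sketch, not a confirmed diagnosis): verify that everything passed to observe() is finite, since an inf would pass a NaN-only check yet still poison the GP fit:

import numpy as np

# opt is the HEBO instance above; opt.X / opt.y hold the observed data
assert np.isfinite(opt.y).all(), "non-finite objective values in opt.y"
assert opt.X.select_dtypes(include='number').notna().all().all(), "NaN in opt.X"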

Looking forward to the AIRBO update

First of all, thank you very much to the researchers for this open-source work!
In my recent reading I came across your paper "Efficient Robust Bayesian Optimization for Arbitrary Uncertain Inputs", which I found excellent. While studying it, I noticed that the paper states the code will be open-sourced in the HEBO project, but, perhaps due to time constraints, that part has not been released yet.
I am very much looking forward to this update!
And Happy New Year to you all!

Definition of "compositional" in "compositional optimizers"

At one point, I may have understood this, but I find myself often wondering and asking again: what does "compositional" in "compositional optimizer" refer to? Part of my confusion probably stems from my materials informatics background, where composition often refers to the chemical make-up (i.e. chemical formula) of a particular compound. In other cases, it just implies that the contribution of individual components sums to one.

https://github.com/huawei-noah/HEBO/tree/master/CompBO

missing library_file asap7.lib

We are reproducing the results of HEBO/BOiLS at master. An AssertionError about the missing library file asap7.lib is raised when running the following command:

python ./core/algos/bo/boils/main_multi_boils.py --designs_group_id log2 --n_parallel $n_parallel 1 \
    --seq_length 20 --mapping fpga --action_space_id extended --ref_abc_seq resyn2 \
    --n_total_evals 200 --n_initial 20 --device 0 --lut_inputs 4 --use_yosys 1 \
    --standardise --ard --acq ei --kernel_type ssk \
    --length_init_discrete_factor .666 --failtol 40 \
    --objective both \
    --seed 0
Could you please provide it? Thanks.
