
bayesian-optimization / BayesianOptimization

7.5K stars · 133 watchers · 1.5K forks · 31.88 MB

A Python implementation of global optimization with Gaussian processes.

Home Page: https://bayesian-optimization.github.io/BayesianOptimization/index.html

License: MIT License

Python 98.64% Makefile 0.59% Batchfile 0.77%
optimization gaussian-processes bayesian-optimization python simple


bayesianoptimization's Issues

ValueError: Invalid parameter kappa

I am running the same example script and getting the following error:
ValueError: Invalid parameter kappa for estimator GaussianProcess. Check the list of available parameters with estimator.get_params().keys().

Is there a way to cache results?

Is there a way to run the optimization for N iterations, save the results, and continue from the same point next time?

Or maybe run the optimization on different computers, combine the results, and continue from this combined set of data points?
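
One possible workaround, sketched below under the assumption that the installed version exposes a res list of evaluated points and a register() method (both exist in newer releases of the package; older versions keep the history under res['all'] instead):

import json

def save_observations(optimizer, path):
    # optimizer.res is assumed to be a list of {"target": ..., "params": {...}}
    # records (the layout of newer bayes_opt releases; older ones use res['all']).
    records = [{"target": float(r["target"]),
                "params": {k: float(v) for k, v in r["params"].items()}}
               for r in optimizer.res]
    with open(path, "w") as f:
        json.dump(records, f)

def replay_observations(optimizer, path):
    # Re-register points evaluated earlier (possibly on other machines) so the
    # GP continues from the combined data instead of starting from scratch.
    with open(path) as f:
        for record in json.load(f):
            optimizer.register(params=record["params"], target=record["target"])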

benchmark comparison with sigopt and hyperopt?

Does anyone know of a published comparison between the various hyper-parameter auto-tuning optimizers, including SigOpt, hyperopt, spearmint, moe, BayesianOptimization, pybo and gpyopt?

Hyperparameter optimization

Hi,

Just a quick question: I couldn't immediately spot in the code whether the Gaussian process in this library optimizes its own hyperparameters. Do I have to pass in the length scale that suits my problem as an argument, or will it do Bayesian hyperparameter optimization here too?

Thanks!

-Reinier
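
For reference, a sketch of how kernel settings can be passed in, assuming the scikit-learn 0.18+ GaussianProcessRegressor backend quoted in the noisy-objective issue further down this page; the objective and numbers here are purely illustrative:

from bayes_opt import BayesianOptimization
from sklearn.gaussian_process.kernels import Matern

def black_box(x):
    # Hypothetical objective, for illustration only.
    return -(x - 2.0) ** 2

bo = BayesianOptimization(black_box, {'x': (-10, 10)})

# Extra keyword arguments to maximize() are forwarded to the underlying GP via
# set_params(). With default kernel bounds, sklearn re-fits the length scale by
# maximizing the marginal likelihood on every GP fit; pass
# length_scale_bounds='fixed' to the kernel to pin it instead.
gp_params = {'kernel': Matern(nu=2.5, length_scale=1.0)}
bo.maximize(init_points=5, n_iter=15, **gp_params)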

scipy.optimize.OptimizeResult is not always an array

Hello,

Thank you

First of all, I want to extend many (many) thanks for putting this wonderful package and the great tutorial together.
I am constructing a small API around this package to make it easy to tune machine learning hyperparameters.

Summary

When invoking BayesianOptimization.maximize(init_points=init_points, n_iter=0), I have noticed that helpers.acq_max() can fail. In particular, the fun attribute of the scipy.optimize.OptimizeResult object can be a scalar while the code expects an array. As a result, helpers.acq_max() can fail on these lines of code:

        # Store it if better than previous minimum(maximum).
        if max_acq is None or -res.fun[0] >= max_acq:
            x_max = res.x
            max_acq = -res.fun[0]

Symptom details

bo  = BayesianOptimization(f=obj_fnc, pbounds=hp_range)
bo.maximize(init_points=init_points, n_iter=0)

(...)

/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/bayes_opt/bayesian_optimization.py in maximize(self, init_points, n_iter, acq, kappa, xi, **gp_params)
    262                         gp=self.gp,
    263                         y_max=y_max,
--> 264                         bounds=self.bounds)
    265 
    266         # Print new header

/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/bayes_opt/helpers.py in acq_max(ac, gp, y_max, bounds)
     53 
     54         # Store it if better than previous minimum(maximum).
---> 55         if max_acq is None or -res.fun[0] >= max_acq:
     56             x_max = res.x
     57             max_acq = -res.fun[0]

IndexError: too many indices for array

Fix proposal

I was able to solve the issue by cloning this repo, looking into helpers.acq_max(), scipy.optimize.minimize and scipy.optimize.OptimizeResult, and replacing -res.fun[0] with -res.fun.min().

Here is the fixed code:

        # Store it if better than previous minimum(maximum).
        if max_acq is None or -res.fun.min() >= max_acq:
            x_max = res.x
            max_acq = -res.fun.min()

Once these modifications were made (and I changed the imports in my code to use the modified bayes_opt package), the initialization step was able to complete successfully:

Initialization
-------------------------------------------------------------------------------------------------------------------
 Step |   Time |      Value |   acc_improvement_factor |   batch_size |   lr_decay_factor |   lr_init |    window | 
    1 | 01m20s |    0.61373 |                   1.0335 |     249.4912 |            0.9903 |    0.0634 |   17.1281 | 
    2 | 01m22s |    0.37705 |                   1.0155 |     180.6748 |            0.9961 |    0.0647 |   14.6763 | 
Bayesian Optimization
-------------------------------------------------------------------------------------------------------------------
 Step |   Time |      Value |   acc_improvement_factor |   batch_size |   lr_decay_factor |   lr_init |    window | 

Disclaimer

I am new to Bayesian optimization, scipy.optimize.minimize and scipy.optimize.OptimizeResult, so the fix suggestion might not be correct. Feel free to close this issue as 'user error' if you feel my use case should not occur under normal circumstances (which would mean I am doing something wrong).

Hit me up if you have any questions or need more details.

Thanks again!

Too many indices for array

Hi,
I"m not sure if it is this package or sklearn problem, but I'm getting this type of error when I use BayesianOptimization with XGBRegressor:

/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/gaussian_process/gpr.py:427: UserWarning: fmin_l_bfgs_b terminated abnormally with the  state: {'warnflag': 2, 'task': 'ABNORMAL_TERMINATION_IN_LNSRCH', 'grad': array([-0.00010668]), 'nit': 3, 'funcalls': 48}
  " state: %s" % convergence_dict)
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/gaussian_process/gpr.py:427: UserWarning: fmin_l_bfgs_b terminated abnormally with the  state: {'warnflag': 2, 'task': 'ABNORMAL_TERMINATION_IN_LNSRCH', 'grad': array([ -4.25553349e-05]), 'nit': 5, 'funcalls': 52}
  " state: %s" % convergence_dict)
Traceback (most recent call last):
  File "./bopt.py", line 119, in <module>
    mbo.maximize()
  File "build/bdist.macosx-10.10-x86_64/egg/bayes_opt/bayesian_optimization.py", line 245, in maximize
  File "build/bdist.macosx-10.10-x86_64/egg/bayes_opt/helpers.py", line 35, in acq_max

I'm not sure which part should be fixed, but I'll be interested to know your feedback.
Thanks,
Valentin.

Deprecation Warning from scikit-learn

Hi,
I'm getting this deprecation warning from the example code. It still works for now but just wanted to bring it to your attention.

Also I'm curious what the solution is, I tried solving the issue but don't have enough programming chops to understand where the fix should be applied.

Error and full stack trace below

c:\Miniconda3\envs\py27\lib\site-packages\sklearn\utils\validation.py:386: DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17 and will raise ValueError in 0.19. Reshape your data either using X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1) if it contains a single sample.
  DeprecationWarning)

Full Stack Trace

  c:\users\canyon\desktop\test\usage.py(28)<module>()
     26 # we let the algorithm do its magic by calling the maximize()
     27 # method.
---> 28 bo.maximize(init_points=15, n_iter=25)
     29
     30 # The output values can be accessed with self.res

  c:\users\canyon\desktop\test\build\bdist.win-amd64\egg\bayes_opt\bayesian_optimization.py(297)maximize()

  c:\users\canyon\desktop\test\build\bdist.win-amd64\egg\bayes_opt\bayesian_optimization.py(44)acq_max()

  c:\miniconda3\envs\py27\lib\site-packages\scipy\optimize\_minimize.py(444)minimize()
    442     elif meth == 'l-bfgs-b':
    443         return _minimize_lbfgsb(fun, x0, args, jac, bounds,
--> 444                                 callback=callback, **options)
    445     elif meth == 'tnc':
    446         return _minimize_tnc(fun, x0, args, jac, bounds, callback=callback,

  c:\miniconda3\envs\py27\lib\site-packages\scipy\optimize\lbfgsb.py(320)_minimize_lbfgsb()
    318                 # minimization routine wants f and g at the current x
    319                 # Overwrite f and g:
--> 320                 f, g = func_and_grad(x)
    321         elif task_str.startswith(b'NEW_X'):
    322             # new iteration

  c:\miniconda3\envs\py27\lib\site-packages\scipy\optimize\lbfgsb.py(266)func_and_grad()
    264     if jac is None:
    265         def func_and_grad(x):
--> 266             f = fun(x, *args)
    267             g = _approx_fprime_helper(x, fun, epsilon, args=args, f0=f)
    268             return f, g

  c:\miniconda3\envs\py27\lib\site-packages\scipy\optimize\optimize.py(285)function_wrapper()
    283     def function_wrapper(*wrapper_args):
    284         ncalls[0] += 1
--> 285         return function(*(wrapper_args + args))
    286
    287     return ncalls, function_wrapper

  c:\users\canyon\desktop\test\build\bdist.win-amd64\egg\bayes_opt\bayesian_optimization.py(44)<lambda>()

  c:\users\canyon\desktop\test\build\bdist.win-amd64\egg\bayes_opt\helpers.py(34)EI()

  c:\miniconda3\envs\py27\lib\site-packages\sklearn\gaussian_process\gaussian_process.py(416)predict()
    414
    415         # Check input shapes
--> 416         X = check_array(X)
    417         n_eval, _ = X.shape
    418         n_samples, n_features = self.X.shape

> c:\miniconda3\envs\py27\lib\site-packages\sklearn\utils\validation.py(389)check_array()
    387                 import ipdb
    388                 ipdb.set_trace()
--> 389             array = np.atleast_2d(array)
    390             # To ensure that array flags are maintained
    391             array = np.array(array, dtype=dtype, order=order, copy=copy)

No ../input/train.csv to run the XGBoost example

We want to try the XGBoost code in the examples, but it fails because ../input/train.csv is missing.

import pandas as pd

def prepare_data():
    train = pd.read_csv('../input/train.csv')
    categorical_columns = train.select_dtypes(include=['object']).columns

It would be great to have more instructions on how to make it work.

Problem to define objective function in numpy array

Hi!
I have some real-valued inputs (in a numpy array) and the corresponding observations from a black-box function (also in numpy). How can I pass these to BayesianOptimization in the appropriate form? In the tutorial notebook there is an analytical function, which I think is only for illustration purposes.

Thanks for help!
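
A sketch of one way to seed the optimizer with existing numpy observations, using the initialize() API exercised in the "Bug in initialize function" issue further down (the bounds and values here are made up; note that issue's warning about key ordering):

import numpy as np
from bayes_opt import BayesianOptimization

X_obs = np.array([0.5, 1.7, 3.2])       # previously sampled inputs
y_obs = np.array([0.12, 0.91, 0.45])    # observed black-box outputs

def black_box(x):
    # Placeholder: the real (expensive) experiment would be evaluated here
    # whenever the optimizer proposes a new point.
    raise NotImplementedError("evaluate the real system at x")

bo = BayesianOptimization(black_box, {'x': (0.0, 5.0)})

# Old-style initialize(): a dict keyed by 'target' and by each parameter name.
bo.initialize({'target': y_obs.tolist(), 'x': X_obs.tolist()})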

Exception: Bad point. Try increasing theta0.

I had the optimization running for about 600 iterations when suddenly this exception happened in bo.maximize():


Exception Traceback (most recent call last)
in ()
5 sys.stdout.flush()
6
----> 7 bo.maximize(n_iter=1)
8
9 plt.hist(bo.res["all"]["values"], bins=50)

D:...\pyenv\lib\site-packages\bayes_opt\bayesian_optimization.py in maximize(self, init_points, restarts, n_iter, acq, **gp_params)
292 # Find unique rows of X to avoid GP from breaking
293 ur = unique_rows(self.X)
--> 294 gp.fit(self.X[ur], self.Y[ur])
295
296 # Finding argmax of the acquisition function.

D:...\pyenv\lib\site-packages\sklearn\gaussian_process\gaussian_process.py in fit(self, X, y)
352 self.reduced_likelihood_function()
353 if np.isinf(self.reduced_likelihood_function_value_):
--> 354 raise Exception("Bad point. Try increasing theta0.")
355
356 self.beta = par['beta']

Exception: Bad point. Try increasing theta0.

Discrete values for variables

I'm trying to optimize a system whose target function (unknown) has 5 parameters/variables, where one of them can only have discrete values, e.g. the range for this parameter is 0-10 but the valid values are only 0, 1, 5, and 10.

The problem is that the Bayesian framework explores/exploits the optimization space assigning continuous values to this parameter, that is, the utility function returns any value in the range 0-10.

I'm trying to fix this but I don't know how to do it. Any ideas/suggestions?

Thanks in advance.
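
A common workaround (not a package feature) is to snap the continuous suggestion to the nearest allowed value inside the objective wrapper; a sketch, with evaluate() standing in for the real, unknown target:

import numpy as np
from bayes_opt import BayesianOptimization

ALLOWED = np.array([0, 1, 5, 10])   # valid values for the discrete parameter

def evaluate(p_disc, p_cont):
    # Hypothetical stand-in for the real (unknown) target function.
    return -(p_disc - 5) ** 2 - (p_cont - 0.3) ** 2

def wrapped(p_disc, p_cont):
    # Snap the continuous suggestion to the nearest allowed value before
    # evaluating; the GP itself still treats the parameter as continuous.
    nearest = ALLOWED[np.argmin(np.abs(ALLOWED - p_disc))]
    return evaluate(nearest, p_cont)

bo = BayesianOptimization(wrapped, {'p_disc': (0, 10), 'p_cont': (0, 1)})
bo.maximize(init_points=5, n_iter=20)

As the integer-parameter issues further down point out, the GP still records the un-snapped value, so this is a workaround rather than a proper treatment of discrete variables.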

Except method

It would be helpful to have an except method for the BayesianOptimization class. The idea would be to pass parameter values that you don't want to search. This would come in handy, for example, when tuning learning rates close to zero. I'm tuning three separate learning rates close to zero and it automatically searches each learning rate exactly equal to zero. I feel like this is buggy behavior, not sure if it comes from sklearn or bayes-opt. The model I'm trying to train is particularly tricky, and often the random initializations are degenerately better than the trained models. Thus, the optimization process exploits these degenerate cases thinking that they are optimal points.

It should be an easy fix, simply a matter of checking the right condition over an input list/dict. I'll try to submit a pull request and fix it myself if I get some time this weekend.

Allow Prior Guesses for Parameters?

Hey there, thank you so much for this project, I use it all the time and really get a lot out of it. I have a question/idea for a possible enhancement. Would it be difficult to allow a user to supply a guess (and maybe some certainty level) of the optimal value of each parameter, in addition to specifying the region to search over? Do you think this would be in keeping with the idea behind the project? For me, I often feel that I have a strong sense of where the right parameters are, but want to allow for a decent-sized range in case I am wrong.
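
One partial approximation available today is to queue the guessed point with explore() (the same call used in other issues on this page) so it is evaluated before the GP-driven search starts; a sketch with a hypothetical objective, noting that this does not encode any certainty level:

from bayes_opt import BayesianOptimization

def black_box(x, y):
    # Hypothetical objective, for illustration only.
    return -(x - 3.0) ** 2 - (y - 0.5) ** 2

bo = BayesianOptimization(black_box, {'x': (0, 10), 'y': (0, 1)})

# Evaluate the user's best guess first; explore() expects equal-length lists
# for every parameter.
bo.explore({'x': [3.0], 'y': [0.5]})
bo.maximize(init_points=3, n_iter=15)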

code for bayesian_optimization.gif

Hi there,
Could you please provide the code and the data for generating bayesian_optimization.gif? It would be nice to have more information about this.

2D plot

Not a bug, but would you be willing to post your code for generating those 2d plots ('Bayesian Optimization in Action') that are in the readme?

Can't run the main example: ValueError: Invalid parameter corr for estimator GaussianProcessRegressor. Check the list of available parameters with `estimator.get_params().keys()`.

I just tried the example in this notebook in Python 3.5:

import numpy as np
import matplotlib.pyplot as plt

from bayes_opt import BayesianOptimization

def target(x):
    return np.exp(-(x - 2)**2) + np.exp(-(x - 6)**2/10) + 1/ (x**2 + 1)

x = np.linspace(-2, 10, 1000)
y = target(x)

plt.plot(x, y)

bo = BayesianOptimization(target, {'x': (-2, 10)})

gp_params = {'corr': 'cubic'}
bo.maximize(init_points=2, n_iter=0, acq='ucb', kappa=5, **gp_params)

but the last step fails with:

ValueError: Invalid parameter corr for estimator GaussianProcessRegressor. 
Check the list of available parameters with `estimator.get_params().keys()`.

Bug in initialize function

The initialize function assumes keys are sorted and that the parameter keys appear after 'target' in lexicographic order.

Code to reproduce:

from bayes_opt import BayesianOptimization


def target(a):
    """ Function to return square of argument """
    return a**2


def target2(x):
    """ Same function with different name of argument """
    return x**2


def test_bo_initialization(bo, func):
    print('X: ', bo.x_init)
    print('Y: ', bo.y_init)
    mismatch = False
    for x, y in zip(bo.x_init, bo.y_init):
        if func(*x) != y:
            mismatch = True
    if mismatch:    
        print('ERROR: x_init and y_init do not match!')
    else:
        print('INFO: x_init and y_init match.')

        
bo = BayesianOptimization(target, {'a': (0, 10)})
bo.initialize(
    {
        'target': [1, 4, 9],
        'a': [1, 2, 3],
    })
test_bo_initialization(bo, target)
# mismatch is True!
# Output: 
# X:  [[1], [4], [9]]
# Y:  [1, 2, 3]
# ERROR: x_init and y_init do not match!

bo2 = BayesianOptimization(target2, {'x': (0, 10)})
bo2.initialize(
    {
        'target': [1, 4, 9],
        'x': [1, 2, 3]
    })
test_bo_initialization(bo2, target2)
# mismatch is False.
# Output: 
# X:  [[1], [2], [3]]
# Y:  [1, 4, 9]
# INFO: x_init and y_init match.

Error

Can I ask what the following error means and how I can solve it?

/home/ubuntu/.local/lib/python2.7/site-packages/sklearn/gaussian_process/gpr.py:427: UserWarning: fmin_l_bfgs_b terminated abnormally with the  state: {'warnflag': 2, 'task': 'ABNORMAL_TERMINATION_IN_LNSRCH', 'grad': array([ 0.00119495]), 'nit': 6, 'funcalls': 56}
  " state: %s" % convergence_dict)
/home/ubuntu/.local/lib/python2.7/site-packages/sklearn/gaussian_process/gpr.py:308: UserWarning: Predicted variances smaller than 0. Setting those variances to 0.
  warnings.warn("Predicted variances smaller than 0. "
/home/ubuntu/.local/lib/python2.7/site-packages/sklearn/gaussian_process/gpr.py:427: UserWarning: fmin_l_bfgs_b terminated abnormally with the  state: {'warnflag': 2, 'task': 'ABNORMAL_TERMINATION_IN_LNSRCH', 'grad': array([-0.02224928]), 'nit': 6, 'funcalls': 58}
  " state: %s" % convergence_dict)
/home/ubuntu/.local/lib/python2.7/site-packages/sklearn/gaussian_process/gpr.py:427: UserWarning: fmin_l_bfgs_b terminated abnormally with the  state: {'warnflag': 2, 'task': 'ABNORMAL_TERMINATION_IN_LNSRCH', 'grad': array([ 0.00198236]), 'nit': 6, 'funcalls': 54}
  " state: %s" % convergence_dict)
/home/ubuntu/.local/lib/python2.7/site-packages/sklearn/gaussian_process/gpr.py:427: UserWarning: fmin_l_bfgs_b terminated abnormally with the  state: {'warnflag': 2, 'task': 'ABNORMAL_TERMINATION_IN_LNSRCH', 'grad': array([ 0.0019799]), 'nit': 6, 'funcalls': 54}
  " state: %s" % convergence_dict)
/home/ubuntu/.local/lib/python2.7/site-packages/sklearn/gaussian_process/gpr.py:427: UserWarning: fmin_l_bfgs_b terminated abnormally with the  state: {'warnflag': 2, 'task': 'ABNORMAL_TERMINATION_IN_LNSRCH', 'grad': array([-0.0317517]), 'nit': 3, 'funcalls': 53}
  " state: %s" % convergence_dict)
/home/ubuntu/.local/lib/python2.7/site-packages/sklearn/gaussian_process/gpr.py:427: UserWarning: fmin_l_bfgs_b terminated abnormally with the  state: {'warnflag': 2, 'task': 'ABNORMAL_TERMINATION_IN_LNSRCH', 'grad': array([-0.03073608]), 'nit': 3, 'funcalls': 53}
  " state: %s" % convergence_dict)
/home/ubuntu/.local/lib/python2.7/site-packages/sklearn/gaussian_process/gpr.py:427: UserWarning: fmin_l_bfgs_b terminated abnormally with the  state: {'warnflag': 2, 'task': 'ABNORMAL_TERMINATION_IN_LNSRCH', 'grad': array([ 0.00182714]), 'nit': 6, 'funcalls': 54}
  " state: %s" % convergence_dict)

Exception: Bad point. Try increasing theta0.

I'm not really sure where this error results from -- I am not so familiar with the GP backend from sklearn.

Here's the stack trace. It happens at iteration number 86, but it's inconsistent.

Traceback (most recent call last):
...
bo.maximize(init_points = 15, n_iter=100)
File "build/bdist.linux-x86_64/egg/bayes_opt/bayesian_optimization.py", line 314, in maximize
File "/share/apps/python-2.7.9/lib/python2.7/site-packages/sklearn/gaussian_process/gaussian_process.py", line 352, in fit
raise Exception("Bad point. Try increasing theta0.")
Exception: Bad point. Try increasing theta0.

Pass in an objective function

Hi there,

I'm trying to maximize negative mean squared error, but I keep running into this bug.

TypeError: hyp_choice_sklearn() got an unexpected keyword argument x

Here is my code.

def hyp_choice_sklearn(hyperparameter_value, hyperparameter_value_two, y_tra=y_train,
                       X_tra=X_train, X_tes=X_test, y_tes=y_test):
    nmse = -mean_squared_error(
        y_tes,
        GradientBoostingRegressor(n_estimators=hyperparameter_value,
                                  max_depth=hyperparameter_value_two)
        .fit(X_tra.as_matrix(), y_tra.as_matrix())
        .predict(X_tes.as_matrix()))
    return nmse




bayes_opt_hyp = BayesianOptimization(hyp_choice_sklearn,
                          {'x': (1, 700),'y':(1,50) }) # the bounds to explore
bayes_opt_hyp .explore({'x': np.linspace(1,700,140),'y':np.linspace(1,50,140) }) # the points to explore, need the same size
bayes_opt_hyp.maximize(init_points=1, n_iter=10, acq='ei') # ten steps

Any idea what I'm doing wrong?

Also, unrelated but it would be nice to be able to search over different sized domains for each parameter.

Cheers!

'numpy.float64' object cannot be interpreted as an integer

Hi There,

I am running into issues using an objective function with this BayesianOptimization package.

Here is my objective function.

def hyp_choice_sklearn(hyperparameter_value, hyperparameter_value_two, y_tra=y_train,
                       X_tra=X_train, X_tes=X_test, y_tes=y_test):
    nmse = -mean_squared_error(
        y_tes,
        GradientBoostingRegressor(n_estimators=hyperparameter_value,
                                  max_depth=hyperparameter_value_two)
        .fit(X_tra.as_matrix(), y_tra.as_matrix())
        .predict(X_tes.as_matrix()))
    return nmse

Here is my setup of the package

bayes_opt_hyp = BayesianOptimization(hyp_choice_sklearn,
                          {'hyperparameter_value': (1, 700),'hyperparameter_value_two':(1,71) }) # the bounds to explore
bayes_opt_hyp .explore({'hyperparameter_value': [int(i) for i in range(1,700,50)],
                        'hyperparameter_value_two':[int(i) for i in range(1,15)]}) # the points to explore, need the same size

and here are the results of my search:

[screenshot of the traceback, ending in: TypeError: 'numpy.float64' object cannot be interpreted as an integer]

Any help is appreciated.

Modify syntax for gp.predict() in concert with sklearn 0.18

Hi there,

Minor issue, but in playing with the visualization notebook in Jupyter, it appears that minor changes to sklearn syntax in 0.18 have broken the tutorial (related to the posterior() function definition).

I was able to fix this as follows (I'm showing posterior_old() for the sake of comparison). It appears that eval_MSE has been deprecated, and instead return_std may be specified to get the standard deviation.

def posterior_old(bo, xmin=-2, xmax=10):
    xmin, xmax = -2, 10
    bo.gp.fit(bo.X, bo.Y)
    mu, sigma2 = bo.gp.predict(np.linspace(xmin, xmax, 1000).reshape(-1, 1), eval_MSE=True)
    return mu, np.sqrt(sigma2)

def posterior(bo, xmin=-2, xmax=10, n_points=1000):
    xmin, xmax = -2, 10
    bo.gp.fit(bo.X, bo.Y)
    mu,std = bo.gp.predict(np.linspace(xmin, xmax, n_points).reshape(-1, 1), return_std=True)
    return mu, std

Optimization with a noisy objective?

In "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning" ( http://arxiv.org/pdf/1012.2599v1.pdf ), linked to in the README.md, it says:

Bayesian optimization is a powerful strategy for finding the extrema of objective
functions that are expensive to evaluate. It is applicable in situations where one
does not have a closed-form expression for the objective function, but where one
can obtain observations (possibly noisy) of this function at sampled values. It
is particularly useful when these evaluations are costly, when one does not have
access to derivatives, or when the problem at hand is non-convex.

I have a problem where I can only obtain noisy observations of the objective function with reasonable evaluation cost. However, I'm not sure how to go about that using BayesianOptimization.

What I would like to do is to add some noise to the GP, such that the GP does not over-estimate the reliability of the objective function value, but I did not find any way to do that. If I understand it correctly, sklearn adds noise to GPs by adding a WhiteKernel to the main kernel. However, the GP settings in the BayesianOptimization class seem to be hard-coded in __init__ as:

self.gp = GaussianProcessRegressor(
            kernel=Matern(),
            n_restarts_optimizer=25,
)

Is there some other way to handle a noisy objective function or do I need to modify __init__? Is any acquisition function more suitable when working with noisy objective functions?
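
One sketch of handling noise without editing __init__, assuming the maximize() call forwards **gp_params to the GaussianProcessRegressor as the tracebacks elsewhere on this page suggest; whether alpha, a WhiteKernel, or both is most appropriate depends on the problem:

import numpy as np
from bayes_opt import BayesianOptimization
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

def noisy_black_box(x):
    # Hypothetical noisy objective.
    return -(x - 2.0) ** 2 + 0.1 * np.random.randn()

bo = BayesianOptimization(noisy_black_box, {'x': (-5, 5)})

# Two standard sklearn options: 'alpha' adds i.i.d. noise to the kernel
# diagonal, while a WhiteKernel term lets the noise level be learned.
gp_params = {
    'alpha': 1e-3,
    'kernel': Matern(nu=2.5) + WhiteKernel(noise_level=1e-2),
}
bo.maximize(init_points=5, n_iter=25, acq='ei', **gp_params)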

Strange search results

I am trying to use Bayesian hyperparameter optimization and the points selected by this algorithm looks really weird to me. Here is an example:
Bayesian hyperparameter optimization
and an other one:
Bayesian hyperparameter optimization
There were only these two parameters being optimized, so it's not a problem of dimentions.
It looks like it sticks to some values and doesn't try neighbor points. Is it an expected behavior?
I used this example as a base.

Constraints

Currently, the package does not have a way to incorporate constraints into its optimization process. For example, one cannot exclude points. The linked discussion suggests that generalizing the exclusion problem to formal constraints is the better way to go.

After some investigation, I've found that the implementation will be pretty easy. Since the acq_max function uses SciPy's minimize function, it's simply a matter of allowing the user to optionally include constraints in the instantiation method and passing those constraints into minimize through acq_max. We'll also have to switch the optimization method from L-BFGS-B to SLSQP when the user passes constraints.

The only question I have about the switch is whether to give the user an option to pass the Jacobian of constraint functions. minimize has an option for that, but it only speeds the optimization up in some cases. Perhaps it's unnecessary.

I'll do this ASAP.
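
For discussion, a sketch of the change described above, i.e. forwarding user constraints into scipy.optimize.minimize inside acq_max and switching to SLSQP when they are present; this is an illustration of the plan, not the package's current code:

import numpy as np
from scipy.optimize import minimize

def acq_max(ac, gp, y_max, bounds, constraints=None, n_seeds=25):
    # Maximize the acquisition function ac over the box `bounds`,
    # optionally subject to user-supplied scipy-style constraints.
    method = 'SLSQP' if constraints else 'L-BFGS-B'
    x_max, max_acq = None, None
    seeds = np.random.uniform(bounds[:, 0], bounds[:, 1],
                              size=(n_seeds, bounds.shape[0]))
    for x_try in seeds:
        res = minimize(lambda x: -np.squeeze(ac(x.reshape(1, -1), gp=gp, y_max=y_max)),
                       x_try, bounds=bounds, method=method,
                       constraints=constraints or ())
        if not res.success:
            continue
        val = -float(np.squeeze(res.fun))
        if max_acq is None or val >= max_acq:
            x_max, max_acq = res.x, val
    if x_max is None:            # every local optimization failed
        x_max = seeds[0]
    return np.clip(x_max, bounds[:, 0], bounds[:, 1])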

Optimization is very slow

Hi,

I think there is something wrong with the way BO runs on my machine. The BO process is very slow and gets slower with every iteration. I understand that some slowdown is to be expected, but not this much: from 3 sec to 5 min.

Bayesian Optimization
----------------------------------------------------------------
 Step |   Time |      Value |   colsample_bytree |   subsample | 
    5 | 00m05s |    0.79714 |             0.6500 |      0.9990 | 
    6 | 00m06s |    0.79917 |             0.6500 |      0.6500 | 
    7 | 00m03s |    0.79250 |             0.6500 |      0.8206 | 
    8 | 05m13s |    0.79464 |             0.9990 |      0.9990 |

I tried some profiling and see that most of the time is spent here:


   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
    96854  207.735    0.002  214.026    0.002 basic.py:227(solve_triangular)

The CPU is working hard during this time.

I have all requirements up to date, installed from pip (Linux, 64-bit).

Is this normal behaviour? If not, any idea how to solve it?

Issues with TensorFlow models

Hello, I tried to use this package with TensorFlow. I defined a function, wrapping all TensorFlow-related code there, then used BayesianOptimization to optimize learning_rate, batch_size, etc. However, after 4-7 iterations it gave a "dead kernel" error and stopped running. Please let me know if you have any insights.
Thanks,
Xuejin

Values outside of the specified bounds are tried

Just a small problem I ran into last night while optimizing the max_depth of a sklearn RandomForestClassifier:

I set its bounds to (1, 15) and after a while it stopped with the error:

ValueError: max_depth must be greater than zero.

.. which shouldn't happen, since the lower bound is 1.

I was able to reproduce the problem with the following code (python 3.4, numpy 1.10, scipy 0.16, windows x64):

from bayes_opt import BayesianOptimization
import numpy as np

# simulate the random forests accuracy (step function of max_depth + some noise)
def f(x):
    assert x >= 1.0, "%0.27f" % x
    return -int(x) + np.random.randn()*0.2

for i in range(100):
    bo = BayesianOptimization(f=f, pbounds={"x": (1, 15)}, verbose=False)
    bo.maximize(init_points=3, n_iter=250, restarts=1, nugget=0.000001)

It takes a few seconds and then fails with AssertionError: 0.999999999999999888977697537

I'm not yet sure what really causes the problem, but a simple workaround might be to just set the bounds to (1.01, 15).

PS: setting the random forest's random_state, so it produces exactly the same results every time, does not help.

Examples fail with new version of scikit-learn

Testing the examples with the latest versions of scikit-learn, scipy and numpy, the examples fail:

import sklearn as sk
sk.__version__
'0.16.1'

import numpy as np
np.__version__
'1.9.2'

import scipy as sp
sp.__version__
'0.16.0'

Low variances in plots

I'm trying to use this package for conducting an experiment with a design variable, and altering this variable to test performance. I don't know my objective function but I can sample from it. Values are in the range [200, 600].

When I try to plot the diagrams from the visualisation notebook, the mean looks alright but the variances are tiny, resulting in extremely tight 95% confidence intervals. I'm using the same functions (posterior, plot_gp) given in the visualisation notebook. All the examples I could find seem to have outputs within [-2,2].

Am I missing some sort of scaling step? Is this something I'll have to change the default kernel for? I notice that the default is a Matern kernel, but I can't see an obvious way to access/change the length scale parameter, if that is even what's required.

Any help would be much appreciated. Cheers!
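
For what it's worth, a sketch of one way to reach the length scale, assuming the version that exposes a scikit-learn GaussianProcessRegressor as bo.gp (the objective and numbers are purely illustrative); normalising the observed values to a smaller range is another option worth considering:

from bayes_opt import BayesianOptimization
from sklearn.gaussian_process.kernels import Matern

def experiment(x):
    # Hypothetical objective with outputs in the hundreds.
    return 400.0 - (x - 30.0) ** 2

bo = BayesianOptimization(experiment, {'x': (0, 60)})

# The GP is exposed as bo.gp, so the kernel (and its length scale) can be
# replaced before running; 'fixed' bounds stop sklearn from re-fitting it.
bo.gp.set_params(kernel=Matern(nu=2.5, length_scale=20.0,
                               length_scale_bounds='fixed'))
bo.maximize(init_points=5, n_iter=25)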

Initialization values must have unique scores, but shouldn't

Currently, the initialize method requires a dictionary of values and positions to be passed where the values are used as keys. This means that any two positions that share the same value cannot be passed to the routine, as the keys would clash. Internally, it appears there is nothing that prevents this from happening, as the positions/values are eventually stored as arrays anyway. Ideally, initialize should accept an array of dictionaries instead, such that positions that share the same value do not clash.

using for multi-dimensional data

I am just crash-coursing Gaussian processes with Bayesian optimization using this code,

and I'm handling some multi-dimensional data that looks like the following:
[1.59e-01 3.10e-01 7.00e-01 7.00e-03 5.00e-03 2.27e+00 9.90e-01 1.00e-02 2.00e-01 4.00e-02], with a real-valued target.

How can I run this code with this data?

I've tried to feed this data into your example code, but my attempt was too naive, I guess.
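
In this package each dimension becomes a named parameter: the target function takes one keyword argument per dimension and pbounds holds one range per dimension. A sketch with made-up bounds:

import numpy as np
from bayes_opt import BayesianOptimization

def black_box(x0, x1, x2, x3, x4, x5, x6, x7, x8, x9):
    # Hypothetical: evaluate the real system at this 10-dimensional point.
    v = np.array([x0, x1, x2, x3, x4, x5, x6, x7, x8, x9])
    return -float(np.sum((v - 0.1) ** 2))

pbounds = {'x%d' % i: (0.0, 3.0) for i in range(10)}   # hypothetical ranges
bo = BayesianOptimization(black_box, pbounds)
bo.maximize(init_points=10, n_iter=30)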

inf/NaN bug

I am getting a strange error when running a slightly modified example you provided (added factor y):

from bayes_opt import BayesianOptimization
import numpy as np

def target(x,y):
    return (np.exp(-(x - 2)**2) + np.exp(-(x - 6)**2/10) + 1/ (x**2 + 1))*y

bo = BayesianOptimization(target, {'x': (-2,10),'y':(1,2)})

gp_params = {'corr': 'cubic'}
bo.maximize(init_points=2, n_iter=15, acq='ucb', kappa=5, **gp_params)

here is the complete trace back:

Optimization failed. Try increasing the ``nugget``
capi_return is NULL
Call-back cb_calcfc_in__cobyla__user__routines failed.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\SciSoft\WinPython-64bit-3.5.1.1\python-3.5.1.amd64\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 807, in runfile
    execfile(filename, namespace)
  File "C:\SciSoft\WinPython-64bit-3.5.1.1\python-3.5.1.amd64\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 95, in execfile
    exec(compile(open(filename, 'rb').read(), filename, 'exec'), namespace)
  File "C:/SciSoft/WinPython-64bit-3.5.1.1/python-3.5.1.amd64/Scripts/bay_opt_test.py", line 10, in <module>
    bo.maximize(init_points=2, n_iter=15, acq='ucb', kappa=5, **gp_params)
  File "C:\SciSoft\WinPython-64bit-3.5.1.1\python-3.5.1.amd64\Lib\site-packages\BayesianOptimization\bayes_opt\bayesian_optimization.py", line 326, in maximize
    self.gp.fit(self.X[ur], self.Y[ur])
  File "C:\SciSoft\WinPython-64bit-3.5.1.1\python-3.5.1.amd64\lib\site-packages\sklearn\gaussian_process\gaussian_process.py", line 340, in fit
    self._arg_max_reduced_likelihood_function()
  File "C:\SciSoft\WinPython-64bit-3.5.1.1\python-3.5.1.amd64\lib\site-packages\sklearn\gaussian_process\gaussian_process.py", line 734, in _arg_max_reduced_likelihood_function
    raise ve
  File "C:\SciSoft\WinPython-64bit-3.5.1.1\python-3.5.1.amd64\lib\site-packages\sklearn\gaussian_process\gaussian_process.py", line 731, in _arg_max_reduced_likelihood_function
    iprint=0)
  File "C:\SciSoft\WinPython-64bit-3.5.1.1\python-3.5.1.amd64\lib\site-packages\scipy\optimize\cobyla.py", line 172, in fmin_cobyla
    **opts)
  File "C:\SciSoft\WinPython-64bit-3.5.1.1\python-3.5.1.amd64\lib\site-packages\scipy\optimize\cobyla.py", line 258, in _minimize_cobyla
    dinfo=info)
  File "C:\SciSoft\WinPython-64bit-3.5.1.1\python-3.5.1.amd64\lib\site-packages\scipy\optimize\cobyla.py", line 248, in calcfc
    f = fun(x, *args)
  File "C:\SciSoft\WinPython-64bit-3.5.1.1\python-3.5.1.amd64\lib\site-packages\sklearn\gaussian_process\gaussian_process.py", line 704, in minus_reduced_likelihood_function
    theta=10. ** log10t)[0]
  File "C:\SciSoft\WinPython-64bit-3.5.1.1\python-3.5.1.amd64\lib\site-packages\sklearn\gaussian_process\gaussian_process.py", line 606, in reduced_likelihood_function
    C = linalg.cholesky(R, lower=True)
  File "C:\SciSoft\WinPython-64bit-3.5.1.1\python-3.5.1.amd64\lib\site-packages\scipy\linalg\decomp_cholesky.py", line 81, in cholesky
    check_finite=check_finite)
  File "C:\SciSoft\WinPython-64bit-3.5.1.1\python-3.5.1.amd64\lib\site-packages\scipy\linalg\decomp_cholesky.py", line 20, in _cholesky
    a1 = asarray_chkfinite(a)
  File "C:\SciSoft\WinPython-64bit-3.5.1.1\python-3.5.1.amd64\lib\site-packages\numpy\lib\function_base.py", line 1022, in asarray_chkfinite
    "array must not contain infs or NaNs")
ValueError: array must not contain infs or NaNs

I don't see what is wrong with the function. When I set 'nugget' to 1e-1 the error no longer occurs. What exactly does nugget do?

Improve accuracy when parameter is an integer

The BO maximize method generates parameter values for iterations as type float. This is expected and is fine. However some functions (e.g. xgboost) have parameters that need to be an integer. The current conversion practice is for the function to cast the parameter to an integer e.g. int(max_depth). This will cast a float of say 4.8 as an integer of 4.

The issue is that the score was calculated with a parameter of 4, but when returned to BO, BO uses it in future calculations as if it was calculated at 4.8. My thinking is that this will lead to inaccuracies and degradation of the optimization process.

This could possibly be solved by having the function also return all the parameter values that were actually used for the calculation so that BO could update the parameters for that iteration, enabling BO to use the correct parameter value (the integer) when calculating the parameters for the next iteration.

This would accommodate any changes (rounding or otherwise) that need to be made to the parameters in the function while protecting the accuracy of the optimization calculations.

  1. What are your thoughts on this as a long-term solution?

  2. Your site examples cast floats to integers as per int(x). This does a truncation (e.g. 4.8 to 4) and might lead to inaccuracies. I've used int(round(x)) to reduce the error in casting to an integer. If you think it appropriate, the examples could be updated to use this approach (a sketch follows below).

  3. In an attempt to get around this issue in the interim, I am processing the iterations in small batches of 2 iterations (e.g. bayesOpt.maximize(n_iter=2, acq="ei", xi=0.1) and then rounding the integer parameter values in bayesOpt.res['all']['params']. I am hoping that by updating this dictionary with the integer values that were actually used to produce the score, that these correct values will then be used by BO when it calculates the next parameters. Will updating this dictionary this way have this effect?

  4. For the initialization observations (init_points) it might be appropriate to treat the parameters in the same way. i.e. have all actual parameters used returned from the function and the relevant BO dictionary updated ready for optimization processing.

This issue might be related to and resolve some of the discussions in issue #13.
I like the package and hope this is useful.
cheers
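
For reference, a minimal sketch of the int(round(...)) casting described in point 2 above; the scoring function is a hypothetical stand-in, and this does not address the deeper issue of the GP still recording the un-rounded values:

from bayes_opt import BayesianOptimization

def cv_score(max_depth, n_estimators, learning_rate):
    # Hypothetical stand-in for a cross-validated model score.
    return -abs(max_depth - 5) - 0.01 * abs(n_estimators - 100) + learning_rate

def evaluate(max_depth, n_estimators, learning_rate):
    # Round rather than truncate, so a suggested 4.8 is evaluated as 5, not 4.
    return cv_score(int(round(max_depth)), int(round(n_estimators)), learning_rate)

bo = BayesianOptimization(evaluate, {'max_depth': (2, 10),
                                     'n_estimators': (50, 300),
                                     'learning_rate': (0.01, 0.3)})
bo.maximize(init_points=5, n_iter=20)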

IndexError: too many indices for array

xgboostBO = BayesianOptimization(xgboostcv,
                                 {'max_depth': (5, 10),
                                  'learning_rate': (0.01, 0.3),
                                  'n_estimators': (25, 250),
                                  'gamma': (1., 0.01),
                                  'min_child_weight': (2, 10),
                                  'max_delta_step': (0, 0.1),
                                  'subsample': (0.7, 0.8),
                                  'colsample_bytree' :(0.5, 0.99)
                                 })

xgboostBO.maximize()
Initialization
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Step |   Time |      Value |   colsample_bytree |     gamma |   learning_rate |   max_delta_step |   max_depth |   min_child_weight |   n_estimators |   subsample | 
    1 | 01m27s |    0.67938 |             0.9894 |    0.9722 |          0.1003 |           0.0734 |      9.6590 |             8.5031 |       173.8439 |      0.7015 | 
    2 | 01m33s |    0.71073 |             0.6462 |    0.1777 |          0.0800 |           0.0760 |      9.6390 |             2.0108 |       246.1404 |      0.7412 | 
    3 | 00m58s |    0.75769 |             0.7873 |    0.4747 |          0.2951 |           0.0602 |      8.6310 |             7.8762 |       146.3374 |      0.7817 | 
    4 | 01m27s |    0.76096 |             0.9346 |    0.0707 |          0.2805 |           0.0501 |      7.3459 |             3.6551 |       201.3206 |      0.7445 | 
    5 | 00m18s |    0.62238 |             0.6568 |    0.4704 |          0.1482 |           0.0977 |      7.1608 |             5.4243 |        55.5475 |      0.7925 | 
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-15-a0aa4f2ba568> in <module>()
     10                                  })
     11 
---> 12 xgboostBO.maximize()

/home/sasha/.local/lib64/python3.4/site-packages/bayes_opt/bayesian_optimization.py in maximize(self, init_points, n_iter, acq, kappa, xi, **gp_params)
    262                         gp=self.gp,
    263                         y_max=y_max,
--> 264                         bounds=self.bounds)
    265 
    266         # Print new header

/home/sasha/.local/lib64/python3.4/site-packages/bayes_opt/helpers.py in acq_max(ac, gp, y_max, bounds)
     53 
     54         # Store it if better than previous minimum(maximum).
---> 55         if max_acq is None or -res.fun[0] >= max_acq:
     56             x_max = res.x
     57             max_acq = -res.fun[0]

IndexError: too many indices for array

Python 3, up-to-date versions. Any ideas? Thanks.

Enhancement Request, Display Int values in print statement, cast int values to GP process

Is it possible to have a better way to print int values?
(Even better to be able to cast them to the GP as ints)

Simple example, with Random Forests:

RFC(n_estimators=int(n_estimators),
        min_samples_split=int(min_samples_split),
        max_features=min(max_features, 0.999),
        random_state=2)

When running it may print out:
n_estimators = 10.3456, min_samples_split = 2.35643, max_features=0.99

I would expect it to print out:
n_estimators = 10, min_samples_split = 2, max_features=0.99

Going deeper, because it's floats that have been passed, it may search the 'same' space again:

n_estimators = 10.3456, min_samples_split = 2.35643, max_features=0.99
n_estimators = 10.6334, min_samples_split = 2.12329, max_features=0.99

It seems rather wasteful to search integer spaces this way.

I say this with no idea of how this affects the underlying GP process that is being called.

Same observation being generated

Hi,

I tried to run the code below to optimize an XGBoost classifier, but I get stuck with the same observation being tested every time. I expected new observations to be generated... or am I wrong?

Console output (after the initial points were generated). Notice that all iterations generate the same observation:

('XGB', {'num_round': 20.0, 'subsample': 0.25, 'eta': 0.01, 'colsample_bytree': 0.25, 'max_depth': 2.0})
Iteration:   1 | Last sampled value:   -0.680226 | with parameters:  {'num_round': 20.0, 'subsample': 0.25, 'eta': 0.01, 'colsample_bytree': 0.25, 'max_depth': 2.0}
               | Current maximum:      -0.245901 | with parameters:  {'num_round': 28.712248896201515, 'subsample': 0.88492808306639748, 'eta': 0.78136949498158781, 'colsample_bytree': 0.99625386365127699, 'max_depth': 5.3806033554623252}
               | Time taken: 0 minutes and 10.953415 seconds

('XGB', {'num_round': 20.0, 'subsample': 0.25, 'eta': 0.01, 'colsample_bytree': 0.25, 'max_depth': 2.0})
Iteration:   2 | Last sampled value:   -0.680226 | with parameters:  {'num_round': 20.0, 'subsample': 0.25, 'eta': 0.01, 'colsample_bytree': 0.25, 'max_depth': 2.0}
               | Current maximum:      -0.245901 | with parameters:  {'num_round': 28.712248896201515, 'subsample': 0.88492808306639748, 'eta': 0.78136949498158781, 'colsample_bytree': 0.99625386365127699, 'max_depth': 5.3806033554623252}
               | Time taken: 0 minutes and 10.790525 seconds

('XGB', {'num_round': 20.0, 'subsample': 0.25, 'eta': 0.01, 'colsample_bytree': 0.25, 'max_depth': 2.0})
Iteration:   3 | Last sampled value:   -0.680226 | with parameters:  {'num_round': 20.0, 'subsample': 0.25, 'eta': 0.01, 'colsample_bytree': 0.25, 'max_depth': 2.0}
               | Current maximum:      -0.245901 | with parameters:  {'num_round': 28.712248896201515, 'subsample': 0.88492808306639748, 'eta': 0.78136949498158781, 'colsample_bytree': 0.99625386365127699, 'max_depth': 5.3806033554623252}
               | Time taken: 0 minutes and 10.6884 seconds

Full code for the program (uses the xgboost library)

import xgboost as xgb
from bayes_opt import BayesianOptimization
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2500, n_features=45, n_informative=12, n_redundant=7, n_classes=2, random_state=42)


def xgbcv(max_depth, eta, colsample_bytree, subsample, num_round):
    print("XGB", locals())

    dtrain = xgb.DMatrix(X, label=y)

    params = {
        'booster': 'gbtree',
        'objective': 'multi:softprob',
        'silent': 1,
        'max_depth': int(round(max_depth)),
        'eta': eta,
        'colsample_bytree': colsample_bytree,
        'subsample': subsample,
        'num_class': 2,
        'eval_metric': 'mlogloss',
        'seed': 42
    }

    r = xgb.cv(params, dtrain, int(round(num_round)), nfold=4, metrics={'mlogloss'}, seed=45, show_stdv=False)

    return -r['test-mlogloss-mean'].mean()


xgbBO = BayesianOptimization(xgbcv, {
    'max_depth': (2, 6),
    'eta': (0.01, 0.8),
    'colsample_bytree': (0.25, 1.0),
    'subsample': (0.25, 1.0),
    'num_round': (20, 30),
}, verbose=True)

xgbBO.maximize(init_points=32, n_iter=6)

Thanks in advance!

Returning n points

If the goal is to return the n points that maximize the unknown function, is it possible to use the Bayesian optimization in this package? For example, I tried it with the function (x-1)**2 (x minus one, squared), and it always returns one point. Does anyone have an idea how to implement it so that it returns n points?

PS: I am sorry this is not an issue but I could not find any other place to ask.
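
The optimizer itself reports a single incumbent, but every evaluated point is kept in bo.res, so the n best observed points can be pulled out afterwards; a sketch assuming the old res['all'] layout shown in other issues on this page (note these are the n best samples, not n distinct optima of the underlying function):

import numpy as np

def top_n_observed(bo, n=5):
    # Return the n best points evaluated so far, sorted by observed value.
    values = np.asarray(bo.res['all']['values'])
    params = bo.res['all']['params']
    order = np.argsort(values)[::-1][:n]
    return [(params[i], values[i]) for i in order]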

Passing Integer Params

Hi,

Great package; the one thing that bothers me is the lack of an option for integer input optimization. For my function I've got about 4 integers and 1 float. Currently I'm doing param1 = int(param1) in my function to get around this, but I'm wasting steps since it tries 4.2, 4.1, 4.24 at various times, which are all actually 4 in my code.

Is there a way around it?

Thank you

usage.py gives error with Scipy 0.17.0

It seems that the initialization of GaussianProcess is the issue. Getting rid of the dimension fixed the problem for me.

        self.gp = GaussianProcess(theta0=np.random.uniform(0.001, 0.05),
                                  thetaL=1e-5,
                                  thetaU=1e0,
                                  random_start=30)

Error message:

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-10-bd49ddf8b378> in <module>()
      1 bo = BayesianOptimization(search_fun, {'x': (min(x), max(x)), 'y': (min(y), max(y))})
----> 2 bo.maximize(init_points=4, n_iter=40, kappa=2)
      3 print(bo.res['all'])

/home/mai/anaconda3/lib/python3.4/site-packages/bayes_opt/bayesian_optimization.py in maximize(self, init_points, n_iter, acq, kappa, **gp_params)
    289         ur = unique_rows(self.X)
    290         print(self.X[ur], self.Y[ur])
--> 291         self.gp.fit(self.X[ur], self.Y[ur])
    292 
    293         # Finding argmax of the acquisition function.

/home/mai/anaconda3/lib/python3.4/site-packages/sklearn/gaussian_process/gaussian_process.py in fit(self, X, y)
    336                       "autocorrelation parameters...")
    337             self.theta_, self.reduced_likelihood_function_value_, par = \
--> 338                 self._arg_max_reduced_likelihood_function()
    339             if np.isinf(self.reduced_likelihood_function_value_):
    340                 raise Exception("Bad parameter region. "

/home/mai/anaconda3/lib/python3.4/site-packages/sklearn/gaussian_process/gaussian_process.py in _arg_max_reduced_likelihood_function(self)
    728                         optimize.fmin_cobyla(minus_reduced_likelihood_function,
    729                                              np.log10(theta0), constraints,
--> 730                                              iprint=0)
    731                 except ValueError as ve:
    732                     print("Optimization failed. Try increasing the ``nugget``")

/home/mai/anaconda3/lib/python3.4/site-packages/scipy/optimize/cobyla.py in fmin_cobyla(func, x0, cons, args, consargs, rhobeg, rhoend, iprint, maxfun, disp, catol)
    170 
    171     sol = _minimize_cobyla(func, x0, args, constraints=con,
--> 172                            **opts)
    173     if iprint > 0 and not sol['success']:
    174         print("COBYLA failed to find a solution: %s" % (sol.message,))

/home/mai/anaconda3/lib/python3.4/site-packages/scipy/optimize/cobyla.py in _minimize_cobyla(fun, x0, args, constraints, rhobeg, tol, iprint, maxiter, disp, catol, **unknown_options)
    237     cons_lengths = []
    238     for c in constraints:
--> 239         f = c['fun'](x0, *c['args'])
    240         try:
    241             cons_length = len(f)

/home/mai/anaconda3/lib/python3.4/site-packages/sklearn/gaussian_process/gaussian_process.py in <lambda>(log10t, i)
    705             for i in range(self.theta0.size):
    706                 constraints.append(lambda log10t, i=i:
--> 707                                    log10t[i] - np.log10(self.thetaL[0, i]))
    708                 constraints.append(lambda log10t, i=i:
    709                                    np.log10(self.thetaU[0, i]) - log10t[i])

GP with UCB bug?

I have a weird behaviour of the GP with UCB (with noisy data): the posterior of the GP changes completely after a single new point, e.g. from one iteration to the next (see the attached plots for iterations 8 to 9 and 11 to 12).

Another weird behavior is that in some cases UCB does not select the upper bound (the last selected point is in yellow). I would expect -2 to be selected, but it is not (plots for iterations 12, 13 and 14 attached).

Is this a bug or is this normal?

Minimize

Hi, what are the required changes if I want to find the minimum instead of the maximum?
I made some changes, but it seems that it does not work. Thanks.
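
The simplest route needs no changes to the package: negate the objective and maximize, then flip the sign of the reported optimum back. A sketch, assuming the res['max'] layout (keys max_val / max_params) used by this version of the package:

from bayes_opt import BayesianOptimization

def f(x):
    # Function to minimize.
    return (x - 1) ** 2

# Maximizing -f(x) is equivalent to minimizing f(x).
bo = BayesianOptimization(lambda x: -f(x), {'x': (-5, 5)})
bo.maximize(init_points=5, n_iter=20)

best = bo.res['max']
x_min = best['max_params']['x']     # arg min of f
f_min = -best['max_val']            # undo the sign flip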
