ziatdinovmax / gpax
196 stars · 7 watchers · 24 forks · 54.57 MB

Gaussian Processes for Experimental Sciences

Home Page: http://gpax.rtfd.io

License: MIT License

Languages: Python 99.80%, Shell 0.20%

Topics: gaussian-processes, bayesian-inference, machine-learning, active-learning, bayesian-optimization, deep-kernel-learning, hypothesis-learning, bayesian-neural-networks, multi-fidelity-learning

gpax's Introduction

Hi there 👋

My expertise lies in designing and implementing custom machine learning solutions that drive research and development, with a focus on AI-powered decision-making. With a proven track record of collaborating closely with academic and industry partners, I excel at translating complex domain-specific challenges into efficient machine-learning codes and workflows. During my 9-year tenure at the U.S. Department of Energy’s Oak Ridge National Laboratory, I led the development of machine learning codes that enabled autonomous experimentation in scanning probe and electron microscopy, and were later extended to neutron scattering experiments, chemical synthesis, and battery state-of-health assessments. My primary interest lies in developing the "smart labs" of the future, where human-AI collaboration paves the way for rapid scientific innovation and practical applications in various fields.


gpax's People

Contributors

aghosh92, arpanbiswas52, matthewcarbone, sagarsadhu, ziatdinovmax


gpax's Issues

Set noise prior automatically

We should be setting the noise prior automatically based on the y-range of the provided training data. Currently, the default prior is LogNormal(0, 1), which may not always be optimal, especially for data with a normalized y-range (because it allows for scenarios where the noise is almost an order of magnitude larger than the entire observation range). Instead, we can set it automatically as a HalfNormal(0, v) distribution with a default variance of v = 0.2 * (y_max - y_min). As always, a user will also have the option to pass their own custom prior.
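A minimal sketch of the proposed behavior, assuming numpyro's one-parameter HalfNormal (so the 0 above denotes the location) and an illustrative helper name:

import numpyro.distributions as dist

# Illustrative sketch of the proposed default (not existing gpax API):
# scale a HalfNormal noise prior by the observed y-range.
def default_noise_prior(y):
    v = 0.2 * (y.max() - y.min())
    return dist.HalfNormal(v)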

Thoughts? @yongtaoliu, @arpanbiswas52, @SergeiVKalinin, @RichardLiuCoding, @aghosh92?

Any suggestions on how to improve acquisition.UCB for active GP example?

I changed the test function in the gpax-GPBO tutorial to the following:

y(x) = 1 / (x ** 2 + 1) * np.cos(np.pi * x)

for x in [-2,5] and obtained the following:

[figure: dataset_1]

It looks like the acquisition.UCB function keeps placing its maximum in the same region -- even though the posterior mean and variance have shrunk.

Effect of changing the noise prior from

numpyro.distributions.HalfNormal(0.01)

to

numpyro.distributions.Normal(1)

[figure: dataset_1, with the modified noise prior]

1. Is there an alternative acquisition function that I should be using, or some alternative value for beta? (A sketch for sweeping beta follows below.)

2. Is there a way to add some additional cost for trying to add more points in an already densely populated region?
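For what it's worth, a minimal sketch of sweeping beta (the UCB call signature is taken from the reproduction script further down this page; rng_key, gp_model, and X_unmeasured are assumed to already exist):

import gpax

# Sweep beta to trade off exploitation (small beta) vs. exploration (large beta).
for beta in (0.25, 1.0, 4.0, 9.0):
    acq = gpax.acquisition.UCB(
        rng_key, gp_model, X_unmeasured, beta=beta,
        maximize=True, noiseless=True)
    next_x = X_unmeasured[acq.argmax()]  # candidate maximizing the acquisition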

Feature: implement simulated campaigning for "hyper parameter tuning"

@ziatdinovmax as we discussed, I plan on implementing a simulated campaigning loop for tuning the "hyper parameters" of an optimization loop. I first want to learn this library inside and out, so it might take some time. But anyway, the executive summary of the tasks at hand looks something like this (a rough class skeleton follows the list):

  • Develop a Campaign class for storing the state of and running the campaign
  • Parallelize the campaigning (since many simulations will have to be run); might want to consider mpi4py but more likely multiprocessing will be enough
  • Implement a smart checkpoint-restart system for the same reason
  • Write appropriate tests
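A rough, purely hypothetical skeleton of the class described above (every name is illustrative; none of it is existing gpax API):

from dataclasses import dataclass, field

@dataclass
class Campaign:
    model_factory: callable   # builds a fresh GP model for each (re)start
    acquisition: callable     # e.g. one of the gpax acquisition functions
    X: list = field(default_factory=list)   # measured inputs
    y: list = field(default_factory=list)   # measured observations

    def step(self):
        """Fit the model, evaluate the acquisition, and record the next point."""

    def checkpoint(self, path):
        """Serialize the campaign state so long simulations can restart."""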

Suggestion: activate dependabot

Dependabot is awesome and will automatically bump versions of required dependencies, open a PR and trigger a CI loop to test that everything is still working. I use it in my open source projects. @ziatdinovmax what do you think?

Note this is dependent on #107 since we can only have dependencies in either the requirements or pyproject file.

Implement faster smoke tests of notebooks

@ziatdinovmax referencing your suggestion in #50. The way to do this is to add an environment variable that is detected by the notebook. If, for example, SMOKE_TEST==1, we simply adjust how many iterations things train for, the size of the datasets, and so on. Again, GPyTorch has some good examples of how this can work in practice (they run the tests on their docs, but it doesn't matter).
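A minimal sketch of the pattern (the constants are illustrative, not values gpax uses):

import os

# Shrink the workload when the SMOKE_TEST environment variable is set.
SMOKE_TEST = os.environ.get("SMOKE_TEST") == "1"
num_warmup = 100 if SMOKE_TEST else 1000
num_samples = 100 if SMOKE_TEST else 2000
n_points = 50 if SMOKE_TEST else 5000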

Option to use regular NN in viDKL

Currently, we automatically place priors over the weights and biases of a neural network in viDKL, effectively turning it into a BNN. It may be a good idea to make this optional and allow the use of a regular NN.

Unable to use gpu: might be a jax(or jaxlib) issue

When running gpax_vidkl_eels.ipynb, I get the following warning:

An NVIDIA GPU may be present on this machine, but a CUDA-enabled jaxlib is not installed. Falling back to CPU.

How to reproduce:

I created the python env using the following steps:

conda create -n gpax_hae python==3.10
conda activate gpax_hae
pip install -e .   # run inside the gpax repository; installs most of the dependencies, including jax and jaxlib
pip install jupyter
pip install ipykernel
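For context: pip install jax on its own ships only the CPU build of jaxlib, so a CUDA-enabled jaxlib has to be requested explicitly (via the GPU wheels/extras described in the jax installation guide for your CUDA version). That is the most likely fix for the warning above.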

Sparse GP

While this package mostly aims at fully Bayesian exact GPs, certain tasks (e.g. image and hyperspectral data reconstruction) may require sparse approximations with inducing points. There are, of course, multiple approaches to sparse GPs. According to this study, a VFE approximation may be a good choice.

Extension of kernel='RBF',...

Hi,
Is it possible for the user to provide their own kernel? For the moment, if I understand correctly, kernel can be 'RBF', 'Periodic', or 'Matern'. But one may want to use other kinds of kernels and build add/mult... compositions.

Also, why for example in the following code

def MaternKernel(X: jnp.ndarray, Z: jnp.ndarray,
                 params: Dict[str, jnp.ndarray],
                 noise: int = 0, **kwargs: float) -> jnp.ndarray:

noise is an integer?

Thanks
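For reference, a custom kernel could in principle mirror the MaternKernel signature quoted above. A hedged sketch of an additive two-scale RBF-style composition (illustrative only, not existing gpax behavior):

import jax.numpy as jnp
from typing import Dict

def TwoScaleRBFKernel(X: jnp.ndarray, Z: jnp.ndarray,
                      params: Dict[str, jnp.ndarray],
                      noise: int = 0, **kwargs: float) -> jnp.ndarray:
    # pairwise squared distances between rows of X (n, d) and Z (m, d)
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    # sum of two RBF terms with length scales l and 10*l
    k = params["k_scale"] * (
        jnp.exp(-0.5 * d2 / params["k_length"] ** 2)
        + jnp.exp(-0.5 * d2 / (10.0 * params["k_length"]) ** 2))
    if X.shape == Z.shape:
        k += noise * jnp.eye(X.shape[0])  # noise on the diagonal at train time
    return k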

Use of viMTDKL

Hi Maxim,

I was trying to test gpax.viMTDKL() on the dummy EELS dataset, but got stuck when preparing the data:

  1. Does the train_test_split() function support preparing a dataset for viMTDKL?
  2. I started with a simple case: use a Lorentzian function to fit for the height and position, which are used as the two scalarizers defined on the same input dimensions (please see the last section of this shared notebook: link). However, there is always an error saying the dimension is not correct.
  3. I have followed your steps in this example notebook of "Theory-informed optimization of experiment with multi-task GP" (link), and the code can run without any issue (although it is super slow for the 3D dataset).
  4. There is a flag called "shared_input_space" in gpax.viMTDKL(). When all the scalarizers are defined on the same coordinates/grid, should I set this flag to true?

Best,
Richard

Add explanation/examples on how to use utils.priors

The utils.priors module (code) has been introduced to streamline the placement of priors over model parameters, as well as to simplify the incorporation of prior mean functions. Several examples of their usage (in docstrings and/or in a separate notebook) would be beneficial.

Batch acquisition (botorch style)

Per the discussion in #57: implement batch acquisition methods that allow one to set a value for q such that the best combination of q points that jointly optimize the acquisition function is chosen.

Open discussions?

Suggestion: open the discussions tab (in settings, I think) so I (and others) don't have to spam the issues with questions 😁

We consider noisy observations of a discontinuous function...

Hello,
I just read your excellent article and jumped to your gpax lib. I have already used numpyro for NUTS/SVI and have a little practice with GPs, and your GP_sGP.ipynb is probably what I was looking for. I would like to be sure that I understand the use case developed at the end of your nb.

Below I display some data points, and I know, because of the process under the hood, that there is a discontinuity at x close to (but not exactly) 0. Of course, 1) I can fit a deterministic parametrized function piecewise, with a "linear" behaviour for x < x0 and x > x0; 2) on the other hand, I can use an RBF+noise GP, but then I miss the discontinuity.

Do you think that your example is specifically designed to handle this kind of discontinuity problem?

[image: noisy data points with a discontinuity near x = 0]

Thanks

Kernels with a different length scale on each axis

Is there any way to use GPax in its current state with kernels of the form, e.g.,

$$k(\mathbf{x}, \mathbf{x}') = e^{-\lambda_1(x_1 - x_1')^2} e^{-\lambda_2(x_2 - x_2')^2} e^{-\lambda_3(x_3 - x_3')^2},$$

where each dimension gets its own length scale, allowing for greater flexibility? This is akin to ard_num_dims in GPyTorch. It shouldn't be too hard, right? It should just boil down to modifying the kernel code so that params["k_length"] broadcasts correctly.
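It might indeed just be broadcasting; a hedged sketch, assuming params["k_length"] is stored as a vector with one length scale per input dimension (illustrative, not the current gpax kernel code):

import jax.numpy as jnp

def ARDRBFKernel(X, Z, params, noise=0, **kwargs):
    Xs = X / params["k_length"]   # (n, d) / (d,) broadcasts per dimension
    Zs = Z / params["k_length"]
    d2 = ((Xs[:, None, :] - Zs[None, :, :]) ** 2).sum(-1)
    k = params["k_scale"] * jnp.exp(-0.5 * d2)
    if X.shape == Z.shape:
        k += noise * jnp.eye(X.shape[0])  # noise on the diagonal at train time
    return k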

Quality-of-life suggestions for release, CI, semantic versioning, etc.

@ziatdinovmax I have a couple of suggestions for your consideration:

  • Pool all setup protocol into pyproject.toml. PEP 621 began the transition of Python setup to the "non-python" pyproject file. In GPax, the setup.py file is not necessary. We can pool all build instructions into the pyproject.toml file.
  • CI code can be slightly refactored for clarity.
  • Similarly, I have developed a procedure for on-the-fly semantic versioning from tags, similar to the old versioneer.py method, but my method is a few lines and relies on dunamai to grab the current HEAD's tag. I use this combined with "API-key-free" publishing to PyPI, my CI procedure, and GH environments to securely publish any new tagged release. The environment configuration requires manual confirmation before any release, in case you were worried about over-automation causing unintended things to happen. EDIT: there is a better way to do this using hatchling. See here.
  • Implement dependabot for automatic version bumps, and version-lock all dependencies. I think this is just good practice to ensure your code works even if a dependency releases an update that should be backwards compatible, but isn't. Example of that here.

The most recent example I have of all of this can be found here, with a slightly less recent example here.

I am more than happy to implement these QOL changes.

UIGP: Allow for different variance along different input feature dimensions

Currently, the UIGP class extends the standard Gaussian Process model to handle uncertain inputs. However, it assumes that the variance (sigma_x) is the same for all input feature dimensions. It would be a good idea to extend it to scenarios where we expect a different variance for each input parameter.
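A minimal numpyro sketch of the extension, assuming sigma_x becomes a vector with one component per input feature (names illustrative, not current UIGP code):

import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist

def sample_sigma_x(input_dim):
    # one HalfNormal-distributed input-noise scale per feature dimension
    return numpyro.sample(
        "sigma_x", dist.HalfNormal(jnp.ones(input_dim)).to_event(1))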

Test the new deployment system on the test PyPI server

repository-url: https://test.pypi.org/legacy/

@ziatdinovmax, just a reminder, I have set the deployment target to the testing PyPI server on purpose so you can play around with the new system without risk of deploying to the real PyPI.

Please let me know if you have any questions on how this works. Essentially, when you push a tagged commit to main, it should trigger an action that will test that commit, build everything, and then hold deployment until you approve it through the environment (which you have to set up). The tag should read something like v0.1.9.

ExactGP.predict and viGP.predict produce inconsistent shapes

The predict methods on ExactGP and viGP produce results of different shapes.

ExactGP.predict produces a 3-tensor, e.g. (2000, 200, 100).

viGP.predict produces a 1-tensor, e.g. (100,).

Is there any way to standardize the output of these methods? Also, it appears 2000 is the number of samples after warmup, and 200 the number of samples from the posterior. Maybe the output of viGP.predict should be (1, 200, 100) to make it consistent (since there's only a single value for e.g. samples["k_length"]). This should be easy enough to do by just using mean, cov = self.get_mvn_posterior(X_new, samples, noiseless, **kwargs) to draw 200 samples, I think. Let me know if I have this totally wrong.

In addition, it begs the question of whether an ABC for GPs should really be used. It would probably be best for the user if, in all cases possible, every core method of each GP produced the same type of object. I see that viGP inherits from ExactGP, but it might be best to have ExactGP (or whatever the most base class is) inherit from some ABC.
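A hedged sketch of what such a base class could pin down (names illustrative):

from abc import ABC, abstractmethod

class GPBase(ABC):
    @abstractmethod
    def fit(self, rng_key, X, y, **kwargs):
        ...

    @abstractmethod
    def predict(self, rng_key, X_new, **kwargs):
        """Must return arrays of a documented shape, consistent across subclasses."""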

Quality-of-life suggestions for release, CI, semantic versioning, etc.

Discussed in #56

Originally posted by matthewcarbone October 4, 2023

MCMC prediction stability issue - providing NAN values of variance (Issue in Google Colab GPU/HighRam setting)

This issue was encountered in Google Colab, under the T4 GPU and High-RAM setting.

When we run ExactGP.fit(), it produces NaN values in the standard deviation calculation. The error can be reproduced with all of the modifications I have already tried:

  • Normalized the data before fitting the model
  • Tried all the kernels: RBF, Matern, and Periodic
  • Tried different prior distributions: LogNormal, Normal, HalfNormal (the ones mostly used in GP/BO anyway)
  • Tried different noise priors
  • Also tried different sets of training data

As a current workaround, reducing the total number of MCMC samples to num_warmup=500, num_samples=500 (the default is num_warmup=1000, num_samples=3000) provides reasonable outputs.
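The workaround as a one-liner, assuming fit() accepts the num_warmup/num_samples keyword arguments quoted above (X_train and y_train are placeholders):

gp_model.fit(rng_key, X_train, y_train, num_warmup=500, num_samples=500)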

Feature: allow the user to scale X and y before fitting, predicting, etc.

Scaling features and targets to "reasonable" values (usually between -1 and 1) is a pre-processing step that experienced GP users will know to do. However, it would be nice to have this functionality built into the GPs themselves.

So in other words, if a user presented an X with features that were not within the range of -1 to 1, these could be optionally scaled before fitting. Those scaling parameters could then be saved and reapplied during prediction.

Similarly, we could consider scaling the outputs as well. @ziatdinovmax what do you think?
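A minimal sketch of the idea (all names illustrative): learn the scaling parameters at fit time and re-apply them at predict time.

import numpy as np

class MinMaxScaler:
    def fit(self, X):
        self.lo, self.hi = X.min(axis=0), X.max(axis=0)
        return self

    def transform(self, X):
        return 2.0 * (X - self.lo) / (self.hi - self.lo) - 1.0  # map to [-1, 1]

    def inverse_transform(self, Xs):
        return 0.5 * (Xs + 1.0) * (self.hi - self.lo) + self.lo  # undo the map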

Suggestion: refactor acquisition.py to a class structure

Currently, every acquisition function in acquisition.py is a function. I think these objects would work better as classes, since they take many common arguments and share a common abstraction. In addition, it would make the campaigning in #33 much more straightforward. For example, while certain acquisition functions require different things (EI requires best_f, whereas UCB requires beta), they could all have a common method, maybe update_as_function_of_data_and_observations, that calculates best_f for EI and does nothing for UCB, allowing for extension to more complex acquisition functions later.

I will implement this as a backwards-compatible feature for #33 but I definitely think you should consider making this change!
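A hedged sketch of the class structure (only the method name update_as_function_of_data_and_observations comes from the text above; everything else is illustrative):

from abc import ABC, abstractmethod

class AcquisitionFunction(ABC):
    def update_as_function_of_data_and_observations(self, y):
        pass  # default no-op, e.g. for UCB

    @abstractmethod
    def __call__(self, rng_key, model, X):
        ...

class EI(AcquisitionFunction):
    def update_as_function_of_data_and_observations(self, y):
        self.best_f = y.max()  # EI needs the incumbent best observation

    def __call__(self, rng_key, model, X):
        ...  # compute expected improvement relative to self.best_f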

Problem with models.sPM?

When using mean, samples = spm.predict(key2, X_unmeasured, take_point_predictions_mean=False), the predictions are calculated over the measured points, rather than over the unmeasured point array that is passed to .predict.

Is it possible to filter nans for ExactGP.predict inside UCB Acquisition functions calls?

Sometimes gpax.acquisition.UCB returns NaNs. This may be due to ExactGP.predict returning NaNs for some samples?

See the plot below (UCB is not shown because the array is full of NaNs):
[screenshot: Screen Shot 2023-08-23 at 3.34.26 pm]

Changing the random seed for data generation:
[screenshot: Screen Shot 2023-08-23 at 3.35.38 pm]

Code to reproduce nans

import gpax
import numpy as np
import matplotlib.pyplot as plt
import numpyro

gpax.utils.enable_x64()
SEED = 1
N_OBS = 6
X_RANGE = (-2, 5)

np.random.seed(SEED)


def observations(x, noise_sigma=0.05):
    noise = np.random.normal(0, noise_sigma, len(x))
    f = 1 / (x ** 2 + 1) * np.cos(np.pi * x)
    return f + noise


def generate_data():
    X_measured = np.random.uniform(*X_RANGE, N_OBS)
    X_unmeasured = np.linspace(*X_RANGE, 50)
    y_measured = observations(X_measured)
    y_true = observations(X_unmeasured, noise_sigma=0)
    return X_measured, y_measured, X_unmeasured, y_true


def get_gp_preds(X_measured, y_measured, X_unmeasured):
    rng_key1, rng_key2 = gpax.utils.get_keys(SEED)
    noise_prior = numpyro.distributions.Normal(1)
    gp_model = gpax.ExactGP(1, kernel='RBF', noise_prior_dist=noise_prior)
    gp_model.fit(rng_key1, X_measured, y_measured)

    y_pred, y_sampled = gp_model.predict(rng_key2, X_unmeasured, noiseless=True)
    y_up = np.nanquantile(y_sampled, 0.95, axis=0).ravel()
    y_low = np.nanquantile(y_sampled, 0.05, axis=0).ravel()
    ucb_values = gpax.acquisition.UCB(
        rng_key2, gp_model, X_unmeasured, beta=4,
        maximize=False, noiseless=True)

    return y_pred, y_up, y_low, ucb_values


def plot(X_measured, y_measured, X_unmeasured, y_true, y_pred, y_up, y_low, ucb_values):
    fig, ax = plt.subplots(1, 1, figsize=(4, 3))
    ax.plot(X_unmeasured, y_true, lw=3, ls='--', c='k', label='True', alpha=0.1)
    ax.scatter(X_measured, y_measured, c='k', label="Observations")
    ax.plot(X_unmeasured, y_pred, lw=2, c='tab:orange', label='Model')
    ax.fill_between(X_unmeasured, y_low, y_up, color='tab:orange', alpha=0.3)
    ax2 = ax.twinx()
    ax2.plot(X_unmeasured, ucb_values, lw=1.5, color='tab:purple', alpha=0.9, zorder=-100)
    ax.plot([], [], lw=1.5, color='tab:purple', alpha=0.9, zorder=-100, label='UCB')
    ax.legend(frameon=True)
    ax.set_xlim(X_RANGE)
    ax2.set_yticks([])
    fig.show()


def main():
    X_measured, y_measured, X_unmeasured, y_true = generate_data()
    y_pred, y_up, y_low, ucb_values = get_gp_preds(
        X_measured, y_measured, X_unmeasured
    )
    plot(
        X_measured, y_measured, X_unmeasured, y_true,
        y_pred, y_up, y_low, ucb_values
    )

    print(f"X_measured = {X_measured.tolist()}")
    print(f"y_measured = {y_measured.tolist()}")
    print(f"X_unmeasured = {X_unmeasured.tolist()}")


if __name__ == '__main__':
    main()

Data:


X_measured = [0.9191540329180179, 3.042271454095107, -1.9991993762785858, 0.1163280084228786, -0.9727087642802088, -1.3536298366184154]
y_measured = [-0.5510701474083296, -0.150299323805448, 0.24339790464416963, 0.8064144297202153, -0.42470373051379506, -0.19475225108675304]
X_unmeasured = [-2.0, -1.8571428571428572, -1.7142857142857144, -1.5714285714285714, -1.4285714285714286, -1.2857142857142858, -1.1428571428571428, -1.0, -0.8571428571428572, -0.7142857142857144, -0.5714285714285716, -0.4285714285714286, -0.2857142857142858, -0.14285714285714302, 0.0, 0.1428571428571428, 0.2857142857142856, 0.4285714285714284, 0.5714285714285712, 0.714285714285714, 0.8571428571428568, 1.0, 1.1428571428571428, 1.2857142857142856, 1.4285714285714284, 1.5714285714285712, 1.714285714285714, 1.8571428571428568, 2.0, 2.1428571428571423, 2.2857142857142856, 2.428571428571428, 2.571428571428571, 2.7142857142857144, 2.8571428571428568, 3.0, 3.1428571428571423, 3.2857142857142856, 3.428571428571428, 3.571428571428571, 3.7142857142857135, 3.8571428571428568, 4.0, 4.142857142857142, 4.285714285714286, 4.428571428571428, 4.571428571428571, 4.7142857142857135, 4.857142857142857, 5.0]
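As a stopgap, the acquisition could be computed directly from the posterior samples with nan-aware reductions; a sketch (illustrative, not the gpax implementation, and written for the maximize=True case):

import numpy as np

def nan_ucb(y_sampled, beta=4.0):
    mu = np.nanmean(y_sampled, axis=0)    # drop nan draws per location
    sigma = np.nanstd(y_sampled, axis=0)
    return mu + np.sqrt(beta) * sigma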

Fix documentation building

The documentation builds are failing, seemingly because we no longer have the __version__.py file.

Running Sphinx v6.2.1

Traceback (most recent call last):
  File "/home/docs/checkouts/readthedocs.org/user_builds/gpax/envs/latest/lib/python3.9/site-packages/sphinx/config.py", line 354, in eval_config_file
    exec(code, namespace)  # NoQA: S102
  File "/home/docs/checkouts/readthedocs.org/user_builds/gpax/checkouts/latest/docs/source/conf.py", line 28, in <module>
    with open(os.path.join(module_dir, '../../gpax/__version__.py')) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/docs/checkouts/readthedocs.org/user_builds/gpax/checkouts/latest/docs/source/../../gpax/__version__.py'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/docs/checkouts/readthedocs.org/user_builds/gpax/envs/latest/lib/python3.9/site-packages/sphinx/cmd/build.py", line 280, in build_main
    app = Sphinx(args.sourcedir, args.confdir, args.outputdir,
  File "/home/docs/checkouts/readthedocs.org/user_builds/gpax/envs/latest/lib/python3.9/site-packages/sphinx/application.py", line 207, in __init__
    self.config = Config.read(self.confdir, confoverrides or {}, self.tags)
  File "/home/docs/checkouts/readthedocs.org/user_builds/gpax/envs/latest/lib/python3.9/site-packages/sphinx/config.py", line 177, in read
    namespace = eval_config_file(filename, tags)
  File "/home/docs/checkouts/readthedocs.org/user_builds/gpax/envs/latest/lib/python3.9/site-packages/sphinx/config.py", line 367, in eval_config_file
    raise ConfigError(msg % traceback.format_exc()) from exc
sphinx.errors.ConfigError: There is a programmable error in your configuration file:

Traceback (most recent call last):
  File "/home/docs/checkouts/readthedocs.org/user_builds/gpax/envs/latest/lib/python3.9/site-packages/sphinx/config.py", line 354, in eval_config_file
    exec(code, namespace)  # NoQA: S102
  File "/home/docs/checkouts/readthedocs.org/user_builds/gpax/checkouts/latest/docs/source/conf.py", line 28, in <module>
    with open(os.path.join(module_dir, '../../gpax/__version__.py')) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/docs/checkouts/readthedocs.org/user_builds/gpax/checkouts/latest/docs/source/../../gpax/__version__.py'


Configuration error:
There is a programmable error in your configuration file:

Traceback (most recent call last):
  File "/home/docs/checkouts/readthedocs.org/user_builds/gpax/envs/latest/lib/python3.9/site-packages/sphinx/config.py", line 354, in eval_config_file
    exec(code, namespace)  # NoQA: S102
  File "/home/docs/checkouts/readthedocs.org/user_builds/gpax/checkouts/latest/docs/source/conf.py", line 28, in <module>
    with open(os.path.join(module_dir, '../../gpax/__version__.py')) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/docs/checkouts/readthedocs.org/user_builds/gpax/checkouts/latest/docs/source/../../gpax/__version__.py'

Should be an easy fix.
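One possible fix, sketched under the assumption that gpax is installed in the docs build environment: read the version from package metadata in docs/source/conf.py instead of the removed file.

from importlib.metadata import version as pkg_version

version = release = pkg_version("gpax")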

Move `priors` out of utils

Given the important role that the specification of prior distributions plays in GPax, it makes sense to move priors out of utils and into a separate module.

Remove requirements.txt?

I think requirements.txt is deprecated now; pip install -r requirements.txt can be completely replaced by bash scripts/install.sh, which reads directly from the pyproject.
