llnl / muygpys

A fast, pure python implementation of the MuyGPs Gaussian process realization and training algorithm.

License: Other

Python 94.33% Shell 0.14% Jupyter Notebook 5.54%
python scientific-computing machine-learning math-physics

muygpys's Introduction


Fast implementation of the MuyGPs scalable Gaussian process algorithm

MuyGPs is a scalable approximate Gaussian process (GP) model that affords fast prediction and hyperparameter optimization while retaining high-quality predictions and uncertainty quantification. MuyGPs achieves best-in-class speed and scalability by limiting inference to the information contained in the k nearest neighborhoods of prediction locations, for both hyperparameter optimization and prediction. This feature allows hyperparameters to be tuned by leave-one-out cross-validation optimizing a regularized loss function, as opposed to the more expensive likelihood evaluations required by similar sparse methods.
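For intuition only, the following is a minimal numpy sketch of the local-neighborhood prediction idea (it is not the library's implementation and omits the cross-validation-based hyperparameter tuning): each prediction requires a small k x k solve over its nearest training neighbors rather than an n x n solve over the full training set.

import numpy as np

def rbf(A, B, length_scale=1.0):
    # simple RBF kernel between two sets of points
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * length_scale**2))

def local_gp_means(X_query, X_train, y_train, k=30, eps=1e-5, length_scale=1.0):
    # brute-force nearest neighbors; the library instead wraps fast (approximate) indexes
    d2 = np.sum((X_query[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
    nn_idx = np.argsort(d2, axis=1)[:, :k]
    means = np.empty(len(X_query))
    for i, neighbors in enumerate(nn_idx):
        Xn, yn = X_train[neighbors], y_train[neighbors]
        Knn = rbf(Xn, Xn, length_scale) + eps * np.eye(k)   # k x k local kernel
        Kcross = rbf(X_query[i : i + 1], Xn, length_scale)  # 1 x k cross kernel
        means[i] = (Kcross @ np.linalg.solve(Knn, yn)).item()
    return means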

Tutorials and Examples

Automatically-generated documentation can be found at readthedocs.io.

Our documentation includes several jupyter notebook tutorials at docs/examples. These tutorials are also included in the online documentation.

See in particular the univariate regression tutorial for a step-by-step introduction to the use of MuyGPyS. See also the regression api tutorial describing how to coalesce the same simple workflow into a one-line call. A deep kernel model inserting a MuyGPs layer into a PyTorch neural network can be found in the torch tutorial.

Backend Math Implementation Options

As of release v0.6.6, MuyGPyS supports four distinct backend implementations of all of its underlying math functions:

  • numpy - basic numpy (the default)
  • JAX - GPU acceleration
  • PyTorch - GPU acceleration and neural network integration
  • MPI - distributed memory acceleration

It is possible to include the dependencies of any, all, or none of these backends at install time. Please see the below installation instructions.

MuyGPyS uses the MUYGPYS_BACKEND environment variable to determine which backend to use at import time. It is also possible to manipulate MuyGPyS.config to switch between backends programmatically. This is not advisable unless the user knows exactly what they are doing.

MuyGPyS will default to the numpy backend. It is possible to switch backends by manipulating the MUYGPYS_BACKEND environment variable in your shell, e.g.

$ export MUYGPYS_BACKEND=jax    # turn on JAX backend
$ export MUYGPYS_BACKEND=torch  # turn on Torch backend
$ export MUYGPYS_BACKEND=mpi    # turn on MPI backend

Just-In-Time Compilation with JAX

MuyGPyS supports just-in-time compilation of the underlying math functions to CPU or GPU using JAX since version v0.5.0. The JAX-compiled versions of the code are significantly faster than numpy, especially on GPUs. In order to use the MuyGPyS JAX backend, run the following command in your shell environment.

$ export MUYGPYS_BACKEND=jax

Distributed memory support with MPI

The MPI version of MuyGPyS performs all tensor manipulation in distributed memory. The tensor creation functions will in fact create and distribute a chunk of each tensor to each MPI rank. This data and subsequent data such as posterior means and variances remains partitioned, and most operations are embarrassingly parallel. Global operations such as loss function computation make use of MPI collectives like allreduce. If the user needs to reason about all products of an experiment, such as the full posterior distribution in local memory, it is necessary to employ a collective such as MPI.gather.
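For example, the following is a minimal mpi4py sketch of that pattern; the local chunk here is a synthetic stand-in for a rank's portion of, e.g., the posterior mean.

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# each rank holds only its chunk of the distributed result
local_chunk = np.full(10, float(rank))

# gather all chunks onto rank 0 and assemble the full vector there
chunks = comm.gather(local_chunk, root=0)
if rank == 0:
    full_result = np.concatenate(chunks)
    print(full_result.shape)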

The wrapped KNN algorithms are not distributed, and so MuyGPyS does not yet have an internal distributed KNN implementation. Future versions will support a distributed memory approximate KNN solution.

The user can run a script myscript.py with MPI using, e.g. mpirun (or srun if using slurm) via

$ export MUYGPYS_BACKEND=mpi
$ # mpirun version
$ mpirun -n 4 python myscript.py
$ # srun version
$ srun -N 1 --tasks-per-node 4 -p pbatch python myscript.py

PyTorch Integration

The torch version of MuyGPyS allows for construction and training of complex kernels, e.g., convolutional neural network kernels. All low-level math is done on torch.Tensor objects. Due to PyTorch's lack of support for the modified Bessel function of the second kind, we only support special cases of the Matern kernel, in particular when the smoothness parameter is $\nu = 1/2, 3/2,$ or $5/2$. The RBF kernel is supported as the Matern kernel with $\nu = \infty$.

The MuyGPyS framework is implemented as a custom PyTorch layer. In the high-level API found in examples/muygps_torch, a PyTorch MuyGPs model is assumed to have two components: a model.embedding which deforms the original feature data, and a model.GP_layer which does Gaussian Process regression on the deformed feature space. A code example is provided below.
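The following is a minimal sketch of that two-component structure. The MuyGPs_layer class name, its constructor arguments, and the embedding dimensions are assumptions for illustration only; consult the torch tutorial for the exact API.

from torch import nn
from MuyGPyS.torch.muygps_layer import MuyGPs_layer

class SVDKMuyGPs(nn.Module):
    # deep kernel model: a neural network embedding followed by a MuyGPs GP layer
    def __init__(
        self,
        muygps_model,
        batch_indices,
        batch_nn_indices,
        batch_targets,
        batch_nn_targets,
    ):
        super().__init__()
        # model.embedding deforms the original feature space (dimensions are placeholders)
        self.embedding = nn.Sequential(
            nn.Linear(40, 30),
            nn.ReLU(),
            nn.Linear(30, 10),
        )
        # model.GP_layer performs GP regression on the embedded features
        # (constructor arguments are assumed here)
        self.GP_layer = MuyGPs_layer(
            muygps_model,
            batch_indices,
            batch_nn_indices,
            batch_targets,
            batch_nn_targets,
        )

    def forward(self, x):
        predictions = self.embedding(x)
        predictions, variances = self.GP_layer(predictions)
        return predictions, variances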

Most users will want to use the MuyGPyS.torch.muygps_layer module to construct a custom MuyGPs model. The model can then be calibrated using a standard PyTorch training loop. An example of the approach based on the low-level API is provided in docs/examples/torch_tutorial.ipynb.

In order to use the MuyGPyS torch backend, run the following command in your shell environment.

$ export MUYGPYS_BACKEND=torch

One can also use the following workflow to programmatically set the backend to torch, although the environment variable method is preferred.

from MuyGPyS import config
config.update("muygpys_backend", "torch")

...subsequent imports from MuyGPyS

Precision

JAX and torch use 32 bit types by default, whereas numpy tends to promote everything to 64 bits. For highly stable operations like matrix multiplication, this difference in precision tends to result in a roughly 1e-8 disagreement between 64 bit and 32 bit implementations. However, MuyGPyS depends upon matrix-vector solves, which can result in disagreements up to 1e-2. Hence, MuyGPyS forces all back end implementations to use 64 bit types by default.
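The following numpy comparison illustrates the effect; it is not MuyGPyS code, and the size of the disagreement depends on how well conditioned the kernel matrix is.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(size=(50, 1))
# an RBF-style kernel matrix with a small nugget is poorly conditioned
K = np.exp(-((x - x.T) ** 2) / 0.5) + 1e-6 * np.eye(50)
y = rng.normal(size=50)

solve64 = np.linalg.solve(K, y)
solve32 = np.linalg.solve(K.astype(np.float32), y.astype(np.float32))

# matrix multiplication agrees closely, but the solve can disagree by much more
print(np.max(np.abs(K @ y - K.astype(np.float32) @ y.astype(np.float32))))
print(np.max(np.abs(solve64 - solve32)))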

However, the 64 bit operations are slightly slower than their 32 bit counterparts. MuyGPyS accordingly supports 32 bit types, but this feature is experimental and might have sharp edges. For example, MuyGPyS might throw errors or otherwise behave strangely if the user passes arrays of 64 bit types while in 32 bit mode. Be sure to set your data types appropriately.

A user can have MuyGPyS use 32 bit types by setting the MUYGPYS_FTYPE environment variable to "32", e.g.

$ export MUYGPYS_FTYPE=32  # use 32 bit types in MuyGPyS functions

It is also possible to manipulate MuyGPyS.config to switch between types programmatically. This is not advisable unless the user knows exactly what they are doing.

Installation

Pip: CPU

The index muygpys is maintained on PyPI and can be installed using pip. muygpys supports several optional extras flags that install additional dependencies when specified. If installing CPU-only with pip, consider the following extras:

  • hnswlib - install hnswlib dependency to support fast approximate nearest neighbors indexing
  • jax_cpu - install JAX dependencies to support just-in-time compilation of math functions on CPU (see below to install on GPU CUDA architectures)
  • torch - install PyTorch dependencies to employ GPU acceleration and the use of the MuyGPyS.torch submodule
  • mpi - install MPI dependencies to support distributed memory parallel computation. Requires that the user has installed a version of MPI such as mvapich or open-mpi.

$ # numpy-only installation. Functions will internally use numpy.
$ pip install --upgrade muygpys
$ # The same, but includes hnswlib.
$ pip install --upgrade muygpys[hnswlib]
$ # CPU-only JAX installation. Functions will be jit-compiled using JAX.
$ pip install --upgrade muygpys[jax_cpu]
$ # The same, but includes hnswlib.
$ pip install --upgrade muygpys[jax_cpu,hnswlib]
$ # MPI installation. Functions will operate in distributed memory.
$ pip install --upgrade muygpys[mpi]
$ # The same, but includes hnswlib.
$ pip install --upgrade muygpys[mpi,hnswlib]
$ # pytorch installation. MuyGPyS.torch will be usable.
$ pip install --upgrade muygpys[torch]

Pip: GPU (CUDA)

JAX GPU Instructions

JAX also supports just-in-time compilation to CUDA, making the compiled math functions within MuyGPyS runnable on NVIDIA GPUs. This requires you to install CUDA and CuDNN in your environment, if they are not already installed, and to ensure that they are on your environment's $LD_LIBRARY_PATH. See scripts for an example environment setup.

MuyGPyS no longer supports automated GPU-supported JAX installation using pip extras. To install JAX as a dependency for MuyGPyS to be deployed on cuda-capable GPUs, please read and follow the JAX installation instructions. After installing JAX, the user will also need to install Tensorflow Probability with a JAX backend via

$ pip install "tensorflow-probability[jax]>=0.16.0"

PyTorch GPU Instructions

MuyGPyS does not and most likely will not support installing CUDA PyTorch with an extras flag. Please install PyTorch separately.

From Source

This repository includes several extras_require optional dependencies.

  • tests - install dependencies necessary to run tests
  • docs - install dependencies necessary to build the docs
  • dev - install dependencies for maintaining code style, running performance benchmarks, linting, and packaging (includes all of the dependencies in tests and docs).

For example, follow these instructions to install from source for development purposes with JAX support:

$ git clone git@github.com:LLNL/MuyGPyS.git
$ cd MuyGPyS
$ pip install -e .[dev,jax_cpu]

If you would like to perform a GPU installation from source, you will need to install the jax dependency directly instead of using the jax_cuda flag or similar.

Additionally check out the develop branch to access the latest features in between stable releases. See CONTRIBUTING.md for contribution rules.

Full list of extras flags

  • hnswlib - install hnswlib dependency to support fast approximate nearest neighbors indexing
  • jax_cpu - install JAX dependencies to support just-in-time compilation of math functions on CPU (see the GPU instructions above to install on CUDA architectures)
  • torch - install PyTorch
  • mpi - install MPI dependency to support parallel computation
  • tests - install dependencies necessary to run tests
  • docs - install dependencies necessary to build the docs
  • dev - install dependencies for maintaining code style, linting, and packaging (includes all of the dependencies in tests and docs)

Building Docs

In order to build the docs locally, first pip install from source using either the docs or dev options and then execute:

$ sphinx-build -b html docs docs/_build/html

Finally, open the file docs/_build/html/index.html in your browser of choice.

Testing

In order to run tests locally, first pip install MuyGPyS from source using either the dev or tests options. All tests in the tests/ directory are then runnable as python scripts, e.g.

$ python tests/kernels.py

Individual absl unit test classes can be run in isolation, e.g.

$ python tests/kernels.py DifferencesTest

The user can run most tests in all backends. Some tests use backend-dependent features, and will fail with informative error messages when attempting an unsupported backend. The user need only set MUYGPYS_BACKEND prior to running the desired test, e.g.,

$ export MUYGPYS_BACKEND=jax
$ python tests/kernels.py

If the MPI dependencies are installed, the user can also run absl tests using MPI, e.g. using mpirun

$ export MUYGPYS_BACKEND=mpi
$ mpirun -n 4 python tests/kernels.py

or using srun

$ export MUYGPYS_BACKEND=mpi
$ srun -N 1 --tasks-per-node 4 -p pdebug python tests/kernels.py

About

Authors

  • Benjamin W. Priest (priest2 at llnl dot gov)
  • Amanda L. Muyskens (muyskens1 at llnl dot gov)
  • Alec M. Dunton (dunton1 at llnl dot gov)
  • Imène Goumiri (goumiri1 at llnl dot gov)

Papers

MuyGPyS has been used in the following papers (newest first):

  1. Scalable Gaussian Process Hyperparameter Optimization via Coverage Regularization
  2. Light Curve Completion and Forecasting Using Fast and Scalable Gaussian Processes (MuyGPs)
  3. Fast Gaussian Process Posterior Mean Prediction via Local Cross Validation and Precomputation
  4. Gaussian Process Classification for Galaxy Blend Identification in LSST
  5. Star-Galaxy Image Separation with Computationally Efficient Gaussian Process Classification
  6. Star-Galaxy Separation via Gaussian Processes with Model Reduction

Citation

If you use MuyGPyS in a research paper, please reference our article:

@article{muygps2021,
  title={MuyGPs: Scalable Gaussian Process Hyperparameter Estimation Using Local Cross-Validation},
  author={Muyskens, Amanda and Priest, Benjamin W. and Goumiri, Im{\`e}ne and 
  Schneider, Michael},
  journal={arXiv preprint arXiv:2104.14581},
  year={2021}
}

License

MuyGPyS is distributed under the terms of the MIT license. All new contributions must be made under the MIT license.

See LICENSE-MIT, NOTICE, and COPYRIGHT for details.

SPDX-License-Identifier: MIT

Release

LLNL-CODE-824804

muygpys's People

Contributors

akilandrews, alecmdunton, bwpriest, igoumiri


muygpys's Issues

Need to move all fast kernel interpolation tests into their own file(s)

For the foreseeable future the fast kernel interpolation workflow will not support MPI. However, relevant functions such as MuyGPyS.gp.distance.make_fast_regress_tensors and MuyGPyS.gp.muygps.MuyGPS.fast_regress are tested in the same files as other core functions that do support MPI. This means that those files cannot be tested in MPI mode without throwing errors. Accordingly, we need to move all of these functions into their own test scripts so that the in-depth versionwise CI can function properly.

Refactor backend tests so that there are numpy and (jax/torch) versions of the `MuyGPS` objects

The jax and torch correctness tests currently create singular MuyGPS objects and use them to create the objective functions for optimization. However, now that we are using HeteroscedasticNoise objects with nontrivial tensor internals, it matters that we create different kwargs like

cls.k_kwargs_heteroscedastic_n = {
    ...
    "eps": HeteroscedasticNoise(cls.eps_heteroscedastic_n),
}
cls.k_kwargs_heteroscedastic_j = {
    ...
    "eps": HeteroscedasticNoise(cls.eps_heteroscedastic_j),
}

and then create different MuyGPS objects like

cls.muygps_heteroscedastic_n = MuyGPS(**cls.k_kwargs_heteroscedastic_n)
cls.muygps_heteroscedastic_j = MuyGPS(**cls.k_kwargs_heteroscedastic_j)

Need to support sigma_sq optimization in PyTorch

Torch doesn't like it when we try to use, e.g., muygps_sigma_sq_optim on a MuyGPS object because of an issue with deepcopy. Will need to figure out a way to support computation of sigma_sq in MuyGPyS.torch.muygps_layer that doesn't require calling anything in MuyGPyS._src.

Need torch MuyGPyS layer to support 64 bit optimization

We currently need to use $ export MUYGPYS_FTYPE=32 for MuyGPyS.torch.muygps_layer to perform correctly during optimization. This is because .float() is hardcoded therein. We need to modify this behavior so that it depends on mm.ftype.

Major refactor to `sigma_sq` incoming

Thus far, we have been constructing kernels of the form

$$ \sigma^2 (K + \tau^2 I_n) $$

and optimizing $\sigma^2$ with a closed-form equation. However, with the leave-one-out-likelihood we can now effectively optimize $\sigma^2$ directly. This will mean casting it as a ScalarHyperparameter and hooking it into the optimization chassis like the other hyperparameters. It will also allow us to bring our kernel model into the following more standard formulation

$$ \sigma^2 K + \tau^2 I_n. $$

I believe that it will not be worthwhile to simultaneously maintain the old way of doing things alongside the new method, since the leave-one-out-likelihood is vastly superior to mean squared error as a loss function. However, we need to demonstrate that the new formulation is performant and sensitive to $\sigma^2$ before we can incorporate changes into the code. Assuming that all of this is successful, we may want to deprecate mse_fn and cross_entropy_fn in favor of loss functions like lool_fn that directly regulate the variance with coverage or similar.

`fast_posterior_mean` needs to be rephrased as a composition

We need the fast posterior mean to be rewritten as a composition, similar to how the posterior mean function works. This should be in the form of a functor class similar in form to PosteriorMean less the optimization members (since the fast mean is not involved in optimization).

docs/examples/fast_regression_tutorial.ipynb is broken

This probably was caused by something that I did. However, I looked at the notebook more closely and I think that it needs a facelift anyway. It is meant to be read and understood, which I do not think a non-expert can do in its current state. We need to clean it up, fix the problem, and add a lot more markdown.

Need to refactor `tests/backend` tests to depend upon model choices

Right now, we need to combinatorially make tests for each combination of model choices (e.g. DistortionModel and NoiseModel). It would be preferable to instead make a single test interface that takes these choices as arguments. This might be complicated. The pseudo code would look something like

class ModelChassis(OtherSuperclass):
    @classmethod
    def setUpClass(cls):
        super(ModelChassis, cls).setUpClass()

    def __init__(self, noise_model, distortion_model, noise_args, distortion_args):
        # Set up members

    def foo_test(self, *args, **kwargs):
        # do absl testing of specific members

class AllModelTests(SuperClass):
    @classmethod
    def setUpClass(cls):
        super(AllModelTests, cls).setUpClass()
        cls.homoscedastic_isotropic_model_tests = ModelChassis(
            HomoscedasticNoise, IsotropicDistortion, [additional_arguments]
        )
        cls.heteroscedastic_isotropic_model_tests = ModelChassis(
            HeteroscedasticNoise, IsotropicDistortion, [additional_arguments]
        )
        cls.homoscedastic_anisotropic_model_tests = ModelChassis(
            HomoscedasticNoise, AnisotropicDistortion, [additional_arguments]
        )
        cls.heteroscedastic_anisotropic_model_tests = ModelChassis(
            HeteroscedasticNoise, AnisotropicDistortion, [additional_arguments]
        )

    def foo_test(self, *args, **kwargs):
        self.homoscedastic_isotropic_model_tests.foo_test(*args, **kwargs)
        self.heteroscedastic_isotropic_model_tests.foo_test(*args, **kwargs)
        self.homoscedastic_anisotropic_model_tests.foo_test(*args, **kwargs)
        self.heteroscedastic_anisotropic_model_tests.foo_test(*args, **kwargs)

    ...

We might be able to figure out a single, implementation-independent chassis for doing this for all backend tests, but that also might not be possible due to the differences between the implementations.

Implementation type should be a unified config variable

Instead of several boolean flags muygpys_jax_enabled, muygpys_mpi_enabled, muygpys_pytorch_enabled, etc, we should use a single muygpys_backend variable that specifies which backend to use. Will need to update the _src infrastructure, as well as the CI and docs.

Need to simplify `MuyGPyS.examples`

As the library has matured, we have moved away from relying upon one-line interfaces. In the next release we should crystallize MuyGPyS.examples into single-use functions that are meant to be read and used as tutorials, but not actually used in anger by data scientists. This will involve removing MuyGPyS.examples.from_indices, moving the make_*_regressor functions into MuyGPyS._test, and removing support for MultivariateMuyGPS from all of the workflows. We might want to add a separate MultivariateMuyGPS-based example once that class is "finished".

numpy parallelism is slow

The solve operations actually speed up when numpy is limited to use only one core, i.e.

import os
os.environ["OMP_NUM_THREADS"] = "1"
import numpy as np

It is not obvious why this is the case, nor how to fix it.

estimating sigma_sq using an rbf kernel appears to be broken

Trying to compute sigma_sq via MuyGPS.get_sigma_optim and get_analytic_sigma (for BenchmarkGP) appears to not be working, and will produce wrong results when trying to find a known sigma_sq value. See the test cases GPSigmaSqBaselineTest and GPSigmaSqOptimTest inside of tests/optimize.py. Using "l2" rather than "F2" to create the pairwise distances appears to solve the issue in the latter case, but I am at a loss as to why.

`lool` loss can be negative. Is this intended?

The lool loss is implemented as

$$ \sum_{i \in B} \log (\sigma^2_i(\theta)) + \frac{(Y(x_i) - \mu_i(\theta))^2}{\sigma^2_i(\theta)} $$

according to the paper, where $\sigma^2_i(\theta)$ is the posterior variance evaluated on the batch element $i$. However, this seems like it might be causing an issue that trips up the optimizer. If the posterior variance is $\ll 1.0$, then the $\log (\sigma^2_i(\theta))$ term can be (very) negative. Is this intended? It can cause the resultant obj_fn to have the opposite sign of what the code expects. I am not sure if this is a bug or not.
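As a quick numerical check of that observation (illustrative only, not the library's lool_fn):

import numpy as np

def lool(y, mu, sigma2):
    # sum_i log(sigma2_i) + (y_i - mu_i)^2 / sigma2_i, per the formula above
    return np.sum(np.log(sigma2) + (y - mu) ** 2 / sigma2)

y = np.zeros(5)
mu = np.full(5, 1e-3)      # nearly perfect predictions
sigma2 = np.full(5, 1e-4)  # posterior variances much smaller than 1

print(lool(y, mu, sigma2))  # about -46: the log terms dominate and the sum goes negative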

Missing requirement for jax

Ran into this when trying to use jax on pascal:

...
  File "/g/g90/goumiri1/src/monetgrams/venv/lib/python3.9/site-packages/MuyGPyS/_src/gp/kernels/jax.py", line 8, in <module>
    from tensorflow_probability.substrates import jax as tfp
ModuleNotFoundError: No module named 'tensorflow_probability'

Running pip install tensorflow_probability fixed it.

loss method propagation is ungainly

Right now, when adding a loss method, we need to modify a bunch of helper functions, which is brittle and bug-prone. We need to reengineer the loss function harness such that the loss functions in MuyGPyS.optimize.loss are functions that carry the details of their optimization function behavior with them.

need overhaul of `tests/optimize.py`

I pretty badly broke tests/optimize.py back in March, and left it incomplete. It needs to be refactored and have tests added back in verifying that we can find all of the parameters that are sensitive to optimization. The issue is that I tried to refactor the tests so that curves are sampled only once (the slow part), but it seems that somewhere I made a bad assumption, so it will most likely be fastest to rewrite the whole sampling boilerplate from the ground up.

library needs to be more functional

I have come to the conclusion that MuyGPyS has too many procedural conditionals to be maintainable. We need to refactor much of the library to functionally construct MuyGPs processes in a way that is set at object creation time. In particular, we need to break MuyGPS.regress into MuyGPS.posterior_mean() and MuyGPS.posterior_variance(). Rather than writing these methods as normal member functions, they should be defined at object creation time based upon model choices - e.g. homo/heteroscedasticity, variance form, etc. There are other examples that I will document in this thread as they arise.
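A generic illustration of that pattern (not MuyGPyS code): the implementation is chosen once when the object is built, so no conditional is evaluated inside the prediction path.

import numpy as np

def _variance_homoscedastic(diag, eps):
    # a single scalar noise value added uniformly
    return diag + eps

def _variance_heteroscedastic(diag, eps_vector):
    # per-observation noise added elementwise
    return diag + eps_vector

class TinyModel:
    def __init__(self, heteroscedastic: bool):
        # bind the appropriate implementation when the model is built,
        # instead of branching on a flag inside every call
        self.posterior_variance = (
            _variance_heteroscedastic if heteroscedastic else _variance_homoscedastic
        )

model = TinyModel(heteroscedastic=False)
print(model.posterior_variance(np.ones(3), 1e-5))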

Need to change assertEqual calls to _check_ndarray calls

When we make a call in the testing harness such as self.assertEqual(kern.shape, (train_count, nn_count, nn_count)), we really want to make a call like _check_ndarray(self.assertEqual, kern, mm.ftype, (test_count, nn_count, nn_count)).

Need heteroscedastic noise (`eps`) parameter

We need to support vector-valued eps parameters. Similar to MuyGPyS.gp.kernel.SigmaSq, we should break eps out of MuyGPyS.gp.kernel.Hyperparameter and make it its own class. We probably only want to support learning the epsilon parameter in the homoscedastic case, though, so we will need to maintain some guardrails to make sure that it is treated as a scalar where appropriate and as a vector where appropriate.

Anisotropic Modeling

We need to add a feature that allows for anisotropic modeling. This will involve changing the optimization chassis and creating functionality that takes individual distance tensors, weights them, and produces the distance tensor for the full dataset.
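A conceptual numpy sketch of what such weighting does (not the library's implementation): each feature dimension is divided by its own length scale before the pairwise distances are formed.

import numpy as np

def anisotropic_pairwise_l2(X, length_scales):
    # scale each feature dimension by its own length scale, then take l2 distances
    scaled = X / length_scales
    diffs = scaled[:, None, :] - scaled[None, :, :]
    return np.sqrt(np.sum(diffs**2, axis=-1))

X = np.random.default_rng(0).normal(size=(5, 3))
print(anisotropic_pairwise_l2(X, np.array([1.0, 0.5, 2.0])))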

Refactor `__init__` methods throughout

Right now we are initializing lower level classes (KernelFns and Hyperparameters) inside of MuyGPS. However, I believe that we want to modify this so that the user creates the lower level classes directly and passes them to MuyGPS. This will serve to both make the API more readable and make the code internals more maintainable.

The proposed change would move the current creation logic from

k_kwargs = {
    "kern": "matern",
    "metric": "l2",
    "eps": {"val": 1e-5},
    "nu": {"val": "log_sample", "bounds": (0.1, 5.0)},
    "length_scale": {"val": 1.0},
}
muygps = MuyGPS(**k_kwargs)

to something like

muygps = MuyGPS(
    Matern(
        distortion=Isotropic("l2"),
        nu=Hyperparameter("log_sample", (0.1, 5.0)),
        length_scale=Hyperparameter(1.0),
    ),
    eps=HomoscedasticNoise(1e-5),
    sigma_sq=SigmaSq(1.0),
)

Should refactor library internals to be "ifless"

Presently, brittle conditionals still proliferate in the codebase. We need to eventually remove all of this stuff with proper object-oriented design and inheritance to avoid yucky updates when we have to add features.

Need documentation overhaul

The addition of new classes and packages means that we need to update the Sphinx documentation chassis prior to release.

fast regression tutorial is a mess and needs a rewrite

What it says on the tin. The fast regression tutorial is not readable in its current state. I think we can simplify it by removing optimization, as it is not relevant to the tutorial. The purpose is to demonstrate the improved speed and the minimal loss of posterior mean accuracy afforded by the fast regression workflow.

optimization functions need an elegant copy method for creating the returned model

We are currently using deepcopy, which is unnecessarily heavy (especially once we start using heteroscedastic noise). What we need is a copy method that does something like

def copy_model(model: MuyGPS) -> MuyGPS:
    ret = MuyGPS()
    # for all fixed parameters of model, set ret's corresponding references to those objects (no copying)
    # for all free parameters of model, copy those objects into ret's corresponding members (copying)
    return ret

Need to move hyperparameter handling logic into its own class

This class should also be what is returned by optimization functions, rather than MuyGPS objects. We will then need a subsequent function that uses the hyperparameter wrapper and optional kwargs (e.g. a new heteroscedastic noise tensor) to create a new MuyGPS class. This should streamline and harden the optimization logic, avoid the optimization deepcopies (issue #105), and provide a unified interface to all workflows.

Need to move `length_scale` parameter inside of the distortion model and out of the kernel functions

This will make it possible to hold multiple dimension-wise length_scale parameters inside of AnisotropicDistortion without conflicting with the length_scale parameter that is currently inside of the kernel functions. It will be necessary to add get_optim_params and get_opt_fn methods to the distortion models and to hook them into those of Matern and RBF. It will probably make the most sense to add extracting the distortion parameters to the base KernelFn and to have the higher-level functions extend those functions (e.g. Matern will need to add nu).
