
gpytorch's People

Contributors

adamjstewart, andrewgordonwilson, austereantelope, balandat, bcjuan, bramsw, darbour, dme65, docusaurus-bot, douglas-boubert, gpleiss, gpleiss-asapp, jacobrgardner, jahall, keawang, martinjankowiak, mshvartsman, ninelk, partev, philippthoelke, rajkumarkarthik, rhaps0dy, saitcakmak, samuelstanton, sdaulton, vishwakftw, wecacuee, wjmaddox, wrh14, zitongzhou


gpytorch's Issues

gradients didn't backward to the end

I don't have much experience with Gaussian processes, but I've found this repo very helpful! I ran into a problem with the GPModel module.

When the input of the Gaussian process model comes from some embedding layers (which are part of the model), the embedding layers' weights have no gradients after the backward pass.

Here's an example.

import gpytorch
import torch
from gpytorch.kernels import RBFKernel
from gpytorch.likelihoods import GaussianLikelihood
from gpytorch.means import ConstantMean
from gpytorch.random_variables import GaussianRandomVariable
from torch import nn
from torch.autograd import Variable


class LatentFunction(gpytorch.AdditiveGridInducingPointModule):
    def __init__(self):
        super(LatentFunction, self).__init__(grid_size=100, grid_bounds=[(-10, 10)], n_components=2)
        self.mean_module = ConstantMean(constant_bounds=[-1e-5, 1e-5])
        self.covar_module = RBFKernel(log_lengthscale_bounds=(-5, 6))
        self.register_parameter('log_outputscale', nn.Parameter(torch.Tensor([0])), bounds=(-5, 6))

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        covar_x = covar_x.mul(self.log_outputscale.exp())
        latent_pred = GaussianRandomVariable(mean_x, covar_x)
        return latent_pred


class GPRegressionModel(gpytorch.GPModel):
    def __init__(self):
        super(GPRegressionModel, self).__init__(GaussianLikelihood())
        self.latent_function = LatentFunction()

    def forward(self, x):
        return self.latent_function(x)


if __name__ == '__main__':
    n = 10
    embs = nn.Embedding(10, 2)
    train_x = (torch.rand(n) * 10).type(torch.LongTensor)
    train_x = Variable(train_x)
    train_x = embs(train_x)
    train_y = (train_x.data[:, :1] - train_x.data[:, 1:]).norm(p=2, dim=1)
    train_y = Variable(train_y)

    model = GPRegressionModel()
    output = model(train_x)
    loss = -model.marginal_log_likelihood(output, train_y)
    loss.backward()
    print('Embedding has no gradients?', embs.weight.grad is None)

    if train_x.grad:
        train_x.grad.zero_()
    model = nn.Linear(2, 1)
    output = model(train_x)
    loss = nn.functional.mse_loss(output, train_y)
    loss.backward()

    print('Linear has no gradients?', embs.weight.grad is None, '\tnorm:', embs.weight.grad.norm().data[0])

The output is

Embedding has no gradients? True
Linear has no gradients? False
norm: 1.098832130432129

I noticed that it extends the "torch.nn.Module" class, so I expected it to behave like other "torch.nn.Module" subclasses (such as linear or conv layers), but it looks like it doesn't.

I don't know whether this is inherent to Gaussian processes or a bug. Please correct me if I've made a trivial mistake.

Many thanks!

Multi dimension Target

Hi,
Thank you for the amazing repo. Is there an example showing how to work with the given code when the input and target have dimensions like [N x M], where N is the number of features and M the number of samples? I checked the examples with multi-dimensional inputs, but they still have one-dimensional targets.

Best,
Monica.

Inverse & Root for KroneckerLazyVar

For KroneckerLazyVariable, where you can easily get the exact inverse and root, why do you still use conjugate gradients and Lanczos? These seem slower to me, because the true size of the matrix is really big, so with only a small number of iterations the result might be far from a good solution.
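For context, a minimal numpy check of the identity this question relies on: the inverse of a Kronecker product is the Kronecker product of the inverses, so an exact inverse only requires inverting the small factors. (This sketch ignores the case where the factors are themselves structured or lazy, which may be one reason iterative methods are still used.)

import numpy as np

rng = np.random.RandomState(0)
A = rng.randn(3, 3); A = A @ A.T + 3 * np.eye(3)   # small SPD factor
B = rng.randn(4, 4); B = B @ B.T + 4 * np.eye(4)   # small SPD factor

lhs = np.linalg.inv(np.kron(A, B))                 # invert the full 12x12 matrix
rhs = np.kron(np.linalg.inv(A), np.linalg.inv(B))  # invert only the factors
print(np.allclose(lhs, rhs))                       # True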

Test failure in test_function_factory.py

The offending line is https://github.com/cornellius-gp/gpytorch/blob/master/test/util/test_function_factory.py#L243

The last .dot operation errors with

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-28-1e6c865d4b6a> in <module>()
----> 1 t.test_inv_quad_log_det_many_vectors()

<ipython-input-25-045795d3c376> in test_inv_quad_log_det_many_vectors(self)
     24     def test_inv_quad_log_det_many_vectors(self):
     25         # Forward pass
---> 26         actual_inv_quad = self.mat_var_clone.inverse().matmul(self.vecs_var_clone).dot(self.vecs_var_clone)
     27         with gpytorch.settings.num_trace_samples(1000):
     28             nlv = NonLazyVariable(self.mat_var)

RuntimeError: Expected argument self to have 1 dimension, but has 2

I'm not entirely sure what this code does, so I'll leave this for @gpleiss to fix.

Not sure if that happens on stable pytorch, but it definitely happens on pytorch master. Given that docs back to 0.3.0 state that .dot does not broadcast, I'm assuming this is unrelated though.
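For reference, a hedged sketch (not necessarily how the test should be fixed): torch.dot only accepts 1-D tensors, so for a matrix of solves the quadratic forms can be computed with an elementwise product and a sum instead.

import torch

A = torch.randn(5, 5)
A = A @ A.t() + 5 * torch.eye(5)   # SPD stand-in for mat_var_clone
V = torch.randn(5, 3)              # several right-hand sides ("many vectors")

solves = torch.inverse(A) @ V      # shape (5, 3); calling .dot here raises the error above
inv_quads = (solves * V).sum(0)    # per-column v^T A^{-1} v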

Fix seed for tests running on CI

A few of the tests fail intermittently. For the most part, I don't think that these failures represent any big errors or numerical instabilities.

I propose locking down the seed for the tests (at least only on CI, for now). This way we'll actually listen to TravisCI when it breaks.
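A minimal sketch of what locking down the seed could look like (the helper name, hook, and seed value are placeholders, not the actual CI setup):

import random
import torch

def set_test_seed(seed=0):
    random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)

# e.g. call set_test_seed() in each TestCase.setUp, or only when a CI environment variable is set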

cc/ @jrg365

Minimal Documentation

Hi,

thanks for releasing this repository - it's really cool 👍

I was just wondering if you would be providing any minimal documentation? The examples are really nice, though some extra comments would be very helpful, especially for folks new to GPs.

Maybe easier for you: perhaps references to tutorials/other learning materials/video lectures? I'm guessing a lot of newbies will be drawn to this repo :)

Thanks.

RBF Kernel Change Breaks Testing Code

The change to RBFKernel in 84fccd8 may break something about our prediction code.

I am not totally sure what the problem is yet, but I isolated this commit as the culprit with git bisect, and I have a reasonable test case where results are significantly worse with the commit in place than after reverting it.

It seems like the stability issues we encountered when making this change in the past don't come up in the unit tests, but do on some real datasets.

I can try to push my test case to a branch as well, although it relies on a UCI dataset.

@Balandat @gpleiss

fftw linking issue

Hi,

Here's the error I get when importing gpytorch:

/usr/local/lib/python3.5/dist-packages/gpytorch-0.1-py3.5-linux-x86_64.egg/gpytorch/libfft/__init__.py in <module>()
      1 
      2 from torch.utils.ffi import _wrap_function
----> 3 from ._libfft import lib as _lib, ffi as _ffi
      4 
      5 __all__ = []

ImportError: /usr/local/lib/python3.5/dist-packages/gpytorch-0.1-py3.5-linux-x86_64.egg/gpytorch/libfft/_libfft.abi3.so: undefined symbol: fftwf_plan_many_dft_r2c

ldd on _libfft.abi3.so (produced by build.py) gives:

$ ldd /usr/local/lib/python3.5/dist-packages/gpytorch-0.1-py3.5-linux-x86_64.egg/gpytorch/libfft/_libfft.abi3.so
	linux-vdso.so.1 =>  (0x00007ffcba739000)
	libcufft.so.8.0 => /usr/local/cuda-8.0/targets/x86_64-linux/lib/libcufft.so.8.0 (0x00007f34143cc000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f34141af000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f3413de4000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f3413be0000)
	libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f34138d7000)
	librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f34136ce000)
	libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f341334c000)
	libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f3413136000)
	/lib64/ld-linux-x86-64.so.2 (0x000055835712f000)

which shows that it wasn't properly linked to libfftw3f.so, which has the missing symbol. However, I think I added the proper paths (and libraries) to the build.py script:

diff --git a/build.py b/build.py
index fe1ca9d..aabe4f5 100644
--- a/build.py
+++ b/build.py
@@ -6,8 +6,9 @@ headers = ['gpytorch/csrc/fft.h']
 sources = ['gpytorch/csrc/fft.c']
 defines = []
 with_cuda = False
-libraries = ['fftw3']
-library_dirs = ['/usr/local/lib']
+libraries = ['fftw3', 'fftw3f']
+library_dirs = ['/usr/local/lib', '/usr/lib/x86_64-linux-gnu/']
+runtime_library_dirs = ['/usr/lib/x86_64-linux-gnu/']
 
 if torch.cuda.is_available():
     cuda_home = os.getenv('CUDA_HOME') or '/usr/local/cuda'
@@ -32,4 +33,5 @@ ffi = create_extension(
     with_cuda=with_cuda,
     package=True,
     relative_to=__file__,
+    runtime_library_dirs=runtime_library_dirs
 )

I'm on Ubuntu 16.04, with CUDA and an up-to-date PyTorch installation. Any idea how I can fix this?

Thanks!
Simon

Migrate RandomVariable to use PyTorch distributions

GPyTorch RandomVariables are essentially doing the same thing as PyTorch Distributions. We should try to leverage the PyTorch interface as much as possible.

Similarly to Pyro, we probably want to subclass each of the distributions (or at least subclass the MultivariateNormal distribution). This will make it possible to work with LazyVariable covariances and to use our custom fast sampling code.

Related to #123
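A rough sketch (not the actual gpytorch design) of what subclassing the PyTorch distribution could look like; the lazy_covar argument and its evaluate() method are hypothetical stand-ins for a LazyVariable:

import torch
from torch.distributions import MultivariateNormal

class LazyMultivariateNormal(MultivariateNormal):
    def __init__(self, mean, lazy_covar):
        # Keep the lazy covariance around; only evaluate it for the base class.
        self.lazy_covar = lazy_covar
        super(LazyMultivariateNormal, self).__init__(mean, covariance_matrix=lazy_covar.evaluate())

    def rsample(self, sample_shape=torch.Size()):
        # Custom fast sampling (e.g. via a root decomposition) would go here;
        # this sketch just falls back to the stock implementation.
        return super(LazyMultivariateNormal, self).rsample(sample_shape)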

Get rid of Variable

In pytorch 0.4, variables are just a legacy concept. Instead, tensors now take the requires_grad argument. Since master is now 0.4+, we should clean things up and replace all occurrences of Variable in the code with the appropriate Tensor equivalent.
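For illustration, the kind of mechanical change this involves:

import torch
from torch.autograd import Variable

# pre-0.4 style, to be removed
x_old = Variable(torch.randn(5), requires_grad=True)

# 0.4+ style: plain tensors carry requires_grad directly
x_new = torch.randn(5, requires_grad=True)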

kernels/spectral_mixture_kernel.py

Line 63: exp_term = (distance * mixture_scales).pow_(2).mul_(-2 * math.pi ** 2)

According to Wilson (2013), the mixture scales are not squared. Am I missing something here?
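For reference (as I recall it), the one-dimensional spectral mixture kernel in Wilson & Adams (2013) is

$$k(\tau) = \sum_{q=1}^{Q} w_q \exp\left(-2\pi^2 \tau^2 v_q\right) \cos\left(2\pi \tau \mu_q\right),$$

where $v_q = \sigma_q^2$ are the mixture variances. So whether the code should square mixture_scales depends on whether that parameter stores the standard deviations $\sigma_q$ (in which case .pow_(2) recovers $v_q$) or the variances $v_q$ directly.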

covariance matrix and matrix multiplication

Hi,

I'm interested in implementing my own covariance matrix. I don't fully understand the forward step yet, but my covariance matrix has the form:

K(i,j) = sum_k sum_l W(k,i) K_{base}(k,l) W(l,j)

If you want, you can write this as a matrix multiplication:

K = W^T K_{base} W

I have stored the matrix W in a numpy array. Is there some way to implement this in gpytorch?
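A rough sketch of the computation in plain torch (not a gpytorch Kernel; the RBF base kernel and the shapes below are placeholders only):

import numpy as np
import torch

W_np = np.random.randn(20, 5)                 # fixed weights stored in a numpy array
W = torch.from_numpy(W_np).float()            # (n_base, n_out)
X_base = torch.randn(20, 3)                   # points the base kernel is evaluated on

def rbf(x1, x2, lengthscale=1.0):
    d2 = ((x1.unsqueeze(1) - x2.unsqueeze(0)) ** 2).sum(-1)
    return torch.exp(-0.5 * d2 / lengthscale ** 2)

K_base = rbf(X_base, X_base)                  # (n_base, n_base)
K = W.t() @ K_base @ W                        # K = W^T K_{base} W, shape (n_out, n_out)

Wrapping this into a custom Kernel's forward (with W kept as a fixed tensor) would be the gpytorch-flavored version, but the details depend on the current Kernel API.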

Regards,
Lerko.

import gpytorch error

$ python
Python 3.5.2 (default, Nov 17 2016, 17:05:23)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.

import gpytorch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/gpytorch-master/gpytorch/__init__.py", line 3, in <module>
from .lazy import LazyVariable, ToeplitzLazyVariable
File "/home/ubuntu/gpytorch-master/gpytorch/lazy/__init__.py", line 2, in <module>
from .toeplitz_lazy_variable import ToeplitzLazyVariable
File "/home/ubuntu/gpytorch-master/gpytorch/lazy/toeplitz_lazy_variable.py", line 4, in <module>
from gpytorch.utils import toeplitz
File "/home/ubuntu/gpytorch-master/gpytorch/utils/toeplitz.py", line 2, in <module>
import gpytorch.utils.fft as fft
File "/home/ubuntu/gpytorch-master/gpytorch/utils/fft.py", line 1, in <module>
from .. import libfft
File "/home/ubuntu/gpytorch-master/gpytorch/libfft/__init__.py", line 3, in <module>
from ._libfft import lib as _lib, ffi as _ffi
ImportError: No module named 'gpytorch.libfft._libfft'

Cannot compute gradient of variance

First of all, this package looks really great.

I'd like to compute the gradient of the variance of a prediction, but this fails with
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation.

To repro (here model is the trained/conditioned model from the simple_gp_regression.ipynb example):

>> test_x = Variable(torch.rand(10), requires_grad=True)
>> output = model(test_x)

# this works just fine
>> sum_of_means = output.mean().sum()
>> sum_of_means.backward()
>> test_x.grad
Variable containing:
 3.4206
-3.2818
 1.8668
 3.5644
-0.7677
 0.7666
 6.4394
 5.1365
-5.0451
 6.0161
[torch.FloatTensor of size 10]

# this fails with said error
>> sum_of_vars = output.var().sum()
>> sum_of_vars.backward()
>> test_x.grad
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-18-dad61ee95fc5> in <module>()
      1 sum_of_vars = output.var().sum()
----> 2 sum_of_vars.backward()
      3 test_x.grad

/data/users/balandat/fbsource/fbcode/buck-out/dev/gen/bento/kernels/bento_kernel_ae_dev#link-tree/torch/autograd/variable.py in backward(self, gradient, retain_graph, create_graph, retain_variables)
    165                 Variable.
    166         """
--> 167         torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
    168 
    169     def register_hook(self, hook):

/data/users/balandat/fbsource/fbcode/buck-out/dev/gen/bento/kernels/bento_kernel_ae_dev#link-tree/torch/autograd/__init__.py in backward(variables, grad_variables, retain_graph, create_graph, retain_variables)
     97 
     98     Variable._execution_engine.run_backward(
---> 99         variables, grad_variables, retain_graph)
    100 
    101 

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

I tried to track down where this happens, but didn't get very far (given the less than informative stack trace).

Computing the gradient of the variance would be really useful for using this package for bayesian optimization.

import gpytorch error

$ sudo python setup.py install
[sudo] password for ubuntu:
running install
running bdist_egg
running egg_info
writing dependency_links to gpytorch.egg-info/dependency_links.txt
writing top-level names to gpytorch.egg-info/top_level.txt
writing requirements to gpytorch.egg-info/requires.txt
writing gpytorch.egg-info/PKG-INFO
reading manifest file 'gpytorch.egg-info/SOURCES.txt'
writing manifest file 'gpytorch.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
copying gpytorch/libfft/init.py -> build/lib.linux-x86_64-3.5/gpytorch/libfft
running build_ext
generating cffi module 'build/temp.linux-x86_64-3.5/gpytorch.libfft._libfft.c'
already up-to-date
creating build/bdist.linux-x86_64/egg
creating build/bdist.linux-x86_64/egg/gpytorch
creating build/bdist.linux-x86_64/egg/gpytorch/means
copying build/lib.linux-x86_64-3.5/gpytorch/means/init.py -> build/bdist.linux-x86_64/egg/gpytorch/means
copying build/lib.linux-x86_64-3.5/gpytorch/means/mean.py -> build/bdist.linux-x86_64/egg/gpytorch/means
copying build/lib.linux-x86_64-3.5/gpytorch/means/constant_mean.py -> build/bdist.linux-x86_64/egg/gpytorch/means
copying build/lib.linux-x86_64-3.5/gpytorch/gp_model.py -> build/bdist.linux-x86_64/egg/gpytorch
copying build/lib.linux-x86_64-3.5/gpytorch/init.py -> build/bdist.linux-x86_64/egg/gpytorch
creating build/bdist.linux-x86_64/egg/gpytorch/random_variables
copying build/lib.linux-x86_64-3.5/gpytorch/random_variables/init.py -> build/bdist.linux-x86_64/egg/gpytorch/random_variables
copying build/lib.linux-x86_64-3.5/gpytorch/random_variables/constant_random_variable.py -> build/bdist.linux-x86_64/egg/gpytorch/random_variables
copying build/lib.linux-x86_64-3.5/gpytorch/random_variables/independent_random_variables.py -> build/bdist.linux-x86_64/egg/gpytorch/random_variables
copying build/lib.linux-x86_64-3.5/gpytorch/random_variables/samples_random_variable.py -> build/bdist.linux-x86_64/egg/gpytorch/random_variables
copying build/lib.linux-x86_64-3.5/gpytorch/random_variables/gaussian_random_variable.py -> build/bdist.linux-x86_64/egg/gpytorch/random_variables
copying build/lib.linux-x86_64-3.5/gpytorch/random_variables/batch_random_variables.py -> build/bdist.linux-x86_64/egg/gpytorch/random_variables
copying build/lib.linux-x86_64-3.5/gpytorch/random_variables/random_variable.py -> build/bdist.linux-x86_64/egg/gpytorch/random_variables
copying build/lib.linux-x86_64-3.5/gpytorch/random_variables/categorical_random_variable.py -> build/bdist.linux-x86_64/egg/gpytorch/random_variables
copying build/lib.linux-x86_64-3.5/gpytorch/random_variables/bernoulli_random_variable.py -> build/bdist.linux-x86_64/egg/gpytorch/random_variables
creating build/bdist.linux-x86_64/egg/gpytorch/likelihoods
copying build/lib.linux-x86_64-3.5/gpytorch/likelihoods/init.py -> build/bdist.linux-x86_64/egg/gpytorch/likelihoods
copying build/lib.linux-x86_64-3.5/gpytorch/likelihoods/likelihood.py -> build/bdist.linux-x86_64/egg/gpytorch/likelihoods
copying build/lib.linux-x86_64-3.5/gpytorch/likelihoods/gaussian_likelihood.py -> build/bdist.linux-x86_64/egg/gpytorch/likelihoods
copying build/lib.linux-x86_64-3.5/gpytorch/likelihoods/bernoulli_likelihood.py -> build/bdist.linux-x86_64/egg/gpytorch/likelihoods
creating build/bdist.linux-x86_64/egg/gpytorch/lazy
copying build/lib.linux-x86_64-3.5/gpytorch/lazy/init.py -> build/bdist.linux-x86_64/egg/gpytorch/lazy
copying build/lib.linux-x86_64-3.5/gpytorch/lazy/kronecker_product_lazy_variable.py -> build/bdist.linux-x86_64/egg/gpytorch/lazy
copying build/lib.linux-x86_64-3.5/gpytorch/lazy/toeplitz_lazy_variable.py -> build/bdist.linux-x86_64/egg/gpytorch/lazy
copying build/lib.linux-x86_64-3.5/gpytorch/lazy/lazy_variable.py -> build/bdist.linux-x86_64/egg/gpytorch/lazy
copying build/lib.linux-x86_64-3.5/gpytorch/module.py -> build/bdist.linux-x86_64/egg/gpytorch
creating build/bdist.linux-x86_64/egg/gpytorch/inference
copying build/lib.linux-x86_64-3.5/gpytorch/inference/init.py -> build/bdist.linux-x86_64/egg/gpytorch/inference
creating build/bdist.linux-x86_64/egg/gpytorch/inference/posterior_models
copying build/lib.linux-x86_64-3.5/gpytorch/inference/posterior_models/init.py -> build/bdist.linux-x86_64/egg/gpytorch/inference/posterior_models
copying build/lib.linux-x86_64-3.5/gpytorch/inference/posterior_models/gp_posterior.py -> build/bdist.linux-x86_64/egg/gpytorch/inference/posterior_models
copying build/lib.linux-x86_64-3.5/gpytorch/inference/posterior_models/exact_gp_posterior.py -> build/bdist.linux-x86_64/egg/gpytorch/inference/posterior_models
copying build/lib.linux-x86_64-3.5/gpytorch/inference/posterior_models/variational_gp_posterior.py -> build/bdist.linux-x86_64/egg/gpytorch/inference/posterior_models
copying build/lib.linux-x86_64-3.5/gpytorch/inference/inference.py -> build/bdist.linux-x86_64/egg/gpytorch/inference
creating build/bdist.linux-x86_64/egg/gpytorch/functions
copying build/lib.linux-x86_64-3.5/gpytorch/functions/init.py -> build/bdist.linux-x86_64/egg/gpytorch/functions
copying build/lib.linux-x86_64-3.5/gpytorch/functions/log_normal_cdf.py -> build/bdist.linux-x86_64/egg/gpytorch/functions
copying build/lib.linux-x86_64-3.5/gpytorch/functions/normal_cdf.py -> build/bdist.linux-x86_64/egg/gpytorch/functions
copying build/lib.linux-x86_64-3.5/gpytorch/functions/dsmm.py -> build/bdist.linux-x86_64/egg/gpytorch/functions
copying build/lib.linux-x86_64-3.5/gpytorch/functions/add_diag.py -> build/bdist.linux-x86_64/egg/gpytorch/functions
creating build/bdist.linux-x86_64/egg/gpytorch/utils
copying build/lib.linux-x86_64-3.5/gpytorch/utils/toeplitz.py -> build/bdist.linux-x86_64/egg/gpytorch/utils
copying build/lib.linux-x86_64-3.5/gpytorch/utils/interpolation.py -> build/bdist.linux-x86_64/egg/gpytorch/utils
copying build/lib.linux-x86_64-3.5/gpytorch/utils/init.py -> build/bdist.linux-x86_64/egg/gpytorch/utils
copying build/lib.linux-x86_64-3.5/gpytorch/utils/lincg.py -> build/bdist.linux-x86_64/egg/gpytorch/utils
copying build/lib.linux-x86_64-3.5/gpytorch/utils/fft.py -> build/bdist.linux-x86_64/egg/gpytorch/utils
copying build/lib.linux-x86_64-3.5/gpytorch/utils/lanczos_quadrature.py -> build/bdist.linux-x86_64/egg/gpytorch/utils
copying build/lib.linux-x86_64-3.5/gpytorch/utils/function_factory.py -> build/bdist.linux-x86_64/egg/gpytorch/utils
copying build/lib.linux-x86_64-3.5/gpytorch/utils/kronecker_product.py -> build/bdist.linux-x86_64/egg/gpytorch/utils
copying build/lib.linux-x86_64-3.5/gpytorch/utils/circulant.py -> build/bdist.linux-x86_64/egg/gpytorch/utils
creating build/bdist.linux-x86_64/egg/gpytorch/kernels
copying build/lib.linux-x86_64-3.5/gpytorch/kernels/init.py -> build/bdist.linux-x86_64/egg/gpytorch/kernels
copying build/lib.linux-x86_64-3.5/gpytorch/kernels/grid_interpolation_kernel.py -> build/bdist.linux-x86_64/egg/gpytorch/kernels
copying build/lib.linux-x86_64-3.5/gpytorch/kernels/kernel.py -> build/bdist.linux-x86_64/egg/gpytorch/kernels
copying build/lib.linux-x86_64-3.5/gpytorch/kernels/rbf_kernel.py -> build/bdist.linux-x86_64/egg/gpytorch/kernels
copying build/lib.linux-x86_64-3.5/gpytorch/kernels/spectral_mixture_kernel.py -> build/bdist.linux-x86_64/egg/gpytorch/kernels
copying build/lib.linux-x86_64-3.5/gpytorch/kernels/index_kernel.py -> build/bdist.linux-x86_64/egg/gpytorch/kernels
creating build/bdist.linux-x86_64/egg/gpytorch/libfft
copying build/lib.linux-x86_64-3.5/gpytorch/libfft/init.py -> build/bdist.linux-x86_64/egg/gpytorch/libfft
copying build/lib.linux-x86_64-3.5/gpytorch/libfft/_libfft.abi3.so -> build/bdist.linux-x86_64/egg/gpytorch/libfft
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/means/init.py to init.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/means/mean.py to mean.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/means/constant_mean.py to constant_mean.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/gp_model.py to gp_model.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/init.py to init.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/random_variables/init.py to init.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/random_variables/constant_random_variable.py to constant_random_variable.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/random_variables/independent_random_variables.py to independent_random_variables.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/random_variables/samples_random_variable.py to samples_random_variable.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/random_variables/gaussian_random_variable.py to gaussian_random_variable.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/random_variables/batch_random_variables.py to batch_random_variables.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/random_variables/random_variable.py to random_variable.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/random_variables/categorical_random_variable.py to categorical_random_variable.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/random_variables/bernoulli_random_variable.py to bernoulli_random_variable.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/likelihoods/init.py to init.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/likelihoods/likelihood.py to likelihood.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/likelihoods/gaussian_likelihood.py to gaussian_likelihood.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/likelihoods/bernoulli_likelihood.py to bernoulli_likelihood.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/lazy/init.py to init.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/lazy/kronecker_product_lazy_variable.py to kronecker_product_lazy_variable.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/lazy/toeplitz_lazy_variable.py to toeplitz_lazy_variable.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/lazy/lazy_variable.py to lazy_variable.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/module.py to module.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/inference/init.py to init.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/inference/posterior_models/init.py to init.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/inference/posterior_models/gp_posterior.py to gp_posterior.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/inference/posterior_models/exact_gp_posterior.py to exact_gp_posterior.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/inference/posterior_models/variational_gp_posterior.py to variational_gp_posterior.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/inference/inference.py to inference.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/functions/init.py to init.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/functions/log_normal_cdf.py to log_normal_cdf.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/functions/normal_cdf.py to normal_cdf.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/functions/dsmm.py to dsmm.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/functions/add_diag.py to add_diag.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/utils/toeplitz.py to toeplitz.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/utils/interpolation.py to interpolation.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/utils/init.py to init.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/utils/lincg.py to lincg.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/utils/fft.py to fft.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/utils/lanczos_quadrature.py to lanczos_quadrature.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/utils/function_factory.py to function_factory.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/utils/kronecker_product.py to kronecker_product.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/utils/circulant.py to circulant.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/kernels/init.py to init.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/kernels/grid_interpolation_kernel.py to grid_interpolation_kernel.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/kernels/kernel.py to kernel.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/kernels/rbf_kernel.py to rbf_kernel.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/kernels/spectral_mixture_kernel.py to spectral_mixture_kernel.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/kernels/index_kernel.py to index_kernel.cpython-35.pyc
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/libfft/init.py to init.cpython-35.pyc
creating stub loader for gpytorch/libfft/_libfft.abi3.so
byte-compiling build/bdist.linux-x86_64/egg/gpytorch/libfft/_libfft.py to _libfft.cpython-35.pyc
creating build/bdist.linux-x86_64/egg/EGG-INFO
copying gpytorch.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
copying gpytorch.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying gpytorch.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying gpytorch.egg-info/requires.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying gpytorch.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
writing build/bdist.linux-x86_64/egg/EGG-INFO/native_libs.txt
zip_safe flag not set; analyzing archive contents...
gpytorch.libfft.pycache._libfft.cpython-35: module references file
creating 'dist/gpytorch-0.1-py3.5-linux-x86_64.egg' and adding 'build/bdist.linux-x86_64/egg' to it
removing 'build/bdist.linux-x86_64/egg' (and everything under it)
Processing gpytorch-0.1-py3.5-linux-x86_64.egg
removing '/usr/local/lib/python3.5/dist-packages/gpytorch-0.1-py3.5-linux-x86_64.egg' (and everything under it)
creating /usr/local/lib/python3.5/dist-packages/gpytorch-0.1-py3.5-linux-x86_64.egg
Extracting gpytorch-0.1-py3.5-linux-x86_64.egg to /usr/local/lib/python3.5/dist-packages
gpytorch 0.1 is already the active version in easy-install.pth

Installed /usr/local/lib/python3.5/dist-packages/gpytorch-0.1-py3.5-linux-x86_64.egg
Processing dependencies for gpytorch==0.1
Searching for cffi==1.10.0
Best match: cffi 1.10.0
Adding cffi 1.10.0 to easy-install.pth file

Using /usr/local/lib/python3.5/dist-packages
Searching for pycparser==2.18
Best match: pycparser 2.18
Adding pycparser 2.18 to easy-install.pth file

Using /usr/local/lib/python3.5/dist-packages
Finished processing dependencies for gpytorch==0.1


$ python
Python 3.5.2 (default, Nov 17 2016, 17:05:23)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.

import gpytorch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/gpytorch-master/gpytorch/__init__.py", line 3, in <module>
from .lazy import LazyVariable, ToeplitzLazyVariable
File "/home/ubuntu/gpytorch-master/gpytorch/lazy/__init__.py", line 2, in <module>
from .toeplitz_lazy_variable import ToeplitzLazyVariable
File "/home/ubuntu/gpytorch-master/gpytorch/lazy/toeplitz_lazy_variable.py", line 4, in <module>
from gpytorch.utils import toeplitz
File "/home/ubuntu/gpytorch-master/gpytorch/utils/toeplitz.py", line 2, in <module>
import gpytorch.utils.fft as fft
File "/home/ubuntu/gpytorch-master/gpytorch/utils/fft.py", line 1, in <module>
from .. import libfft
File "/home/ubuntu/gpytorch-master/gpytorch/libfft/__init__.py", line 3, in <module>
from ._libfft import lib as _lib, ffi as _ffi
ImportError: No module named 'gpytorch.libfft._libfft'

ImportError: No module named 'gpytorch.random_variables'

import gpytorch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.5/dist-packages/gpytorch-0.1-py3.5.egg/gpytorch/__init__.py", line 1, in <module>
File "/usr/local/lib/python3.5/dist-packages/gpytorch-0.1-py3.5.egg/gpytorch/distribution.py", line 3, in <module>
ImportError: No module named 'gpytorch.random_variables'

Add simple api for combining kernels

There should be a simple API for combining kernels, which allows concise syntax and a lot of flexibility, while at the same time preventing users from shooting themselves in the foot (e.g. leaving the space of valid kernels by subtracting them, etc.).

@darbour has some prototype of this, which defines the appropriate operators on the base Kernel class, e.g.:

# in Kernel
    def __add__(self, other):
        return AdditiveKernel(self, other)

class AdditiveKernel(Kernel):
    def __init__(self, kernel_1, kernel_2):
        super(AdditiveKernel, self).__init__()
        self.kernel_1 = kernel_1
        self.kernel_2 = kernel_2

    def forward(self, x1, x2):
        return self.kernel_1(x1, x2) + self.kernel_2(x1, x2)
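For illustration, once the operator above is defined on Kernel, combining kernels could read something like this (hypothetical usage, not the current API):

import torch
from gpytorch.kernels import RBFKernel

x = torch.randn(10, 1)
combined = RBFKernel() + RBFKernel()   # would return AdditiveKernel(RBFKernel(), RBFKernel())
covar = combined(x, x)                 # sum of the two covariance matrices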

Add support for ARD

The current Kernels all use a single length scale. Adding support for ARD is straightforward by adding a Parameter for every dimension. E.g. for the RBFKernel this could be something like the following:

class RBFKernel(Kernel):

    def __init__(
        self,
        log_lengthscale_bounds=(-10000, 10000),
        eps=1e-5,
        ARD=False,
        ndim=None,
    ):
        super(RBFKernel, self).__init__()
        if ARD and ndim is None:
            raise ValueError("Must provide ndim if ARD=True")
        self.eps = eps
        nparams = ndim if ARD else 1
        if ARD:
            log_lengthscale_bounds = (
                torch.tensor(log_lengthscale_bounds[0]).repeat(1, nparams),
                torch.tensor(log_lengthscale_bounds[1]).repeat(1, nparams),
            )
        self.register_parameter(
            'log_lengthscale',
            nn.Parameter(torch.zeros(1, nparams)),
            bounds=log_lengthscale_bounds,
        )

However, we don't want to have to re-implement this for every Kernel, so we should come up with a better abstraction for this.

Matern kernels handle inhomogeneous lengthscales incorrectly when using ARD

The way the lengthscale-normalized distances are currently computed works fine if the lengthscale is the same for each input dimension, but it fails under ARD.

We can simplify the computation similar to the RBF kernel. This may have some perf implications for small size problems on the CPU, but some simulations suggest this will also speed up things on the GPU.
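For reference, a minimal sketch of the ARD-correct scaling (divide each input dimension by its own lengthscale before taking pairwise distances); the function name is illustrative, not the actual gpytorch code:

import torch

def scaled_sq_dist(x1, x2, lengthscales):
    # x1: (n, d), x2: (m, d), lengthscales: (d,) -- one lengthscale per input dimension
    x1_ = x1 / lengthscales
    x2_ = x2 / lengthscales
    return ((x1_.unsqueeze(1) - x2_.unsqueeze(0)) ** 2).sum(-1)   # (n, m)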

Will send out a pull request shortly.

Implement a generic `root_decomposition` and `root_decomposition_size`.

There might be a better name for this, but this would essentially use the Lanczos decomposition to create a Cholesky approximation. This would be useful for a number of reasons:

  • We can then use any lazy variable to sample from a Gaussian distribution
  • There's a nice refactor to MulLazyVariable, where all the Lanczos stuff would be hidden inside LazyVariable.
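Roughly, the math as I understand it: if Lanczos gives $K \approx Q T Q^\top$ with $Q$ orthonormal and $T$ tridiagonal, then $R = Q T^{1/2}$ satisfies $R R^\top \approx K$, so $R$ plays the role of an (approximate) Cholesky factor. Samples can then be drawn as $\mu + R \varepsilon$ with $\varepsilon \sim \mathcal{N}(0, I)$, and root_decomposition_size would presumably report the number of columns of $R$ (i.e. the number of Lanczos iterations).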

Bug with GridInterpolationKernel and Variables with 'requires_grad=True'

Currently trying to evaluate a GP using GridInterpolationKernel on test points stored as torch.nn.Parameter.

Line 89 in utils/interpolation.py results in an assertion error from
assert not ctx.needs_input_grad[2] in torch/autograd/_functions/tensor.py

You can produce this error by putting
x = Variable(torch.Tensor(1, 2).uniform_(), requires_grad=True)
model(x)
at the end of the kissgp_kronecker_product_regression.ipython notebook in examples.

This issue does not occur when using RBFKernel or SpectralMixtureKernel.
I reproduced this issue after doing a fresh pull and install of gpytorch; traceback attached.

traceback.txt

Windows installation problem

Hello,

I'm currently trying to install gpytorch on Windows 10 and keep running into errors. I have installed pytorch v0.3.1 and downloaded the fftw-3.3.5-dll64 files, which I have added to my path.

The first error happened when I ran the "conda install fftw cffi pytorch torchvision cuda80 -c conda-forge -c pytorch" command. I kept running into this error

[screenshot: conda install error]

yet torchvision shows up in conda list
[screenshot: conda list showing torchvision]

I tried installing without torchvision as a requirement (I know it's not the greatest idea, I was mostly curious). The installation worked, but when I tried to run the second command, i.e "pip install git+https://github.com/cornellius-gp/gpytorch.git" I ran into this

[screenshot: pip install error]

I have spent the last hour or so going from stackoverflow thread to stackoverflow thread and have uninstalled and reinstalled pretty much everything I could (pip, setuptools...), but nothing has worked so far.

Would you be able to tell me what causes this? Thanks

test_latent_multitask_gp_mean_abs_error

When I run python -m pytest, one failure occurs.

def test_latent_multitask_gp_mean_abs_error():
        prior_observation_model = LatentMultitaskGPModel(num_task_samples=3)

        # Compute posterior distribution
        infer = Inference(prior_observation_model)
        posterior_observation_model = infer.run(
            (torch.cat([train_x, train_x, train_x]), torch.cat([y11_inds, y12_inds, y2_inds])),
            torch.cat([train_y11, train_y12, train_y2]),
>           max_inference_steps=5
        )

test/examples/latent_multitask_gp_regression_test.py:84:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
gpytorch/inference/inference.py:79: in run
    new_observation_model = self.run_(train_x, train_y, optimize=optimize, **kwargs)
gpytorch/inference/inference.py:65: in run_
    param_group.update(log_likelihood_closure)
gpytorch/parameters/mle_parameter_group.py:38: in update
    loss = optimizer.step(step_closure)
gpytorch/utils/lbfgs.py:203: in step
    t = self._backtracking(closure, d)
gpytorch/utils/lbfgs.py:305: in _backtracking
    phi_k = closure().data[0]
gpytorch/utils/__init__.py:33: in wrapped_function
    raise e
gpytorch/utils/__init__.py:21: in wrapped_function
    result = function(*args, **kwargs)
gpytorch/parameters/mle_parameter_group.py:34: in step_closure
    loss = -log_likelihood_closure()
gpytorch/inference/inference.py:45: in log_likelihood_closure
    return self.observation_model.marginal_log_likelihood(output, train_y)
gpytorch/inference/posterior_models.py:203: in marginal_log_likelihood
    return gpytorch.exact_gp_marginal_log_likelihood(covar, train_y - mean)
gpytorch/__init__.py:17: in exact_gp_marginal_log_likelihood
    return ExactGPMarginalLogLikelihood()(covar, target)
gpytorch/math/functions/invmv.py:9: in __call__
    res = super(Invmv, self).__call__(matrix_var, vector_var.view(-1, 1))
gpytorch/math/functions/invmm.py:45: in __call__
    has_completed = chol_data_closure()
gpytorch/utils/__init__.py:33: in wrapped_function
    raise e
gpytorch/utils/__init__.py:21: in wrapped_function
    result = function(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    @pd_catcher(catch_function=add_jitter)
    def chol_data_closure():
>       input_1_var.chol_data = input_1_var.data.potrf()
E       RuntimeError: Lapack Error in potrf : the leading minor of order 1 is not positive definite at /Users/soumith/code/builder/wheel/pytorch-src/torch/lib/TH/generic/THTensorLapack.c:608

gpytorch/math/functions/invmm.py:40: RuntimeError

GPyTorch Does Not Find FFTW Installation From Conda Forge

Similar to #89, Eric (who I just met with, and who showed me this) and I can verify that the standard installation instructions don't seem to find fftw as installed from conda-forge.

Things do work when installing fftw from source or from apt-get, so we should either figure out how to add the correct paths to where conda installs fftw, or change the installation instructions to recommend installing from apt-get or from source.

White noise kernel?

Hi there,

Thanks for the excellent library. Is it possible to either add a white noise kernel, or have an option to add a known input variance to the diagonal of the kernel matrix during fitting? If I have known variance on my input data, it's not clear to me how I can incorporate that in the model examples.
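To make the linear algebra concrete, this is the operation being asked for, sketched in plain torch on an evaluated kernel matrix (wiring it into gpytorch's lazy variables and likelihood is the open question here):

import torch

K = torch.randn(10, 10)
K = K @ K.t() + 10 * torch.eye(10)      # stand-in for an evaluated kernel matrix
known_var = torch.rand(10) * 0.1        # known per-point noise variances (assumed given)
K_noisy = K + torch.diag(known_var)     # fixed "white noise" contribution on the diagonal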

ImportError: cannot import name 'SumVariationalStrategy'

Here's a minimal test case.

import torch

from gpytorch.random_variables import GaussianRandomVariable
from torch.autograd import Variable

a = GaussianRandomVariable(Variable(torch.rand(3)), Variable(torch.rand(3, 3)))
b = GaussianRandomVariable(Variable(torch.rand(3)), Variable(torch.rand(3, 3)))

print(a + b)

The output is

Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "~/packages/anaconda3/lib/python3.6/site-packages/gpytorch/random_variables/gaussian_random_variable.py", line 56, in __add__
from ..variational import SumVariationalStrategy
ImportError: cannot import name 'SumVariationalStrategy'


The reason may be in the method "gpytorch.random_variables.gaussian_random_variable.__add__", where on line 56 it tries to

from ..variational import SumVariationalStrategy

However, in "gpytorch/gpytorch/variational/init.py", there's no SumVariationalStrategy being imported.

from .variational_strategy import VariationalStrategy
from .mvn_variational_strategy import MVNVariationalStrategy


__all__ = [
    VariationalStrategy,
    MVNVariationalStrategy,
]

Model fitting fails when using MaternKernel

Using self.covar_module = MaternKernel(2.5, log_lengthscale_bounds=(-5, 5)) instead of RBFKernel(log_lengthscale_bounds=(-5, 5)) in the basic simple_gp_regression.py example results in the stack trace below in the first iteration of the optimization:

Iter 1/50 - Loss: 1.203   log_lengthscale: 0.000   log_noise: 0.000
---------------------------------------------------------------------------
UnboundLocalError                         Traceback (most recent call last)
<ipython-input-12-0cb2960c7ba2> in <module>()
     18     output = model(train_x)
     19     # Calc loss and backprop gradients
---> 20     loss = -mll(output, train_y)
     21     loss.backward()
     22     print('Iter %d/%d - Loss: %.3f   log_lengthscale: %.3f   log_noise: %.3f' % (

/data/users/balandat/fbsource/fbcode/buck-out/dev-nosan/gen/experimental/ae/bento_kernel_ae_experimental#link-tree/gpytorch/module.py in __call__(self, *inputs, **kwargs)
    159 
    160     def __call__(self, *inputs, **kwargs):
--> 161         outputs = self.forward(*inputs, **kwargs)
    162         if isinstance(outputs, Variable) or isinstance(outputs, RandomVariable) or isinstance(outputs, LazyVariable):
    163             return outputs

/data/users/balandat/fbsource/fbcode/buck-out/dev-nosan/gen/experimental/ae/bento_kernel_ae_experimental#link-tree/gpytorch/mlls/exact_marginal_log_likelihood.py in forward(self, output, target)
     32 
     33         # Get log determininat and first part of quadratic form
---> 34         inv_quad, log_det = covar.inv_quad_log_det(inv_quad_rhs=target.unsqueeze(-1), log_det=True)
     35         res = -0.5 * sum([
     36             inv_quad,

/data/users/balandat/fbsource/fbcode/buck-out/dev-nosan/gen/experimental/ae/bento_kernel_ae_experimental#link-tree/gpytorch/lazy/lazy_variable.py in inv_quad_log_det(self, inv_quad_rhs, log_det)
    393                                                     log_det=log_det,
    394                                                     preconditioner=self._preconditioner()
--> 395                                                     )(*(list(args) + [inv_quad_rhs]))
    396 
    397     def log_det(self):

/data/users/balandat/fbsource/fbcode/buck-out/dev-nosan/gen/experimental/ae/bento_kernel_ae_experimental#link-tree/gpytorch/utils/function_factory.py in forward(self, *args)
    237                 solves, t_mat = linear_cg(matmul_closure, rhs, n_tridiag=num_random_probes,
    238                                           max_iter=settings.max_lanczos_quadrature_iterations.value(),
--> 239                                           preconditioner=self.preconditioner)
    240             else:
    241                 solves = linear_cg(matmul_closure, rhs, n_tridiag=num_random_probes,

/data/users/balandat/fbsource/fbcode/buck-out/dev-nosan/gen/experimental/ae/bento_kernel_ae_experimental#link-tree/gpytorch/utils/linear_cg.py in linear_cg(matmul_closure, rhs, n_tridiag, tolerance, eps, max_iter, initial_guess, preconditioner)
     88         else:
     89             t_mat = residual.new(n_iter, n_iter, n_tridiag).zero_()
---> 90             alpha_reciprocal = alpha.new(n_tridiag)
     91 
     92         prev_alpha_reciprocal = alpha.new(alpha_reciprocal.size())

UnboundLocalError: local variable 'alpha' referenced before assignment

Ensure compatibility with breaking changes in pytorch master branch

This is a run of the simple_gp_regression example notebook on the current alpha_release branch. Running kissgp_gp_regression_cuda yields similar errors.

import math
import torch
import gpytorch
from matplotlib import pyplot as plt

%matplotlib inline
%load_ext autoreload
%autoreload 2
from torch.autograd import Variable
# Training data is 11 points in [0,1] inclusive regularly spaced
train_x = Variable(torch.linspace(0, 1, 11))
# True function is sin(2*pi*x) with Gaussian noise N(0,0.04)
train_y = Variable(torch.sin(train_x.data * (2 * math.pi)) + torch.randn(train_x.size()) * 0.2)
from torch import optim
from gpytorch.kernels import RBFKernel
from gpytorch.means import ConstantMean
from gpytorch.likelihoods import GaussianLikelihood
from gpytorch.random_variables import GaussianRandomVariable
# We will use the simplest form of GP model, exact inference
class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super(ExactGPModel, self).__init__(train_x, train_y, likelihood)
        # Our mean function is constant in the interval [-1,1]
        self.mean_module = ConstantMean(constant_bounds=(-1, 1))
        # We use the RBF kernel as a universal approximator
        self.covar_module = RBFKernel(log_lengthscale_bounds=(-5, 5))
    
    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        # Return model output as GaussianRandomVariable
        return GaussianRandomVariable(mean_x, covar_x)

# initialize likelihood and model
likelihood = GaussianLikelihood(log_noise_bounds=(-5, 5))
model = ExactGPModel(train_x.data, train_y.data, likelihood)
# Find optimal model hyperparameters
model.train()
likelihood.train()

# Use adam optimizer on model and likelihood parameters
optimizer = optim.Adam(list(model.parameters()) + list(likelihood.parameters()), lr=0.1)
optimizer.n_iter = 0

training_iter = 50
for i in range(training_iter):
    # Zero gradients from previous iteration
    optimizer.zero_grad()
    # Output from model
    output = model(train_x)
    # Calc loss and backprop gradients
    loss = -model.marginal_log_likelihood(likelihood, output, train_y)
    loss.backward()
    optimizer.n_iter += 1
    print('Iter %d/%d - Loss: %.3f   log_lengthscale: %.3f   log_noise: %.3f' % (
        i + 1, training_iter, loss.data[0],
        model.covar_module.log_lengthscale.data[0, 0],
        model.likelihood.log_noise.data[0]
    ))
    optimizer.step()
---------------------------------------------------------------------------

TypeError                                 Traceback (most recent call last)

<ipython-input-8-bdcf88774fd0> in <module>()
     14     output = model(train_x)
     15     # Calc loss and backprop gradients
---> 16     loss = -model.marginal_log_likelihood(likelihood, output, train_y)
     17     loss.backward()
     18     optimizer.n_iter += 1


/data/users/balandat/fbsource/fbcode/buck-out/dev-nosan/gen/experimental/ae/bento_kernel_ae_experimental#link-tree/gpytorch/models/exact_gp.py in marginal_log_likelihood(self, likelihood, output, target, n_data)
     43             raise RuntimeError('You must train on the training targets!')
     44 
---> 45         mean, covar = likelihood(output).representation()
     46         n_data = target.size(-1)
     47         return gpytorch.exact_gp_marginal_log_likelihood(covar, target - mean).div(n_data)


/data/users/balandat/fbsource/fbcode/buck-out/dev-nosan/gen/experimental/ae/bento_kernel_ae_experimental#link-tree/gpytorch/module.py in __call__(self, *inputs, **kwargs)
    158                 raise RuntimeError('Input must be a RandomVariable or Variable, was a %s' %
    159                                    input.__class__.__name__)
--> 160         outputs = self.forward(*inputs, **kwargs)
    161         if isinstance(outputs, Variable) or isinstance(outputs, RandomVariable) or isinstance(outputs, LazyVariable):
    162             return outputs


/data/users/balandat/fbsource/fbcode/buck-out/dev-nosan/gen/experimental/ae/bento_kernel_ae_experimental#link-tree/gpytorch/likelihoods/gaussian_likelihood.py in forward(self, input)
     14         assert(isinstance(input, GaussianRandomVariable))
     15         mean, covar = input.representation()
---> 16         noise = gpytorch.add_diag(covar, self.log_noise.exp())
     17         return GaussianRandomVariable(mean, noise)


/data/users/balandat/fbsource/fbcode/buck-out/dev-nosan/gen/experimental/ae/bento_kernel_ae_experimental#link-tree/gpytorch/__init__.py in add_diag(input, diag)
     36         return input.add_diag(diag)
     37     else:
---> 38         return _add_diag(input, diag)
     39 
     40 


/data/users/balandat/fbsource/fbcode/buck-out/dev-nosan/gen/experimental/ae/bento_kernel_ae_experimental#link-tree/gpytorch/functions/__init__.py in add_diag(input, diag)
     18                        component added.
     19     """
---> 20     return AddDiag()(input, diag)
     21 
     22 


/data/users/balandat/fbsource/fbcode/buck-out/dev-nosan/gen/experimental/ae/bento_kernel_ae_experimental#link-tree/gpytorch/functions/add_diag.py in forward(self, input, diag)
     12         if input.ndimension() == 3:
     13             diag_mat = diag_mat.unsqueeze(0).expand_as(input)
---> 14         return diag_mat.mul_(val).add_(input)
     15 
     16     def backward(self, grad_output):


TypeError: mul_ received an invalid combination of arguments - got (Variable), but expected one of:
 * (float value)
      didn't match because some of the arguments have invalid types: (!Variable!)
 * (torch.FloatTensor other)
      didn't match because some of the arguments have invalid types: (!Variable!)

pytorch 0.2 problem

from gpytorch.inference import Inference
infer = Inference(prior_observation_model)
posterior_observation_model = infer.run(train_x, train_y, max_inference_steps=20)


RuntimeError Traceback (most recent call last)
in ()
1 from gpytorch.inference import Inference
2 infer = Inference(prior_observation_model)
----> 3 posterior_observation_model = infer.run(train_x, train_y, max_inference_steps=20)

/usr/local/lib/python3.5/dist-packages/gpytorch/inference/inference.py in run(self, train_x, train_y, optimize, **kwargs)
77 orig_observation_model = self.observation_model
78 self.observation_model = deepcopy(self.observation_model)
---> 79 new_observation_model = self.run_(train_x, train_y, optimize=optimize, **kwargs)
80 self.observation_model = orig_observation_model
81 return new_observation_model

/usr/local/lib/python3.5/dist-packages/gpytorch/inference/inference.py in run_(self, train_x, train_y, inducing_points, optimize, max_inference_steps, **kwargs)
63 for i in range(max_inference_steps):
64 for param_group in param_groups:
---> 65 param_group.update(log_likelihood_closure)
66
67 has_converged = all([param_group.has_converged(log_likelihood_closure) for param_group in param_groups])

/usr/local/lib/python3.5/dist-packages/gpytorch/parameters/mle_parameter_group.py in update(self, log_likelihood_closure)
36 return loss
37
---> 38 loss = optimizer.step(step_closure)
39 if isinstance(loss, Variable):
40 self.previous_loss = loss.data.squeeze()[0]

/usr/local/lib/python3.5/dist-packages/gpytorch/utils/lbfgs.py in step(self, closure)
99
100 # evaluate initial f(x) and df/dx
--> 101 orig_loss = closure()
102 loss = orig_loss.data[0]
103 current_evals = 1

/usr/local/lib/python3.5/dist-packages/gpytorch/utils/__init__.py in wrapped_function(*args, **kwargs)
31
32 else:
---> 33 raise e
34
35 return result

/usr/local/lib/python3.5/dist-packages/gpytorch/utils/__init__.py in wrapped_function(*args, **kwargs)
19 def wrapped_function(*args, **kwargs):
20 try:
---> 21 result = function(*args, **kwargs)
22 self.n_trials = 0
23

/usr/local/lib/python3.5/dist-packages/gpytorch/parameters/mle_parameter_group.py in step_closure()
32 optimizer.zero_grad()
33 optimizer.n_iter += 1
---> 34 loss = -log_likelihood_closure()
35 loss.backward()
36 return loss

/usr/local/lib/python3.5/dist-packages/gpytorch/inference/inference.py in log_likelihood_closure()
54 self.observation_model.zero_grad()
55 output = self.observation_model.forward(*inducing_points)
---> 56 return self.observation_model.marginal_log_likelihood(output, train_y)
57
58 if optimize:

/usr/local/lib/python3.5/dist-packages/gpytorch/inference/posterior_models.py in marginal_log_likelihood(self, output, train_y, num_samples)
131
132 kl_divergence = gpytorch.mvn_kl_divergence(self.variational_parameters.variational_mean,
--> 133 chol_var_covar, inducing_mean, inducing_covar)
134
135 return log_likelihood.squeeze() - kl_divergence

/usr/local/lib/python3.5/dist-packages/gpytorch/__init__.py in mvn_kl_divergence(mean_1, chol_covar_1, mean_2, covar_2)
35
36 def mvn_kl_divergence(mean_1, chol_covar_1, mean_2, covar_2):
---> 37 return MVNKLDivergence()(mean_1, chol_covar_1, mean_2, covar_2)

/usr/local/lib/python3.5/dist-packages/gpytorch/math/functions/mvn_kl_divergence.py in __call__(self, mu1_var, chol_covar1_var, mu2_var, covar2_var)
32 # Multiplying that by -2 gives us two of the terms in the KL divergence
33 # (plus an unwanted constant that we can subtract out).
---> 34 K_part = ExactGPMarginalLogLikelihood()(covar2_var, mu_diffs)
35
36 # Get logdet(\Sigma_{1})

/usr/local/lib/python3.5/dist-packages/gpytorch/math/functions/invmv.py in __call__(self, matrix_var, vector_var)
7 """
8 def __call__(self, matrix_var, vector_var):
----> 9 res = super(Invmv, self).__call__(matrix_var, vector_var.view(-1, 1))
10 return res.view(-1)

/usr/local/lib/python3.5/dist-packages/gpytorch/math/functions/invmm.py in __call__(self, input_1_var, input_2_var)
48 orig_data = input_1_var.data
49 input_1_var.data = input_1_var.chol_data
---> 50 res = super(Invmm, self).__call__(input_1_var, input_2_var)
51
52 # Revert back to original data

/usr/local/lib/python3.5/dist-packages/gpytorch/math/functions/exact_gp_marginal_log_likelihood.py in forward(self, chol_mat, y)
7 def forward(self, chol_mat, y):
8 mat_inv_y = y.potrs(chol_mat)
----> 9 res = mat_inv_y.dot(y) # Inverse quad
10 res += chol_mat.diag().log_().sum() * 2 # Log determinant
11 res += math.log(2 * math.pi) * len(y)

RuntimeError: Expected argument self to have 1 dimension(s), but has 2 at /pytorch/torch/csrc/generic/TensorMethods.cpp:23020

Add proper support for priors

While it's relatively straightforward to put priors on the hyperparameters (e.g. length scales, noise) by simply adding the appropriate log probabilities to the marginal_log_likelihood, this is not very user friendly.
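For concreteness, the manual approach described above might look like this (a hedged sketch; the choice of prior is arbitrary, and the parameter names mirror the examples elsewhere in this thread):

import torch
from torch.distributions import Gamma

# Gamma prior on the (positive) lengthscale; penalize the loss with its log-probability.
lengthscale_prior = Gamma(concentration=2.0, rate=1.0)

loss = -model.marginal_log_likelihood(likelihood, output, train_y)
loss = loss - lengthscale_prior.log_prob(model.covar_module.log_lengthscale.exp()).sum()
loss.backward()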

There are some remnants of support for priors (e.g. https://github.com/cornellius-gp/gpytorch/blob/master/gpytorch/module.py#L101), but those don't seem to do anything.

We should add the ability to register a prior for a given parameter when registering that parameter (let's worry about joint priors over multiple parameters later). With the much improved distributions module in 0.4, I suggest we just use those Distribution objects directly for the priors.

Having first-class support for priors will also allow us to eliminate some funkiness with how the log-lengthscale bounds are being handled, by using an appropriate penalty prior that smoothly goes to zero at the boundary of the specified range.

Installation error

Hi, while installing the package I get the following error.

Collecting git+https://github.com/cornellius-gp/gpytorch.git
Cloning https://github.com/cornellius-gp/gpytorch.git to /tmp/pip-7qp9fn9f-build
Requirement already satisfied: cffi>=1.4.0 in ./.conda/envs/test_env/lib/python3.5/site-packages (from gpytorch==0.1)
Requirement already satisfied: pycparser in ./.conda/envs/test_env/lib/python3.5/site-packages (from cffi>=1.4.0->gpytorch==0.1)
Installing collected packages: gpytorch
  Running setup.py install for gpytorch ... error
    Complete output from command /u/18/gadichs1/unix/.conda/envs/test_env/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-7qp9fn9f-build/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-04cqd2t7-record/install-record.txt --single-version-externally-managed --compile:
    running install
    running build
    running build_py
    creating build
    creating build/lib.linux-x86_64-3.5
    creating build/lib.linux-x86_64-3.5/gpytorch
    copying gpytorch/module.py -> build/lib.linux-x86_64-3.5/gpytorch
    copying gpytorch/beta_features.py -> build/lib.linux-x86_64-3.5/gpytorch
    copying gpytorch/__init__.py -> build/lib.linux-x86_64-3.5/gpytorch
    copying gpytorch/settings.py -> build/lib.linux-x86_64-3.5/gpytorch
    creating build/lib.linux-x86_64-3.5/test
    copying test/__init__.py -> build/lib.linux-x86_64-3.5/test
    creating build/lib.linux-x86_64-3.5/gpytorch/kernels
    copying gpytorch/kernels/grid_interpolation_kernel.py -> build/lib.linux-x86_64-3.5/gpytorch/kernels
    copying gpytorch/kernels/linear_kernel.py -> build/lib.linux-x86_64-3.5/gpytorch/kernels
    copying gpytorch/kernels/kernel.py -> build/lib.linux-x86_64-3.5/gpytorch/kernels
    copying gpytorch/kernels/additive_grid_interpolation_kernel.py -> build/lib.linux-x86_64-3.5/gpytorch/kernels
    copying gpytorch/kernels/multiplicative_grid_interpolation_kernel.py -> build/lib.linux-x86_64-3.5/gpytorch/kernels
    copying gpytorch/kernels/grid_kernel.py -> build/lib.linux-x86_64-3.5/gpytorch/kernels
    copying gpytorch/kernels/matern_kernel.py -> build/lib.linux-x86_64-3.5/gpytorch/kernels
    copying gpytorch/kernels/__init__.py -> build/lib.linux-x86_64-3.5/gpytorch/kernels
    copying gpytorch/kernels/index_kernel.py -> build/lib.linux-x86_64-3.5/gpytorch/kernels
    copying gpytorch/kernels/spectral_mixture_kernel.py -> build/lib.linux-x86_64-3.5/gpytorch/kernels
    copying gpytorch/kernels/rbf_kernel.py -> build/lib.linux-x86_64-3.5/gpytorch/kernels
    copying gpytorch/kernels/periodic_kernel.py -> build/lib.linux-x86_64-3.5/gpytorch/kernels
    creating build/lib.linux-x86_64-3.5/gpytorch/mlls
    copying gpytorch/mlls/exact_marginal_log_likelihood.py -> build/lib.linux-x86_64-3.5/gpytorch/mlls
    copying gpytorch/mlls/marginal_log_likelihood.py -> build/lib.linux-x86_64-3.5/gpytorch/mlls
    copying gpytorch/mlls/__init__.py -> build/lib.linux-x86_64-3.5/gpytorch/mlls
    copying gpytorch/mlls/variational_marginal_log_likelihood.py -> build/lib.linux-x86_64-3.5/gpytorch/mlls
    creating build/lib.linux-x86_64-3.5/gpytorch/models
    copying gpytorch/models/abstract_variational_gp.py -> build/lib.linux-x86_64-3.5/gpytorch/models
    copying gpytorch/models/additive_grid_inducing_variational_gp.py -> build/lib.linux-x86_64-3.5/gpytorch/models
    copying gpytorch/models/exact_gp.py -> build/lib.linux-x86_64-3.5/gpytorch/models
    copying gpytorch/models/grid_inducing_variational_gp.py -> build/lib.linux-x86_64-3.5/gpytorch/models
    copying gpytorch/models/__init__.py -> build/lib.linux-x86_64-3.5/gpytorch/models
    copying gpytorch/models/variational_gp.py -> build/lib.linux-x86_64-3.5/gpytorch/models
    copying gpytorch/models/gp.py -> build/lib.linux-x86_64-3.5/gpytorch/models
    creating build/lib.linux-x86_64-3.5/gpytorch/functions
    copying gpytorch/functions/dsmm.py -> build/lib.linux-x86_64-3.5/gpytorch/functions
    copying gpytorch/functions/__init__.py -> build/lib.linux-x86_64-3.5/gpytorch/functions
    copying gpytorch/functions/log_normal_cdf.py -> build/lib.linux-x86_64-3.5/gpytorch/functions
    copying gpytorch/functions/normal_cdf.py -> build/lib.linux-x86_64-3.5/gpytorch/functions
    copying gpytorch/functions/add_diag.py -> build/lib.linux-x86_64-3.5/gpytorch/functions
    creating build/lib.linux-x86_64-3.5/gpytorch/variational
    copying gpytorch/variational/variational_strategy.py -> build/lib.linux-x86_64-3.5/gpytorch/variational
    copying gpytorch/variational/__init__.py -> build/lib.linux-x86_64-3.5/gpytorch/variational
    copying gpytorch/variational/mvn_variational_strategy.py -> build/lib.linux-x86_64-3.5/gpytorch/variational
    creating build/lib.linux-x86_64-3.5/gpytorch/means
    copying gpytorch/means/__init__.py -> build/lib.linux-x86_64-3.5/gpytorch/means
    copying gpytorch/means/mean.py -> build/lib.linux-x86_64-3.5/gpytorch/means
    copying gpytorch/means/constant_mean.py -> build/lib.linux-x86_64-3.5/gpytorch/means
    creating build/lib.linux-x86_64-3.5/gpytorch/lazy
    copying gpytorch/lazy/constant_mul_lazy_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/lazy
    copying gpytorch/lazy/non_lazy_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/lazy
    copying gpytorch/lazy/interpolated_lazy_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/lazy
    copying gpytorch/lazy/kronecker_product_lazy_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/lazy
    copying gpytorch/lazy/matmul_lazy_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/lazy
    copying gpytorch/lazy/lazy_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/lazy
    copying gpytorch/lazy/__init__.py -> build/lib.linux-x86_64-3.5/gpytorch/lazy
    copying gpytorch/lazy/mul_lazy_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/lazy
    copying gpytorch/lazy/toeplitz_lazy_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/lazy
    copying gpytorch/lazy/root_lazy_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/lazy
    copying gpytorch/lazy/sum_batch_lazy_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/lazy
    copying gpytorch/lazy/sum_lazy_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/lazy
    copying gpytorch/lazy/block_diagonal_lazy_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/lazy
    copying gpytorch/lazy/diag_lazy_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/lazy
    copying gpytorch/lazy/psd_sum_lazy_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/lazy
    creating build/lib.linux-x86_64-3.5/gpytorch/likelihoods
    copying gpytorch/likelihoods/__init__.py -> build/lib.linux-x86_64-3.5/gpytorch/likelihoods
    copying gpytorch/likelihoods/likelihood.py -> build/lib.linux-x86_64-3.5/gpytorch/likelihoods
    copying gpytorch/likelihoods/bernoulli_likelihood.py -> build/lib.linux-x86_64-3.5/gpytorch/likelihoods
    copying gpytorch/likelihoods/softmax_likelihood.py -> build/lib.linux-x86_64-3.5/gpytorch/likelihoods
    copying gpytorch/likelihoods/gaussian_likelihood.py -> build/lib.linux-x86_64-3.5/gpytorch/likelihoods
    creating build/lib.linux-x86_64-3.5/gpytorch/libfft
    copying gpytorch/libfft/__init__.py -> build/lib.linux-x86_64-3.5/gpytorch/libfft
    creating build/lib.linux-x86_64-3.5/gpytorch/random_variables
    copying gpytorch/random_variables/bernoulli_random_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/random_variables
    copying gpytorch/random_variables/dirichlet_random_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/random_variables
    copying gpytorch/random_variables/__init__.py -> build/lib.linux-x86_64-3.5/gpytorch/random_variables
    copying gpytorch/random_variables/gaussian_random_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/random_variables
    copying gpytorch/random_variables/samples_random_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/random_variables
    copying gpytorch/random_variables/mixture_random_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/random_variables
    copying gpytorch/random_variables/random_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/random_variables
    copying gpytorch/random_variables/categorical_random_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/random_variables
    creating build/lib.linux-x86_64-3.5/gpytorch/utils
    copying gpytorch/utils/toeplitz.py -> build/lib.linux-x86_64-3.5/gpytorch/utils
    copying gpytorch/utils/linear_cg.py -> build/lib.linux-x86_64-3.5/gpytorch/utils
    copying gpytorch/utils/function_factory.py -> build/lib.linux-x86_64-3.5/gpytorch/utils
    copying gpytorch/utils/fft.py -> build/lib.linux-x86_64-3.5/gpytorch/utils
    copying gpytorch/utils/interpolation.py -> build/lib.linux-x86_64-3.5/gpytorch/utils
    copying gpytorch/utils/__init__.py -> build/lib.linux-x86_64-3.5/gpytorch/utils
    copying gpytorch/utils/sparse.py -> build/lib.linux-x86_64-3.5/gpytorch/utils
    copying gpytorch/utils/stochastic_lq.py -> build/lib.linux-x86_64-3.5/gpytorch/utils
    copying gpytorch/utils/lanczos.py -> build/lib.linux-x86_64-3.5/gpytorch/utils
    copying gpytorch/utils/circulant.py -> build/lib.linux-x86_64-3.5/gpytorch/utils
    creating build/lib.linux-x86_64-3.5/test/kernels
    copying test/kernels/test_rbf_kernel.py -> build/lib.linux-x86_64-3.5/test/kernels
    copying test/kernels/test_additive_kernel.py -> build/lib.linux-x86_64-3.5/test/kernels
    copying test/kernels/test_linear_kernel.py -> build/lib.linux-x86_64-3.5/test/kernels
    copying test/kernels/__init__.py -> build/lib.linux-x86_64-3.5/test/kernels
    copying test/kernels/test_periodic_kernel.py -> build/lib.linux-x86_64-3.5/test/kernels
    creating build/lib.linux-x86_64-3.5/test/functions
    copying test/functions/test_dsmm.py -> build/lib.linux-x86_64-3.5/test/functions
    copying test/functions/__init__.py -> build/lib.linux-x86_64-3.5/test/functions
    copying test/functions/test_log_normal_cdf_test.py -> build/lib.linux-x86_64-3.5/test/functions
    copying test/functions/test_add_diag.py -> build/lib.linux-x86_64-3.5/test/functions
    creating build/lib.linux-x86_64-3.5/test/lazy
    copying test/lazy/test_root_lazy_variable.py -> build/lib.linux-x86_64-3.5/test/lazy
    copying test/lazy/test_diag_lazy_variable.py -> build/lib.linux-x86_64-3.5/test/lazy
    copying test/lazy/test_block_diagonal_lazy_variable.py -> build/lib.linux-x86_64-3.5/test/lazy
    copying test/lazy/test_constant_mul_lazy_variable.py -> build/lib.linux-x86_64-3.5/test/lazy
    copying test/lazy/test_toeplizt_lazy_variable.py -> build/lib.linux-x86_64-3.5/test/lazy
    copying test/lazy/__init__.py -> build/lib.linux-x86_64-3.5/test/lazy
    copying test/lazy/test_interpolated_lazy_variable.py -> build/lib.linux-x86_64-3.5/test/lazy
    copying test/lazy/test_kronecker_product_lazy_variable.py -> build/lib.linux-x86_64-3.5/test/lazy
    copying test/lazy/test_sum_lazy_variable.py -> build/lib.linux-x86_64-3.5/test/lazy
    copying test/lazy/test_matmul_lazy_variable.py -> build/lib.linux-x86_64-3.5/test/lazy
    copying test/lazy/test_mul_lazy_variable_test.py -> build/lib.linux-x86_64-3.5/test/lazy
    creating build/lib.linux-x86_64-3.5/test/util
    copying test/util/test_fft.py -> build/lib.linux-x86_64-3.5/test/util
    copying test/util/test_cubic_interpolation.py -> build/lib.linux-x86_64-3.5/test/util
    copying test/util/test_linear_cg.py -> build/lib.linux-x86_64-3.5/test/util
    copying test/util/test_lanczos.py -> build/lib.linux-x86_64-3.5/test/util
    copying test/util/__init__.py -> build/lib.linux-x86_64-3.5/test/util
    copying test/util/test_sparse.py -> build/lib.linux-x86_64-3.5/test/util
    copying test/util/test_toeplitz.py -> build/lib.linux-x86_64-3.5/test/util
    copying test/util/test_function_factory.py -> build/lib.linux-x86_64-3.5/test/util
    copying test/util/test_interp.py -> build/lib.linux-x86_64-3.5/test/util
    copying test/util/test_tridiag.py -> build/lib.linux-x86_64-3.5/test/util
    copying test/util/test_circulant.py -> build/lib.linux-x86_64-3.5/test/util
    creating build/lib.linux-x86_64-3.5/test/examples
    copying test/examples/test_simple_gp_classification.py -> build/lib.linux-x86_64-3.5/test/examples
    copying test/examples/test_kissgp_multiplicative_regression.py -> build/lib.linux-x86_64-3.5/test/examples
    copying test/examples/test_kissgp_variational_regression.py -> build/lib.linux-x86_64-3.5/test/examples
    copying test/examples/test_kissgp_kronecker_product_classification.py -> build/lib.linux-x86_64-3.5/test/examples
    copying test/examples/test_kissgp_kronecker_product_regression.py -> build/lib.linux-x86_64-3.5/test/examples
    copying test/examples/test_kissgp_additive_classification.py -> build/lib.linux-x86_64-3.5/test/examples
    copying test/examples/test_multitask_gp_regression.py -> build/lib.linux-x86_64-3.5/test/examples
    copying test/examples/test_kissgp_additive_regression.py -> build/lib.linux-x86_64-3.5/test/examples
    copying test/examples/test_kissgp_gp_regression.py -> build/lib.linux-x86_64-3.5/test/examples
    copying test/examples/test_kissgp_gp_classification.py -> build/lib.linux-x86_64-3.5/test/examples
    copying test/examples/__init__.py -> build/lib.linux-x86_64-3.5/test/examples
    copying test/examples/test_spectral_mixture_gp_regression.py -> build/lib.linux-x86_64-3.5/test/examples
    copying test/examples/test_simple_gp_regression.py -> build/lib.linux-x86_64-3.5/test/examples
    running build_ext
    generating cffi module 'build/temp.linux-x86_64-3.5/gpytorch.libfft._libfft.c'
    creating build/temp.linux-x86_64-3.5
    building 'gpytorch.libfft._libfft' extension
    creating build/temp.linux-x86_64-3.5/build
    creating build/temp.linux-x86_64-3.5/build/temp.linux-x86_64-3.5
    creating build/temp.linux-x86_64-3.5/tmp
    creating build/temp.linux-x86_64-3.5/tmp/pip-7qp9fn9f-build
    creating build/temp.linux-x86_64-3.5/tmp/pip-7qp9fn9f-build/gpytorch
    creating build/temp.linux-x86_64-3.5/tmp/pip-7qp9fn9f-build/gpytorch/csrc
    gcc -pthread -B /u/18/gadichs1/unix/.conda/envs/test_env/compiler_compat -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/u/18/gadichs1/unix/.conda/envs/test_env/lib/python3.5/site-packages/torch/utils/ffi/../../lib/include -I/u/18/gadichs1/unix/.conda/envs/test_env/lib/python3.5/site-packages/torch/utils/ffi/../../lib/include/TH -I/u/18/gadichs1/unix/.conda/envs/test_env/lib/python3.5/site-packages/torch/utils/ffi/../../lib/include/THC -I/u/18/gadichs1/unix/.conda/envs/test_env/include/python3.5m -c build/temp.linux-x86_64-3.5/gpytorch.libfft._libfft.c -o build/temp.linux-x86_64-3.5/build/temp.linux-x86_64-3.5/gpytorch.libfft._libfft.o -std=c99
    gcc -pthread -B /u/18/gadichs1/unix/.conda/envs/test_env/compiler_compat -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/u/18/gadichs1/unix/.conda/envs/test_env/lib/python3.5/site-packages/torch/utils/ffi/../../lib/include -I/u/18/gadichs1/unix/.conda/envs/test_env/lib/python3.5/site-packages/torch/utils/ffi/../../lib/include/TH -I/u/18/gadichs1/unix/.conda/envs/test_env/lib/python3.5/site-packages/torch/utils/ffi/../../lib/include/THC -I/u/18/gadichs1/unix/.conda/envs/test_env/include/python3.5m -c /tmp/pip-7qp9fn9f-build/gpytorch/csrc/fft.c -o build/temp.linux-x86_64-3.5/tmp/pip-7qp9fn9f-build/gpytorch/csrc/fft.o -std=c99
    /tmp/pip-7qp9fn9f-build/gpytorch/csrc/fft.c:2:19: fatal error: fftw3.h: No such file or directory
    compilation terminated.
    error: command 'gcc' failed with exit status 1
    
    ----------------------------------------
Command "/u/18/gadichs1/unix/.conda/envs/test_env/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-7qp9fn9f-build/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-04cqd2t7-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-7qp9fn9f-build/

I'm on Ubuntu 16.04; here is my conda list.

# Name                    Version                   Build  Channel
ca-certificates           2018.1.18                     0    conda-forge
certifi                   2018.1.18                py35_0    conda-forge
cffi                      1.11.5                   py35_0    conda-forge
cudatoolkit               8.0                           3  
cudnn                     7.0.5                 cuda8.0_0  
fftw                      3.3.7                         0    conda-forge
freetype                  2.8.1                         0    conda-forge
intel-openmp              2018.0.0             hc7b2577_8  
jpeg                      9b                            2    conda-forge
libedit                   3.1                  heed3624_0  
libffi                    3.2.1                hd88cf55_4  
libgcc-ng                 7.2.0                hdf63c60_3  
libgfortran-ng            7.2.0                hdf63c60_3  
libpng                    1.6.34                        0    conda-forge
libstdcxx-ng              7.2.0                hdf63c60_3  
libtiff                   4.0.9                         0    conda-forge
mkl                       2018.0.1             h19d6760_4  
ncurses                   6.0                  h9df7e31_2  
numpy                     1.14.2           py35hdbf6ddf_0  
olefile                   0.45.1                   py35_0    conda-forge
openssl                   1.0.2n                        0    conda-forge
pillow                    5.0.0                    py35_0    conda-forge
pip                       9.0.2                     <pip>
pip                       9.0.1                    py35_5  
pycparser                 2.18                     py35_0    conda-forge
python                    3.5.5                hc3d631a_1  
pytorch                   0.3.1           py35_cuda8.0.61_cudnn7.0.5_2    pytorch
readline                  7.0                  ha6073c6_4  
setuptools                39.0.1
setuptools                38.5.1                   py35_0  
six                       1.11.0                   py35_1    conda-forge
sqlite                    3.22.0               h1bed415_0  
tk                        8.6.7                hc745277_3  
torchvision               0.2.0                     <pip>
torchvision               0.2.0                    py35_0    conda-forge
wheel                     0.30.0           py35hd3883cf_1  
xz                        5.2.3                h55aa19d_2  
zlib                      1.2.11               ha838bed_2  

The gcc version is currently:

gcc (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Any help is really appreciated.

Move to unittest testing framework

Let's move the testing framework to Python's unittest. The main reason for this is that it's easier to integrate continuous testing into external build environments.
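As a concrete illustration of the target style, here is a minimal, self-contained unittest-based test; the test case and its contents are hypothetical, not an existing gpytorch test:

import unittest
import torch

class TestRBFKernelSmoke(unittest.TestCase):
    # Hypothetical smoke test showing the unittest layout; real gpytorch
    # tests would exercise the actual kernel implementations instead.
    def test_kernel_matrix_is_symmetric(self):
        x = torch.randn(5, 1)
        dist = (x - x.t()).pow(2)
        covar = torch.exp(-0.5 * dist)
        self.assertTrue(torch.allclose(covar, covar.t()))

if __name__ == '__main__':
    unittest.main()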

LazyVariables should define a get_row

Right now, gpytorch.utils.pivoted_cholesky uses LazyVariable.__getitem__ to access individual rows of LazyVariables. This is usually overkill: __getitem__ is far more general than a get_row method would need to be, and therefore far more complicated.

For example, we cannot use preconditioners with SGPR right now because InvQuadLazyVariable uses the default __getitem__.

Having LazyVariables define a get_row method that defaults to doing something like return self[i, :] (or self[:, i, :] in batch mode) would be a better solution, because writing special cases of get_row will usually be much easier than writing special cases of __getitem__.
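A rough sketch of that default, using a plain dense wrapper for illustration (the class and method names here are hypothetical, not the actual gpytorch API):

import torch

class DenseLazyVariable(object):
    def __init__(self, tensor):
        self.tensor = tensor

    def __getitem__(self, index):
        # The general (and potentially expensive) indexing path.
        return self.tensor[index]

    def get_row(self, i):
        # Default: fall back to __getitem__; structured subclasses
        # (e.g. Toeplitz) could override this with a cheaper version.
        if self.tensor.dim() == 3:  # batch mode
            return self[:, i, :]
        return self[i, :]

lazy_var = DenseLazyVariable(torch.randn(4, 4))
row = lazy_var.get_row(2)  # equivalent to lazy_var[2, :]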

import failure on alpha_release branch

The following import error happens on the alpha_release branch (relevant changes here: 935fca9)

import gpytorch
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-2-72157b85bfdf> in <module>()
----> 1 import gpytorch

/data/users/balandat/fbsource/fbcode/buck-out/dev-nosan/gen/bento/kernels/bento_kernel_ae_dev#link-tree/gpytorch/__init__.py in <module>()
      1 from .module import Module
----> 2 import models
      3 import means
      4 import kernels
      5 from torch.autograd import Variable

ModuleNotFoundError: No module named 'models'
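For reference, this is the usual Python 3 behaviour: a bare `import models` inside a package no longer resolves to the sibling submodule, so the imports in gpytorch/__init__.py would need to be package-relative. A sketch based only on the lines shown in the traceback:

# gpytorch/__init__.py (sketch of the relative-import fix)
from .module import Module
from . import models
from . import means
from . import kernels
from torch.autograd import Variable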

Install error

Hi,

I just tried to install the library using the instructions on the repository front page, and got this error:

(py35_pytorch) ajay@ajay-h8-1170uk:~/PythonProjects/gpytorch-master$ python setup.py install
running install
running bdist_egg
running egg_info
creating gpytorch.egg-info
writing gpytorch.egg-info/PKG-INFO
writing top-level names to gpytorch.egg-info/top_level.txt
writing requirements to gpytorch.egg-info/requires.txt
writing dependency_links to gpytorch.egg-info/dependency_links.txt
writing manifest file 'gpytorch.egg-info/SOURCES.txt'
reading manifest file 'gpytorch.egg-info/SOURCES.txt'
writing manifest file 'gpytorch.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
creating build
creating build/lib.linux-x86_64-3.5
creating build/lib.linux-x86_64-3.5/gpytorch
copying gpytorch/gp_model.py -> build/lib.linux-x86_64-3.5/gpytorch
copying gpytorch/__init__.py -> build/lib.linux-x86_64-3.5/gpytorch
copying gpytorch/module.py -> build/lib.linux-x86_64-3.5/gpytorch
creating build/lib.linux-x86_64-3.5/gpytorch/lazy
copying gpytorch/lazy/__init__.py -> build/lib.linux-x86_64-3.5/gpytorch/lazy
copying gpytorch/lazy/lazy_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/lazy
copying gpytorch/lazy/toeplitz_lazy_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/lazy
creating build/lib.linux-x86_64-3.5/gpytorch/random_variables
copying gpytorch/random_variables/batch_random_variables.py -> build/lib.linux-x86_64-3.5/gpytorch/random_variables
copying gpytorch/random_variables/__init__.py -> build/lib.linux-x86_64-3.5/gpytorch/random_variables
copying gpytorch/random_variables/gaussian_random_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/random_variables
copying gpytorch/random_variables/bernoulli_random_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/random_variables
copying gpytorch/random_variables/categorical_random_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/random_variables
copying gpytorch/random_variables/constant_random_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/random_variables
copying gpytorch/random_variables/independent_random_variables.py -> build/lib.linux-x86_64-3.5/gpytorch/random_variables
copying gpytorch/random_variables/samples_random_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/random_variables
copying gpytorch/random_variables/random_variable.py -> build/lib.linux-x86_64-3.5/gpytorch/random_variables
creating build/lib.linux-x86_64-3.5/gpytorch/inference
copying gpytorch/inference/__init__.py -> build/lib.linux-x86_64-3.5/gpytorch/inference
copying gpytorch/inference/inference.py -> build/lib.linux-x86_64-3.5/gpytorch/inference
creating build/lib.linux-x86_64-3.5/gpytorch/functions
copying gpytorch/functions/normal_cdf.py -> build/lib.linux-x86_64-3.5/gpytorch/functions
copying gpytorch/functions/__init__.py -> build/lib.linux-x86_64-3.5/gpytorch/functions
copying gpytorch/functions/add_diag.py -> build/lib.linux-x86_64-3.5/gpytorch/functions
copying gpytorch/functions/dsmm.py -> build/lib.linux-x86_64-3.5/gpytorch/functions
copying gpytorch/functions/log_normal_cdf.py -> build/lib.linux-x86_64-3.5/gpytorch/functions
copying gpytorch/functions/exact_gp_marginal_log_likelihood.py -> build/lib.linux-x86_64-3.5/gpytorch/functions
copying gpytorch/functions/trace_log_det_quad_form.py -> build/lib.linux-x86_64-3.5/gpytorch/functions
creating build/lib.linux-x86_64-3.5/gpytorch/libfft
copying gpytorch/libfft/__init__.py -> build/lib.linux-x86_64-3.5/gpytorch/libfft
creating build/lib.linux-x86_64-3.5/gpytorch/utils
copying gpytorch/utils/kronecker_product.py -> build/lib.linux-x86_64-3.5/gpytorch/utils
copying gpytorch/utils/fft.py -> build/lib.linux-x86_64-3.5/gpytorch/utils
copying gpytorch/utils/__init__.py -> build/lib.linux-x86_64-3.5/gpytorch/utils
copying gpytorch/utils/function_factory.py -> build/lib.linux-x86_64-3.5/gpytorch/utils
copying gpytorch/utils/lanczos_quadrature.py -> build/lib.linux-x86_64-3.5/gpytorch/utils
copying gpytorch/utils/toeplitz.py -> build/lib.linux-x86_64-3.5/gpytorch/utils
copying gpytorch/utils/interpolation.py -> build/lib.linux-x86_64-3.5/gpytorch/utils
copying gpytorch/utils/lincg.py -> build/lib.linux-x86_64-3.5/gpytorch/utils
creating build/lib.linux-x86_64-3.5/gpytorch/kernels
copying gpytorch/kernels/__init__.py -> build/lib.linux-x86_64-3.5/gpytorch/kernels
copying gpytorch/kernels/rbf_kernel.py -> build/lib.linux-x86_64-3.5/gpytorch/kernels
copying gpytorch/kernels/kernel.py -> build/lib.linux-x86_64-3.5/gpytorch/kernels
copying gpytorch/kernels/index_kernel.py -> build/lib.linux-x86_64-3.5/gpytorch/kernels
copying gpytorch/kernels/grid_interpolation_kernel.py -> build/lib.linux-x86_64-3.5/gpytorch/kernels
copying gpytorch/kernels/spectral_mixture_kernel.py -> build/lib.linux-x86_64-3.5/gpytorch/kernels
creating build/lib.linux-x86_64-3.5/gpytorch/means
copying gpytorch/means/constant_mean.py -> build/lib.linux-x86_64-3.5/gpytorch/means
copying gpytorch/means/__init__.py -> build/lib.linux-x86_64-3.5/gpytorch/means
copying gpytorch/means/mean.py -> build/lib.linux-x86_64-3.5/gpytorch/means
creating build/lib.linux-x86_64-3.5/gpytorch/likelihoods
copying gpytorch/likelihoods/__init__.py -> build/lib.linux-x86_64-3.5/gpytorch/likelihoods
copying gpytorch/likelihoods/bernoulli_likelihood.py -> build/lib.linux-x86_64-3.5/gpytorch/likelihoods
copying gpytorch/likelihoods/likelihood.py -> build/lib.linux-x86_64-3.5/gpytorch/likelihoods
copying gpytorch/likelihoods/gaussian_likelihood.py -> build/lib.linux-x86_64-3.5/gpytorch/likelihoods
creating build/lib.linux-x86_64-3.5/gpytorch/inference/posterior_models
copying gpytorch/inference/posterior_models/gp_posterior.py -> build/lib.linux-x86_64-3.5/gpytorch/inference/posterior_models
copying gpytorch/inference/posterior_models/__init__.py -> build/lib.linux-x86_64-3.5/gpytorch/inference/posterior_models
copying gpytorch/inference/posterior_models/variational_gp_posterior.py -> build/lib.linux-x86_64-3.5/gpytorch/inference/posterior_models
copying gpytorch/inference/posterior_models/exact_gp_posterior.py -> build/lib.linux-x86_64-3.5/gpytorch/inference/posterior_models
creating build/lib.linux-x86_64-3.5/gpytorch/functions/lazy_toeplitz
copying gpytorch/functions/lazy_toeplitz/toeplitz_trace_log_det_quad_form.py -> build/lib.linux-x86_64-3.5/gpytorch/functions/lazy_toeplitz
copying gpytorch/functions/lazy_toeplitz/__init__.py -> build/lib.linux-x86_64-3.5/gpytorch/functions/lazy_toeplitz
copying gpytorch/functions/lazy_toeplitz/interpolated_toeplitz_gp_marginal_log_likelihood.py -> build/lib.linux-x86_64-3.5/gpytorch/functions/lazy_toeplitz
running build_ext
generating cffi module 'build/temp.linux-x86_64-3.5/gpytorch.libfft._libfft.c'
creating build/temp.linux-x86_64-3.5
building 'gpytorch.libfft._libfft' extension
creating build/temp.linux-x86_64-3.5/build
creating build/temp.linux-x86_64-3.5/build/temp.linux-x86_64-3.5
creating build/temp.linux-x86_64-3.5/home
creating build/temp.linux-x86_64-3.5/home/ajay
creating build/temp.linux-x86_64-3.5/home/ajay/PythonProjects
creating build/temp.linux-x86_64-3.5/home/ajay/PythonProjects/gpytorch-master
creating build/temp.linux-x86_64-3.5/home/ajay/PythonProjects/gpytorch-master/gpytorch
creating build/temp.linux-x86_64-3.5/home/ajay/PythonProjects/gpytorch-master/gpytorch/csrc
gcc -pthread -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -DWITH_CUDA -I/home/ajay/anaconda3/envs/py35_pytorch/lib/python3.5/site-packages/torch/utils/ffi/../../lib/include -I/home/ajay/anaconda3/envs/py35_pytorch/lib/python3.5/site-packages/torch/utils/ffi/../../lib/include/TH -I/home/ajay/anaconda3/envs/py35_pytorch/lib/python3.5/site-packages/torch/utils/ffi/../../lib/include/THC -I/home/ajay/anaconda3/envs/py35_pytorch/include/python3.5m -c build/temp.linux-x86_64-3.5/gpytorch.libfft._libfft.c -o build/temp.linux-x86_64-3.5/build/temp.linux-x86_64-3.5/gpytorch.libfft._libfft.o
In file included from /home/ajay/anaconda3/envs/py35_pytorch/lib/python3.5/site-packages/torch/utils/ffi/../../lib/include/THC/THC.h:4:0,
                 from build/temp.linux-x86_64-3.5/gpytorch.libfft._libfft.c:434:
/home/ajay/anaconda3/envs/py35_pytorch/lib/python3.5/site-packages/torch/utils/ffi/../../lib/include/THC/THCGeneral.h:9:18: fatal error: cuda.h: No such file or directory
 #include "cuda.h"
                  ^
compilation terminated.
error: command 'gcc' failed with exit status 1
