
tensorly-notebooks's Introduction

Tensor methods in Python with TensorLy

This repository contains a series of tutorials and examples on tensor learning, implemented in Python with TensorLy, showing how to combine tensor methods with deep learning using the MXNet, PyTorch, and TensorFlow frameworks as backends.

Installation

You will need the latest version of TensorLy installed to run these examples, as explained in the installation instructions.

The easiest way is to clone the repository:

git clone https://github.com/tensorly/tensorly
cd tensorly
pip install -e .

Then simply clone this repository:

git clone https://github.com/JeanKossaifi/tensorly_notebooks

You are ready to go!
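To verify the setup, a quick sanity check (the exact version string will differ depending on when you install):

import tensorly as tl
import numpy as np

print(tl.__version__)

# Build a small tensor and unfold it along the first mode
X = tl.tensor(np.arange(24.0).reshape((3, 4, 2)))
print(tl.unfold(X, mode=0).shape)  # (3, 8)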

Table of contents

1 - Tensor basics

2 - Tensor decomposition

3 - Tensor regression

4 - Tensor methods and deep learning with the MXNet backend

5 - Tensor methods and deep learning with the PyTorch backend

6 - Tensor methods and deep learning with the TensorFlow backend

Useful resources

The following are very useful sources of information and I highly recommend you check them out:

tensorly-notebooks's People

Contributors

animakumar, arinbjornk, asmeurer, bkmgit, hameerabbasi, jacobgil, jeankossaifi, scopatz


tensorly-notebooks's Issues

Need a little help regarding regression after autoscaling

Hi,
I have a spectroscopic 3-way X tensor and a corresponding y vector. My steps are as follows:
Start from a 2D spectral matrix (samples x wavelengths), autoscale it, and add the derivative spectra as the third dimension. Next, I use this 3-way data tensor in a Tucker regression against the vector of concentrations.

I tried to construct a Tucker regression model using X without autoscaling, but the RMSE was large even though y_predicted was similar to the y vector. When I autoscaled the X data using StandardScaler().fit_transform, the RMSE became accurate, but only after adding the mean of the y vector to y_predicted (element-wise).

I can't figure out why this happens. I'd appreciate any help, thanks.

Additionally, I would appreciate it if you could suggest a source for understanding this type of regression, for a beginner in 3-way data.

Thanks
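For reference, a minimal sketch of Tucker regression with centered targets, using TensorLy's TuckerRegressor (the synthetic data, rank choice, and the no-intercept assumption below are illustrative, not the original spectroscopic setup):

import numpy as np
import tensorly as tl
from tensorly.regression.tucker_regression import TuckerRegressor

rng = np.random.default_rng(0)
X = tl.tensor(rng.standard_normal((50, 20, 2)))  # (samples, wavelengths, derivative order)
y = rng.standard_normal(50) + 5.0                # targets with a nonzero mean

# Assuming the regression fits no intercept term, center y before fitting
# and add the mean back to the predictions afterwards
y_mean = y.mean()
estimator = TuckerRegressor(weight_ranks=[2, 2], verbose=0)
estimator.fit(X, y - y_mean)
y_pred = estimator.predict(X) + y_mean

If the model fits no intercept, this would explain why adding the mean of y back to the predictions fixed the RMSE.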

My question

I want to know whether the 'shape' and the 'rank' in the Tucker decomposition are always the same. Could their values be different? How are their values determined?
I'm a beginner in this area. Hope you can reply to me.
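For illustration, a small sketch showing that the Tucker rank is chosen per mode and can be smaller than the tensor's shape (the rank values here are arbitrary):

import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

X = tl.tensor(np.random.random_sample((4, 5, 6)))  # shape (4, 5, 6)
core, factors = tucker(X, rank=[2, 3, 2])          # one rank per mode, each <= the mode's size

print(core.shape)                  # (2, 3, 2): the rank, not the shape
print([f.shape for f in factors])  # [(4, 2), (5, 3), (6, 2)]

When the rank equals the shape, the decomposition is exact but gives no compression; smaller ranks trade accuracy for compression.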

little bug in tensor_regression_layer_pytorch.ipynb

Hi Jean,

just wanted to point out that in the example tensor_regression_layer_pytorch.ipynb,
in the TRL layer forward pass, this line: regression_weights = tl.tucker_to_tensor(self.core, self.factors)
should instead be regression_weights = tl.tucker_to_tensor((self.core, self.factors)), otherwise you get an error.
This happened to me while working on my own code using the latest version.

Let me know if you get it too when running the example.
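A minimal reproduction of the corrected call (in recent TensorLy versions, tucker_to_tensor takes a single (core, factors) tuple rather than two arguments):

import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

X = tl.tensor(np.random.random_sample((3, 4, 5)))
core, factors = tucker(X, rank=[2, 2, 2])

# New API: pass the Tucker tensor as one tuple
reconstruction = tl.tucker_to_tensor((core, factors))
print(reconstruction.shape)  # (3, 4, 5)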

AttributeError: 'Tensor' object has no attribute 'numpy'

Hi, when I used the code below, I encountered the problem "AttributeError: 'Tensor' object has no attribute 'numpy'". Can you help me?

My environment is tensorflow==1.15, tensorly==0.6, on Google Colab.

import tensorflow as tf
tf.enable_eager_execution()

import tensorly as tl
tl.set_backend('tensorflow')
import numpy as np

tfe = tf.contrib.eager

from tensorly.tucker_tensor import tucker_to_tensor
from tensorly import check_random_state
from tensorly.metrics import RMSE
from tensorly.decomposition import tucker

import sys
import os
from tensorflow.python.framework import ops

random_state = 1234
rng = check_random_state(random_state)
shape = [5, 12, 120]
ten = tfe.Variable(tl.tensor(rng.random_sample(shape)))
ten1 = tfe.Variable(tl.tensor(np.round(np.random.normal(0, 0.1, size=(5, 12, 120)), 2)))

c, f = tucker(tf.convert_to_tensor(ten), rank=[2, 2, 2])
Issue

I want to use tucker when the variable is a tf.Variable, with tl.tucker_to_tensor inside a tf loop. Can you provide examples like tensorflow_tucker.ipynb?

Thank you!
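For reference, a minimal sketch of Tucker decomposition on a tf.Variable with the TensorFlow backend. This is written for TensorFlow 2.x, where eager execution is the default; it is an assumption, not the notebook's exact setup:

import numpy as np
import tensorflow as tf
import tensorly as tl
from tensorly.decomposition import tucker

tl.set_backend('tensorflow')

# Decompose the current value of a tf.Variable
var = tf.Variable(np.random.random_sample((5, 12, 120)))
core, factors = tucker(tf.convert_to_tensor(var), rank=[2, 2, 2])
print(core.shape)  # (2, 2, 2)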

Nonnegative Tucker Decomposition error

Hello! I am receiving an error when performing nonnegative Tucker decomposition. Regular Tucker decomposition works fine, and so do PARAFAC and nonnegative PARAFAC.

(screenshot of the error traceback attached in the original issue)

Cannot cast array data from dtype('uint64') to dtype('int64') according to the rule 'safe'

I'm running the sparse Tucker examples in [07_pydata_sparse_backend]. However, when executing tensor.max(), a TypeError is reported.

TypeError                                 Traceback (most recent call last)
Input In [9], in <cell line: 2>()
      1 # The most frequently a word has been used in a single paper
----> 2 tensor.max()

File ~/bench/sparse/sparse/_sparse_array.py:444, in SparseArray.max(self, axis, keepdims, out)
    421 def max(self, axis=None, keepdims=False, out=None):
    422     """
    423     Maximize along the given axes. Uses all axes by default.
    424 
   (...)
    442     scipy.sparse.coo_matrix.max : Equivalent Scipy function.
    443     """
--> 444     return np.maximum.reduce(self, out=out, axis=axis, keepdims=keepdims)

File ~/bench/sparse/sparse/_sparse_array.py:307, in SparseArray.__array_ufunc__(self, ufunc, method, *inputs, **kwargs)
    305     result = elemwise(ufunc, *inputs, **kwargs)
    306 elif method == "reduce":
--> 307     result = SparseArray._reduce(ufunc, *inputs, **kwargs)
    308 else:
    309     return NotImplemented

File ~/bench/sparse/sparse/_sparse_array.py:278, in SparseArray._reduce(method, *args, **kwargs)
    275 if isinstance(self, ss.spmatrix):
    276     self = type(self).from_scipy_sparse(self)
--> 278 return self.reduce(method, **kwargs)

File ~/bench/sparse/sparse/_sparse_array.py:360, in SparseArray.reduce(self, method, axis, keepdims, **kwargs)
    358 if not isinstance(axis, tuple):
    359     axis = (axis,)
--> 360 out = self._reduce_calc(method, axis, keepdims, **kwargs)
    361 if len(out) == 1:
    362     return out[0]

File ~/bench/sparse/sparse/_coo/core.py:698, in COO._reduce_calc(self, method, axis, keepdims, **kwargs)
    691 a = self.transpose(neg_axis + axis)
    692 a = a.reshape(
    693     (
    694         np.prod([self.shape[d] for d in neg_axis], dtype=np.intp),
    695         np.prod([self.shape[d] for d in axis], dtype=np.intp),
    696     )
    697 )
--> 698 data, inv_idx, counts = _grouped_reduce(a.data, a.coords[0], method, **kwargs)
    699 n_cols = a.shape[1]
    700 arr_attrs = (a, neg_axis, inv_idx)

File ~/bench/sparse/sparse/_coo/core.py:1574, in _grouped_reduce(x, groups, method, **kwargs)
   1571 # Partial credit to @shoyer
   1572 # Ref: https://gist.github.com/shoyer/f538ac78ae904c936844
   1573 inv_idx, counts = _calc_counts_invidx(groups)
-> 1574 result = method.reduceat(x, inv_idx, **kwargs)
   1575 return result, inv_idx, counts

TypeError: Cannot cast array data from dtype('uint64') to dtype('int64') according to the rule 'safe'

How can I solve this problem?
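As a possible workaround (an assumption, not a confirmed fix): a pydata/sparse COO array exposes its explicitly stored values as the .data attribute, so the maximum can be computed with NumPy directly, as long as the implicit zeros are accounted for:

import numpy as np
import sparse

# Illustrative stand-in; in the notebook, `tensor` is the sparse word-count tensor
tensor = sparse.random((10, 10, 10), density=0.1)

# .data holds only the explicitly stored values; the remaining entries are zero,
# so include 0 in the comparison unless the tensor is completely dense
stored_max = tensor.data.max() if tensor.nnz > 0 else 0
result = stored_max if tensor.nnz == np.prod(tensor.shape) else max(stored_max, 0)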

name 'inner' is not defined

In tensor_regression_layer_pytorch, I got an error:

in forward(self, x)
28 def forward(self, x):
29 regression_weights = tl.tucker_to_tensor(self.core, self.factors)
---> 30 return inner(x, regression_weights, n_modes=tl.ndim(x)-1) + self.bias
31
32 def penalty(self, order=2):

NameError: name 'inner' is not defined
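The error suggests a missing import: inner lives in tensorly.tenalg. A sketch of the likely fix, combined with the updated tucker_to_tensor API (assuming a recent TensorLy version):

import tensorly as tl
from tensorly.tenalg import inner

# Inside the TRL forward pass:
# regression_weights = tl.tucker_to_tensor((self.core, self.factors))
# return inner(x, regression_weights, n_modes=tl.ndim(x) - 1) + self.bias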

Adding a license file?

Hi,

It would be helpful if you added a license file, to communicate how you want to share this code.
Thanks!

CNN acceleration with TensorLy runs slower than without decomposition?

When I run the notebook (05_pytorch_backend/cnn_acceleration_tensorly_and_pytorch.ipynb), I find that the Tucker-decomposed convolutional layers run slowly: the decomposed VGG16 runs slower than the original VGG16. Why? How should I evaluate the runtime and calculate the speedup?
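For what it's worth, a sketch of how GPU inference time is usually measured in PyTorch (the model and input here are placeholders). CUDA calls are asynchronous, so torch.cuda.synchronize() is needed before reading the clock, and warm-up iterations should be excluded:

import time
import torch
import torchvision.models as models

model = models.vgg16().cuda().eval()
x = torch.randn(1, 3, 224, 224, device='cuda')

with torch.no_grad():
    for _ in range(10):           # warm-up: exclude one-time CUDA setup costs
        model(x)
    torch.cuda.synchronize()      # wait for pending kernels before starting the clock
    start = time.time()
    for _ in range(100):
        model(x)
    torch.cuda.synchronize()
    elapsed = (time.time() - start) / 100

print(f"avg forward pass: {elapsed * 1000:.2f} ms")

The speedup is then the ratio of the original model's average time to the decomposed model's average time, measured the same way.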

Tensor Decomposition Using GPU

This might be a stupid question, but I couldn't find a solution anywhere.

When I use a GPU to run non-negative decompositions of a random tensor, it is much slower than using a CPU (for various sizes). For reference, it takes 0.4 seconds on the CPU while it takes more than 10 seconds on the GPU to run a single decomposition (size 3x2x2, but the same holds for 100 x 100 x 1000). I have PyTorch, CUDA 11.1, and cuDNN on my computer, and my GPU is an RTX 3070, so it should theoretically beat my CPU?

matrix type must be 'f', 'd', 'F', or 'D'

Hi,
I just ran the example notebook "CP-decomposition" and I got the error below. Do you know the reason for it? The tensorly version I used is '0.4.3'.

Thank you,
X = tl.tensor(np.arange(24).reshape((3, 4, 2)))
factors = parafac(X, rank=2)


ValueError Traceback (most recent call last)
in ()
----> 1 factors = parafac(X, rank=2)

~/tensorly/tensorly/decomposition/candecomp_parafac.py in parafac(tensor, rank, n_iter_max, init, svd, tol, orthogonalise, random_state, verbose, return_errors)
164 orthogonalise = n_iter_max
165
--> 166 factors = initialize_factors(tensor, rank, init=init, svd=svd, random_state=random_state)
167 rec_errors = []
168 norm_tensor = tl.norm(tensor, 2)

~/tensorly/tensorly/decomposition/candecomp_parafac.py in initialize_factors(tensor, rank, init, svd, random_state, non_negative)
100 factors = []
101 for mode in range(tl.ndim(tensor)):
--> 102 U, _, _ = svd_fun(unfold(tensor, mode), n_eigenvecs=rank)
103
104 if tensor.shape[mode] < rank:

~/tensorly/tensorly/backend/numpy_backend.py in partial_svd(matrix, n_eigenvecs)
238 # First choose whether to use X * X.T or X.T *X
239 if dim_1 < dim_2:
--> 240 S, U = scipy.sparse.linalg.eigsh(np.dot(matrix, matrix.T.conj()), k=n_eigenvecs, which='LM')
241 S = np.sqrt(S)
242 V = np.dot(matrix.T.conj(), U * 1/S[None, :])

~/anaconda/lib/python3.6/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py in eigsh(A, k, M, sigma, which, v0, ncv, maxiter, tol, return_eigenvectors, Minv, OPinv, mode)
1661 params = _SymmetricArpackParams(n, k, A.dtype.char, matvec, mode,
1662 M_matvec, Minv_matvec, sigma,
-> 1663 ncv, v0, maxiter, which, tol)
1664
1665 with _ARPACK_LOCK:

~/anaconda/lib/python3.6/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py in init(self, n, k, tp, matvec, mode, M_matvec, Minv_matvec, sigma, ncv, v0, maxiter, which, tol)
511
512 _ArpackParams.init(self, n, k, tp, mode, sigma,
--> 513 ncv, v0, maxiter, which, tol)
514
515 if self.ncv > n or self.ncv <= k:

~/anaconda/lib/python3.6/site-packages/scipy/sparse/linalg/eigen/arpack/arpack.py in init(self, n, k, tp, mode, sigma, ncv, v0, maxiter, which, tol)
319
320 if tp not in 'fdFD':
--> 321 raise ValueError("matrix type must be 'f', 'd', 'F', or 'D'")
322
323 if v0 is not None:

ValueError: matrix type must be 'f', 'd', 'F', or 'D'
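The traceback points at the tensor's dtype: np.arange produces integers, which the ARPACK eigensolver rejects. A sketch of the likely fix, casting to float first (an assumption based on the error message):

import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Cast to float so the SVD-based initialization accepts the matrix type
X = tl.tensor(np.arange(24, dtype=np.float64).reshape((3, 4, 2)))
factors = parafac(X, rank=2)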

tensor_regression_layer_pytorch.ipynb | penalty term?

Hello,

Thank you for providing the tensor_regression_layer_pytorch.ipynb example. It was most helpful in understanding your paper. I ran the example successfully, reproducing the result:

Train Epoch: 19 [0/60000 (0%)] Loss: 0.006197
Train Epoch: 19 [16000/60000 (27%)] Loss: 0.006317
Train Epoch: 19 [32000/60000 (53%)] Loss: 0.006328
Train Epoch: 19 [48000/60000 (80%)] Loss: 0.006382
mean: 1.0914269488182526e-09

Test set: Average loss: 0.0000, Accuracy: 9894/10000 (98%)

However, I noticed that in the TRL class, in the member function penalty

    def penalty(self, order=2):
        penalty = tl.norm(self.core, order)
        for f in self.factors:
            penatly = penalty + tl.norm(f, order)
        return penalty

penalty has been inadvertently misspelled as penatly in the line

penatly = penalty + tl.norm(f, order)

which means that the factor penalties are not being included in the penalty sum.
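For clarity, the corrected member function would read (same logic, typo fixed):

    def penalty(self, order=2):
        penalty = tl.norm(self.core, order)
        for f in self.factors:
            penalty = penalty + tl.norm(f, order)
        return penalty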

Corrected this and reran the example, which yielded the puzzling result

Train Epoch: 19 [0/60000 (0%)] Loss: 2.302585
Train Epoch: 19 [16000/60000 (27%)] Loss: 2.302585
Train Epoch: 19 [32000/60000 (53%)] Loss: 2.302585
Train Epoch: 19 [48000/60000 (80%)] Loss: 2.302585
mean: 0.00023025853442959487

Test set: Average loss: 0.0002, Accuracy: 980/10000 (9%)

No improvement in the accuracy over the 19 epochs of the run.

In Section 4.1 (Implementation) of your paper, it is stated that

In addition we constrain the weights of the tensor regression by applying L2 normalization (Salimans & Kingma, 2016) to the factors of the Tucker decomposition.

I'm at a bit of a loss trying to understand and reconcile what is written in the paper with what is observed in the example. Any insight would be appreciated.

Thank you.
