
jutho / TensorOperations.jl


Julia package for tensor contractions and related operations

Home Page: https://jutho.github.io/TensorOperations.jl/stable/

License: Other

Language: Julia 100.00%
Topics: tensor, tensor-contraction, tensor-permutation, tensor-trace, tensor-transposition, tensor-operations, einstein-summation, index-notation

tensoroperations.jl's Introduction

TensorOperations.jl

Fast tensor operations using a convenient Einstein index notation.


What's new in v4

  • The @tensor macro now accepts keyword arguments that control a variety of options for debugging, contraction cost and order, and backend selection (a sketch follows after this list).

  • Experimental support for automatic differentiation has been added via reverse-mode ChainRules rules.

  • The interface for custom types has been changed and thoroughly documented, making it easier to know what to implement. As a consequence, tensors with more general element types are now supported.

  • There is a new interface for working with backends, which allows dynamic switching between different implementations of the primitive tensor operations, or between different strategies for allocating new tensor objects.

  • Support for CuArray objects has been moved to a package extension, avoiding unnecessary CUDA dependencies on Julia versions >= 1.9.

  • The cache for temporaries has been removed due to its inconsistent and intricate interplay with multithreading. However, the new feature of specifying custom allocation strategies can be used to experiment with novel cache-like behaviour in the future.
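
A minimal sketch of the keyword-argument syntax mentioned in the first item above; the keyword names order (manual ordering of the contracted indices) and contractcheck (insert runtime dimension checks) are assumptions taken from the v4 documentation, and the tensors are arbitrary:

using TensorOperations

A = randn(5, 5); B = randn(5, 5); C = randn(5, 5)
# Keyword arguments are passed before the index expression.
@tensor order=(j, k) contractcheck=true D[i, l] := A[i, j] * B[j, k] * C[k, l]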

WARNING: TensorOperations 4.0 contains several breaking changes and cannot generally be expected to be compatible with previous versions.

Code example

TensorOperations.jl is mostly used through the @tensor macro, which allows one to express a given operation in index notation, a.k.a. Einstein notation (using Einstein's summation convention).

using TensorOperations
α = randn()
A = randn(5, 5, 5, 5, 5, 5)
B = randn(5, 5, 5)
C = randn(5, 5, 5)
D = zeros(5, 5, 5)
@tensor begin
    D[a, b, c] = A[a, e, f, c, f, g] * B[g, b, e] + α * C[c, a, b]
    E[a, b, c] := A[a, e, f, c, f, g] * B[g, b, e] + α * C[c, a, b]
end

In the second to last line, the result of the operation will be stored in the preallocated array D, whereas the last line uses a different assignment operator := in order to define and allocate a new array E of the correct size. The contents of D and E will be equal.

For more detailed information, please see the documentation.

Citing

See CITATION.bib for the relevant reference(s).

tensoroperations.jl's People

Contributors

alexmorley, austinprivett, davidhbrann, dependabot[bot], dilumaluthge, fatteneder, femtocleaner[bot], garrison, getzdan, giggleliu, ho-oto, jfeist, jishnub, juliatagbot, jutho, kshyatt, lkdvos, maartenvd, marcusps, masonprotter, mhauru, sglyon, staticfloat, traktofon


tensoroperations.jl's Issues

Support sparse arrays

A = sprand(10,10, 0.1);
B = rand(10,4,3);
@tensor C[x,y,z] = A[x,t] * B[t,y,z]

Does not work:

ERROR: MethodError: no method matching contract!(::Int64, ::SparseMatrixCSC{Float64,Int64}, ::Type{Val{:N}}, ::Array{Float64,3}, ::Type{Val{:N}}, ::Int64, ::Array{Float64,3}, ::Tuple{Int64}, ::Tuple{Int64}, ::Tuple{Int64,Int64}, ::Tuple{Int64}, ::Tuple{Int64,Int64,Int64}, ::Tuple{})

gpu support

Hello, I love the performance and ease of use this package offers; however, I was interested in using TensorOperations in a Flux model, but GPU arrays are currently not supported.
Would it be possible to support CuArrays with TensorOperations?

Many thanks
Jules

Pass Array as contraction order

Hello, Jutho,

I was wondering whether it is possible to pass arrays as the contraction order, something like the example below:
A= rand(10,10)
B = rand(10,10)
x=[-1,1]
y=[1,-2]
z=[-1,-2]
@tensor test[z] := A[x]*B[y]

I have searched over the issues, but could not find any solutions. I hope you can help me. Thanks very much.

Wangwei Lan
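
Recent versions of TensorOperations export an ncon function whose interface is close to what is asked for here: the contraction pattern is passed as arrays of integer labels. A hedged sketch, assuming the usual ncon conventions (positive labels are contracted, negative labels become the open indices of the result, ordered -1, -2, ...):

using TensorOperations

A = rand(10, 10)
B = rand(10, 10)
x = [-1, 1]   # open index -1, contracted index 1
y = [1, -2]   # contracted index 1, open index -2

test = ncon([A, B], [x, y])   # same result as A * B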

tracing a 1×1 tensor throws BoundsError

Hi,
first of all let me thank you for this great package that I find really useful.
I encountered an issue with v1.0.3 that didn't occur in v0.7.1, where trying to trace a 1×1 tensor produces an error.
Minimal example to reproduce:

M = rand(1,1)
@tensor M[] := M[a,a]

this results in the following error:

ERROR: BoundsError: attempt to access ()
  at index [0]
Stacktrace:
 [1] getindex(::Tuple{}, ::Int64) at ./tuple.jl:24
 [2] stride(::Strided.UnsafeStridedView{Float64,0,Float64,typeof(identity)}, ::Int64) at /home/user/.julia/packages/Strided/89nTo/src/unsafestridedview.jl:31
 [3] #72 at /home/user/.julia/packages/Strided/89nTo/src/broadcast.jl:66 [inlined]
 [4] ntuple at ./tuple.jl:156 [inlined]
 [5] promoteshape1 at /home/user/.julia/packages/Strided/89nTo/src/broadcast.jl:64 [inlined]
 [6] promoteshape at /home/user/.julia/packages/Strided/89nTo/src/broadcast.jl:49 [inlined]
 [7] _mapreducedim! at /home/user/.julia/packages/Strided/89nTo/src/mapreduce.jl:74 [inlined]
 [8] _trace!(::Int64, ::Strided.UnsafeStridedView{Float64,2,Float64,typeof(identity)}, ::Bool, 
::Strided.UnsafeStridedView{Float64,0,Float64,typeof(identity)}, ::Tuple{}, ::Tuple{Int64}, ::Tuple{Int64}) at /home/user/.julia/packages/TensorOperations/RSY9n/src/implementation/stridedarray.jl:220
 [9] trace!(::Int64, ::Array{Float64,2}, ::Symbol, ::Bool, ::Array{Float64,0}, ::Tuple{}, ::Tuple{Int64}, ::Tuple{Int64}) at ./gcutils.jl:87
 [10] trace!(::Int64, ::Array{Float64,2}, ::Symbol, ::Bool, ::Array{Float64,0}, ::Tuple{}, ::Tuple{}, ::Tuple{Int64}, ::Tuple{Int64}) at /home/user/.julia/packages/TensorOperations/RSY9n/src/implementation/stridedarray.jl:62
 [11] top-level scope at /home/user/.julia/packages/TensorOperations/RSY9n/src/indexnotation/tensormacro.jl:352

I'm on Julia 1.0.3 .

[PkgEval] TensorOperations may have a testing issue on Julia 0.4 (2014-10-08)

PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.3) and the nightly build of the unstable version (0.4). The results of this script are used to generate a package listing enhanced with testing results.

On Julia 0.4

  • On 2014-10-05 the testing status was Tests pass.
  • On 2014-10-08 the testing status changed to Tests fail, but package loads.

Tests pass. means that PackageEvaluator found the tests for your package, executed them, and they all passed.

Tests fail, but package loads. means that PackageEvaluator found the tests for your package, executed them, and they didn't pass. However, trying to load your package with using worked.

Special message from @IainNZ: This change may be due to breaking changes to Dict in JuliaLang/julia#8521, or the removal of deprecated syntax in JuliaLang/julia#8607.

This issue was filed because your testing status became worse. No additional issues will be filed if your package remains in this state, and no issue will be filed if it improves. If you'd like to opt-out of these status-change messages, reply to this message saying you'd like to and @IainNZ will add an exception. If you'd like to discuss PackageEvaluator.jl please file an issue at the repository. For example, your package may be untestable on the test machine due to a dependency - an exception can be added.

Test log:

>>> 'Pkg.add("TensorOperations")' log
INFO: Installing Cartesian v0.3.0
INFO: Installing TensorOperations v0.3.0
INFO: Package database updated
INFO: METADATA is out-of-date -- you may not have the latest version of TensorOperations
INFO: Use `Pkg.update()` to get the latest versions of your packages

>>> 'using TensorOperations' log
Julia Version 0.4.0-dev+998
Commit e24fac0 (2014-10-07 22:02 UTC)
Platform Info:
  System: Linux (x86_64-unknown-linux-gnu)
  CPU: Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Sandybridge)
  LAPACK: libopenblas
  LIBM: libopenlibm
  LLVM: libLLVM-3.3

>>> test log
ERROR: `Dict{K,V}` has no method matching Dict{K,V}(::UnitRange{Int64}, ::UnitRange{Int64})
 in indexin at array.jl:1151
 in tensorcopy at /home/idunning/pkgtest/.julia/v0.4/TensorOperations/src/tensorcopy.jl:11
 in include at ./boot.jl:245
 in include_from_node1 at ./loading.jl:128
 in include at ./boot.jl:245
 in include_from_node1 at loading.jl:128
 in process_options at ./client.jl:293
 in _start at ./client.jl:362
 in _start_3B_3789 at /home/idunning/julia04/usr/bin/../lib/julia/sys.so
while loading /home/idunning/pkgtest/.julia/v0.4/TensorOperations/test/tests3.jl, in expression starting on line 7
while loading /home/idunning/pkgtest/.julia/v0.4/TensorOperations/test/runtests.jl, in expression starting on line 8

INFO: Testing TensorOperations
==========================[ ERROR: TensorOperations ]===========================

failed process: Process(`/home/idunning/julia04/usr/bin/julia /home/idunning/pkgtest/.julia/v0.4/TensorOperations/test/runtests.jl`, ProcessExited(1)) [1]

================================================================================
INFO: No packages to install, update or remove
ERROR: TensorOperations had test errors
 in error at error.jl:21
 in test at pkg/entry.jl:719
 in anonymous at pkg/dir.jl:28
 in cd at ./file.jl:20
 in cd at pkg/dir.jl:28
 in test at pkg.jl:68
 in process_options at ./client.jl:221
 in _start at ./client.jl:362
 in _start_3B_3789 at /home/idunning/julia04/usr/bin/../lib/julia/sys.so


>>> end of log

How can I perform star contraction?

In Einsum.jl

@einsum n[i,j,k] = hg[i,m]*hg[j,m]*hg[k,m]

defines a star shaped contraction, which will produce a rank 3 tensor.

However, @tensor in TensorOperations.jl raises an error.
How can I define such a contraction correctly?
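
One workaround within plain @tensor usage (not an official star-contraction feature): introduce an explicit three-index "copy" tensor δ with δ[m,m,m] = 1, so that every label appears at most twice. A minimal sketch:

using TensorOperations

hg = randn(4, 5)
d = size(hg, 2)

# Three-index copy tensor: δ[m, p, q] = 1 iff m == p == q.
δ = zeros(d, d, d)
for m in 1:d
    δ[m, m, m] = 1.0
end

# Each label now appears exactly twice, so @tensor accepts the expression:
# n[i, j, k] = Σ_m hg[i, m] * hg[j, m] * hg[k, m]
@tensor n[i, j, k] := hg[i, m] * hg[j, p] * hg[k, q] * δ[m, p, q]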

Best way to tensor reshaping

I often use the @tensor macro and reshape together, e.g.:

@tensor A[a,e,c,g,f] := B[a,b] * C[b,c,d] * D[d,e,f,g]
A = reshape(A, size(A)[1]*size(A)[2]*size(A)[3], size(A)[4]*size(A)[5]) 

I think this code is inelegant and wastes memory. I would like to use a more elegant format like

@tensor A[(a,e,c),(g,f)] := B[a,b] * C[b,c,d] * D[d,e,f,g]

instead.
Or does some good way that I just don't know about already exist?

Version 0.6 support: Problem with @simd macro

When trying to run on 0.6, I get the following error:

julia> A = rand(4,4)
4×4 Array{Float64,2}:
 0.583756  0.913072  0.142562  0.409319 
...

julia> tensortrace(A,[1,1])
WARNING: Array{T}(::Type{T}, m::Int) is deprecated, use Array{T}(m) instead.
...
ERROR: MethodError: no method matching @simd(::LineNumberNode)
Closest candidates are:
  @simd(::LineNumberNode, ::ANY) at simdloop.jl:89
Stacktrace:
 [1] @generated body at /home/dan/.julia/v0.7/TensorOperations/src/implementation/recursive.jl:29 [inlined]
 [2] trace_rec!(::TensorOperations.One, ::TensorOperations.StridedData{1,Float64,:N}, ::TensorOperations.Zero, ::TensorOperations.StridedData{1,Float64,:N}, ::Tuple{Int64}, ::Int64, ::Int64, ::Tuple{Int64}) at /home/dan/.julia/v0.7/TensorOperations/src/implementation/recursive.jl:27
 [3] trace!(::Int64, ::Array{Float64,2}, ::Type{Val{:N}}, ::Int64, ::Array{Float64,0}, ::Array{Int64,1}, ::Array{Int64,1}, ::Array{Int64,1}) at /home/dan/.julia/v0.7/TensorOperations/src/implementation/stridedarray.jl:63
 [4] tensortrace(::Array{Float64,2}, ::Array{Int64,1}, ::Array{Int64,1}) at /home/dan/.julia/v0.7/TensorOperations/src/functions/simple.jl:54
 [5] tensortrace(::Array{Float64,2}, ::Array{Int64,1}) at /home/dan/.julia/v0.7/TensorOperations/src/functions/simple.jl:51

When the @simd code is taken out of _stridedloops in meta.jl, things work again. The culprit seems to be LineNumberNodes which enter the expression.

Solving linear equations involving tensors

If you have a fixed tensor X and fix some indices, then contracting this tensor with another tensor along the given indices is a linear map on tensors (of each fixed size). It would be nice if this package provided a way to solve linear systems of this form. Technically, this should be no more difficult than inverting a matrix -- the request is just for a way to support solving the system within the tensor notation.
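
A hedged sketch of the matrix-based workaround this request alludes to: flatten the fixed tensor into a matrix acting on the vectorized unknown and solve with Julia's left-division operator. The tensor shapes and index conventions below are purely illustrative:

using LinearAlgebra

# X defines the linear map B ↦ Y with Y[a, b] = Σ_{c,d} X[a, b, c, d] * B[c, d].
X = randn(3, 3, 3, 3)
B = randn(3, 3)
M = reshape(X, 9, 9)                    # column-major flattening of the map
Y = reshape(M * vec(B), 3, 3)           # apply the map

B_solved = reshape(M \ vec(Y), 3, 3)    # solve the linear system
B_solved ≈ B                            # true, up to the conditioning of M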

Export LabelList type

Hi,

Can the LabelList type be exported in indexnotation.jl?

I am writing functions that need to take a label list as an input. Currently I have to take a string, and then convert it. If I could accept a LabelList directly the code would be much cleaner.

A way to run a user-defined operation in the iteration

This is my use case: I have a matrix of matrices (actually, in my case they are not really normal matrices but references to remote ones; together they form a block-distributed bigger matrix, but consider this as an example)

z = Any[rand(2, 2) for i=1:2, j=1:2]

@tensor t[i,j] := z[j,i]

This does the transpose of z, but does not transpose each submatrix, which is how one would transpose a block-distributed matrix.

Is it possible to specify something like:

z = Any[rand(2, 2) for i=1:2, j=1:2]

@tensor t[i,j] := transpose(z[j,i])

Ignore this if it is out of scope for this project. I'm willing to work on making this happen if you agree it would be good to allow more generic building-block functions, of which the current add!, trace! and contract! are special cases (assuming it can be done without overhead).

How to use index notation so it compares to numpy.einsum

Thanks for your great effort. I just tried the staged branch (#8) on julia 0.4:

A = randn(2000, 10)
B = randn(2000, 10)
C = randn(2000, 10)

@tensor V[a,b,c] := A[a,k]*B[b,k]*C[c,k]

results in

LoadError: TensorOperations.IndexError{ASCIIString}("invalid contraction pattern: (:a,:b) * (:c,:k) to (:a,:b,:c)")

whereas in python/numpy

A = np.random.randn(200, 10)
B = np.random.randn(200, 10)
C = np.random.randn(200, 10)
V = np.einsum('ak,bk,ck->abc', A, B, C)

works.

I guess I did not fully understand the way TensorOperations works. Since the example above is the very popular PARAFAC/CANDECOMP product, it would make sense to include it in the readme.

How do you do inner product?

I'm casually playing around with this package and was wondering if there was a way to define the inner product.

x=rand(10);
y=rand(10);
@tensor z := x[i]*y[i]

doesn't work. Is this something that is out of scope of this package or simply not implemented yet?
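
A hedged sketch of one way this can be written with recent versions: a full contraction with an empty index list on the left-hand side produces a zero-dimensional array, from which the scalar can be extracted with []. The exact scalar syntax may differ between versions:

using TensorOperations

x = rand(10)
y = rand(10)

@tensor z[] := x[i] * y[i]   # zero-dimensional array holding the result
dot_xy = z[]                 # extract the scalar; equals sum(x .* y)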

Simple way of taking N-fold tensor product of tensor

Hello,

I'd like to have a function that for given n, calculates the n-fold tensor product of some tensor. It seems to me that there is no natural way to do this with the package at the moment (you could of course do a bit of metaprogramming to get the indices right, but that seems complicated to me). Any suggestions?

Thanks for your great work.
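
A hedged sketch of a metaprogramming-free route, assuming the function-based interface tensorproduct(A, IA, B, IB, IC) with integer labels: the n-fold product is built by repeatedly taking the tensor product with fresh labels.

using TensorOperations

function nfold(A::AbstractArray, n::Int)
    r = ndims(A)
    result = A
    for k in 2:n
        IA = collect(1:(k - 1) * r)           # labels of the accumulated factor
        IB = collect((k - 1) * r + 1:k * r)   # fresh labels for the next copy
        result = tensorproduct(result, IA, A, IB, vcat(IA, IB))
    end
    return result
end

T = rand(2, 3)
nfold(T, 3)   # rank-6 tensor of size (2, 3, 2, 3, 2, 3)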

How do you do reducedim?

Julia's reducedim reduces a given dimension to size 1.

for example,

julia> reducedim(*,rand(10,10), 1)
1×10 Array{Float64,2}:
 4.13264  4.86747  4.45633  5.8891  …  4.03374  3.7842  4.86484  5.4814

Is it possible to write a @tensor expression to do this?
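
A product-reduction like the example above is not a multilinear operation, so it falls outside Einstein notation. A sum over a dimension, however, can be expressed as a contraction with a vector of ones; a hedged sketch, not an official reducedim replacement:

using TensorOperations

A = rand(10, 10)
e = ones(size(A, 1))

# r[j] = Σ_i A[i, j], i.e. the first dimension summed out.
@tensor r[j] := e[i] * A[i, j]
r ≈ vec(sum(A, dims=1))   # true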

Warning for conjugation under julia 0.5

The following program contains a tensor contraction that involves complex conjugation of the A tensor by use of the ' operator:

using TensorOperations

function main()
    A = Complex128[i * im for i in 1:4]
    B = Complex128[i * j for i in 1:4, j in 1:3]
    @tensor v[j] := A[i]' * B[i,j]
    @show v (A' * B)
end

main()

When run, this program behaves correctly but outputs the following warning:

WARNING: the no-op `transpose` fallback is deprecated, and no more specific `transpose` method for TensorOperations.IndexedObject{(:i,),:N,Array{Complex{Float64},1},Int64} exists. Consider `permutedims(x, [2, 1])` or writing a specific `transpose(x::TensorOperations.IndexedObject{(:i,),:N,Array{Complex{Float64},1},Int64})` method if appropriate.

Errors in julia v1

The code that worked in julia < v1 now errors with

LoadError: LoadError: MethodError: no method matching 
mapreduce(::getfield(TensorOperations, Symbol("##199#201"))
{Dict{Any,TensorOperations.Power{:χ,Int64}},DataType}, ::typeof(*),
 ::TensorOperations.Power{:χ,Int64}, ::Array{Any,1})

Offending line is

@tensoropt newstate[m,n,mu,nu] = ut[m,n,1,2]*state[1,2,3,4]*utp[3,4,mu,nu]

Change in double indexing behaviour

I was relying on the following behavior in v0.6.0 (02a016b):

julia> using TensorOperations
julia> xs = [[1.0], [2.0]]
julia> ys = [[0.0], [0.0]]
julia> for i in 1:2; @tensor ys[i][a] = xs[i][a]; end
julia> print(ys)
Array{Float64,1}[[1.0], [2.0]]

It seems natural to me that only the final sets of brackets should be special inside @tensor. However, since 9ba9ef7, I get the following:

ERROR: TensorOperations.IndexError{String}("non-matching indices between left and right hand side: (ys[i])[a] = (xs[i])[a]")

Is this a regression?

Way to prevent contraction step?

The code below throws an "invalid contraction pattern" error, because the convention is to sum over j after multiplying. But what if I don't want to do this sum?

julia> using TensorOperations
julia> A = randn(2,2);
julia> B = randn(2,2);
julia> @tensor begin
           C[i,j,k] := A[i,j]*B[j,k]
       end
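
One workaround outside @tensor: the desired C[i,j,k] = A[i,j] * B[j,k] with no sum over j is an element-wise product along the shared index, which plain broadcasting expresses directly. A minimal sketch:

A = randn(2, 2)
B = randn(2, 2)

# C[i, j, k] = A[i, j] * B[j, k], without summation over j.
C = reshape(A, size(A, 1), size(A, 2), 1) .* reshape(B, 1, size(B, 1), size(B, 2))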

`@tensor`: ".=" as assignment

IIUC README.md, at the moment := will create a new tensor, while = will do in-place assignment. This is different from the semantics of = in Julia 0.7/1.0.
Would it be possible to use .= (plus .+=, .-= etc.) for in-place assignment, as in Julia Base, and = (+= etc.) for creating and assigning a new object?

add_native! not implemented

Hello, in the implementation of tensoradd, the function add_native! is called if no permutation is required, but it isn't implemented (or mentioned) anywhere. As far as I can see, this could be replaced with a simple add! to make it work.
(This is only a problem if the function is used directly; if the macro @tensor is called, there's no problem.)

Cheers and thanks for the great work!

Method errors with Flux.jl and Knet.jl

I'm trying to use TensorOperations to make a tensor regression. Basically it's a sequence of (in my case) 3rd-order tensor contractions. But I need to treat some of the tensors as if they were trainable parameters and then use gradient descent to optimize them. I get method errors with both Knet and Flux because these use their own wrapped versions of Arrays.

Here's an example of what I'm trying to do:

A = randn(28^2,100,100)
B = randn(100,100,10)
predict(input) = @tensor C[d,s] := (input[g,s] * A[g,b,c]) * B[b,c,d]
loss(x,y) = mean(abs2,y-predict(x))
lossgradient = grad(loss)

But I get an error with Knet trying to do gradient calculations:

julia> lossgradient(X, Y)
ERROR: MethodError: no method matching numind(::AutoGrad.Rec{Array{FixedPointNumbers.Normed{UInt8,8},2}})

Similar error when I use Flux.

Conditional expressions

I'm working on a tensor differentiation package, and it turns out that the derivative dz/dx of the matrix multiplication z = x * y is a tensor of rank 4 with elements:

  1. dzdx[i, j, m, n] = y[n, j], if i == m.
  2. dzdx[i, j, m, n] = 0, otherwise.

Given that (i == m) may evaluate to 1 or 0, the expressions above can also be simplified to:

dzdx[i, j, m, n] = (i == m) * y[n, j]

Is anything like this supported or planned in TensorOperations.jl? If not, what parts of the package should I look at to add support for such operations - I'll be glad to contribute if I have enough information.
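
One way to express the (i == m) factor within the existing notation, as a hedged sketch: materialize it as an identity matrix and take the outer product with y (assuming the installed version accepts products without contracted indices; the sizes below are illustrative):

using LinearAlgebra, TensorOperations

y = randn(4, 3)
s = 3                      # size of the i/m index, purely illustrative
δ = Matrix(1.0I, s, s)     # δ[i, m] = (i == m)

# dzdx[i, j, m, n] = (i == m) * y[n, j]
@tensor dzdx[i, j, m, n] := δ[i, m] * y[n, j]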

Errors with non-BLAS-type arrays

Hi Jutho, I'm seeing the following issue with 0.7.0 and TensorOps master:

a = [1;2;3]
b = [2 3 4; 5 6 7]
@tensor res[j] := b[j,i] * a[i]

results in

ERROR: MethodError: no method matching iterate(::Type{Val{:BLAS}})
Closest candidates are:
  iterate(::Any) at essentials.jl:853
  iterate(::Any, ::Any) at essentials.jl:847
  iterate(::Core.SimpleVector) at essentials.jl:583
  ...
Stacktrace:
 [1] start(::Type{Val{:BLAS}}) at ./essentials.jl:878
 [2] iterate(::Type) at ./essentials.jl:853
 [3] append_any(::Tuple{Int64}, ::Vararg{Any,N} where N) at ./essentials.jl:420
 [4] contract!(::Int64, ::Array{Int64,2}, ::Type{Val{:N}}, ::Array{Int64,1}, ::Type{Val{:N}}, ::Int64, ::Array{Int64,1}, ::Tuple{Int64}, ::Tuple{Int64}, ::Tuple{}, ::Tuple{Int64}, ::Tuple{Int64}, ::Type, ::Type{Val{:BLAS}}) at /home/amilsted/.julia/packages/TensorOperations/2n9c6/src/implementation/stridedarray.jl:88 (repeats 4 times)
 [5] top-level scope at /home/amilsted/.julia/packages/TensorOperations/2n9c6/src/indexnotation/tensormacro.jl:447

The above works fine with Float arrays, but fails with integers, rationals, etc.

Tensor unfoldings?

Do you plan to support tensor unfoldings in this package?

(Or is there a simple way to implement this using the current functionality of this package...? Excuse my ignorance if so.)
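
Unfoldings are not a TensorOperations primitive, but a mode-n unfolding (matricization) can be written with plain Base functions; a hedged sketch:

# Rows are indexed by mode n, columns by the remaining modes in natural order.
function unfold(A::AbstractArray, n::Int)
    perm = (n, setdiff(1:ndims(A), n)...)
    return reshape(permutedims(A, perm), size(A, n), :)
end

A = rand(2, 3, 4)
unfold(A, 2)   # 3 × 8 matrix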

Garbage Collection Time

Hi, Jutho,

I use TensorOperations a lot, mainly for my iPEPS code and related projects. Recently, while optimizing my code, I found that the garbage collection time for tensor contractions is really high, especially for high-rank and large-dimension tensors. So I was wondering: are there some general ways to reduce GC time within TensorOperations? Thanks very much.

Sincerely
Wangwei Lan
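
One general measure, grounded in the = versus := distinction from the README: inside loops, write results into preallocated arrays with = so that large output tensors are not reallocated on every iteration. A hedged sketch (internal temporaries of a contraction are still allocated by the package itself):

using TensorOperations

A = randn(50, 50, 50)
B = randn(50, 50)
C = zeros(50, 50, 50)    # preallocated output, reused across iterations

for _ in 1:100
    # `=` writes into the existing C; `:=` would allocate a new array each time.
    @tensor C[a, b, c] = A[a, k, c] * B[k, b]
end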

Question: How are brackets handled in @tensoropt?

Hi Jutho! I was wondering how brackets are treated by the @tensoropt macro. I assume explicit brackets do not affect the contraction order in case of a single tensor network, but what happens if I have a sum of tensor networks in a single expression? Is the ordering of sums and multiplications also chosen optimally?

Add reference for optimal tensor contraction algorithm

I'm interested in the optimization algorithm and would like to see the reference if it is published. Also, is the optimized contraction order globally optimal? I see in the README:

Furthermore, there is a @tensoropt macro which will optimize the contraction order to minimize the total number of multiplications (cost model might change or become choosable in the future). The optimal contraction order will be determined at compile time and will be hard coded in the macro expansion. The cost/size of the different indices can be specified in various ways, and can be integers or some arbitrary polynomial of an abstract variable, e.g. χ. In the latter case, the optimization assumes the asymptotic limit of large χ.

@tensoropt D[a,b,c,d] := A[a,e,c,f]*B[g,d,e]*C[g,f,b]
# cost χ for all indices (a,b,c,d,e,f)
@tensoropt (a,b,c,e) D[a,b,c,d] := A[a,e,c,f]*B[g,d,e]*C[g,f,b]
# cost χ for indices a,b,c,e, other indices (d,f) have cost 1
@tensoropt !(a,b,c,e) D[a,b,c,d] := A[a,e,c,f]*B[g,d,e]*C[g,f,b]
# cost 1 for indices a,b,c,e, other indices (d,f) have cost χ
@tensoropt (a=>χ,b=>χ^2,c=>2*χ,e=>5) D[a,b,c,d] := A[a,e,c,f]*B[g,d,e]*C[g,f,b]
# cost as specified for listed indices, unlisted indices have cost 1 (any symbol for χ can be used)

and did not find any reference in the source code, either.

How to cite TensorOperations.jl

Hi, Jutho,

I mainly use TensorOperations in my projects, and I was wondering how I can cite it in a paper. Is there a format for doing so? Thanks very much.

Best regards
Wangwei Lan

Helping out

This is not the most useful way to say this, but I am interested in trying to help out this package.
I am looking to help out with documentation (so I can build up some skills.)
Any chance you could give useful pointers?

iterate(::Nothing) on Julia v0.7-beta2

Hello, I just copy/pasted the example from the introduction page and got the following error:

julia> @tensor begin
           D[a,b,c] = A[a,e,f,c,f,g]*B[g,b,e] + α*C[c,a,b]
           E[a,b,c] := A[a,e,f,c,f,g]*B[g,b,e] + α*C[c,a,b]
       end
┌ Warning: `equalto(x)` is deprecated, use `isequal(x)` instead.
│   caller = unique2(::NTuple{6,Symbol}) at unique2.jl:15
└ @ TensorOperations ~/.julia/packages/TensorOperations/q7ls/src/auxiliary/unique2.jl:15
ERROR: MethodError: no method matching iterate(::Nothing)
Closest candidates are:
  iterate(::Any) at essentials.jl:821
  iterate(::Any, ::Any) at essentials.jl:816
  iterate(::Core.SimpleVector) at essentials.jl:552
  ...
Stacktrace:
 [1] start(::Nothing) at ./essentials.jl:843
 [2] iterate at ./essentials.jl:821 [inlined]
 [3] _deleteat!(::Array{Symbol,1}, ::Nothing) at ./array.jl:1194
 [4] deleteat!(::Array{Symbol,1}, ::Nothing) at ./array.jl:1189
 [5] unique2(::NTuple{6,Symbol}) at /Users/spc/.julia/packages/TensorOperations/q7ls/src/auxiliary/unique2.jl:21
 [6] #s135#161 at /Users/spc/.julia/packages/TensorOperations/q7ls/src/indexnotation/indexedobject.jl:37 [inlined]
 [7] #s135#161(::Any, ::Any, ::Any, ::Any, ::Any, ::Any) at ./none:0
 [8] (::Core.GeneratedFunctionStub)(::Any, ::Vararg{Any,N} where N) at ./boot.jl:506
 [9] #s135#171 at /Users/spc/.julia/packages/TensorOperations/q7ls/src/indexnotation/product.jl:27 [inlined]
 [10] #s135#171(::Any, ::Any, ::Any) at ./none:0
 [11] (::Core.GeneratedFunctionStub)(::Any, ::Vararg{Any,N} where N) at ./boot.jl:506
 [12] top-level scope at REPL[9]:2

Does anyone else experience the same issue? I am looking forward to using this package on the new version of Julia. Thank you!

Introducing `.=` syntax in `@tensor` blocks

Following JuliaLang/julia#23014 (comment) and related discussion, I wonder if it might be appropriate to deprecate the use of = within @tensor blocks and instead introduce and encourage the use of .=. Regardless of the outcome of the discussion in base julia, I believe this would be a more natural use of the operators, as it would certainly help me remember which syntax allocates and which does not. After some transition period, = could be removed and then perhaps later re-introduced for the allocating version (but this could all be decided at a later time).

[PackageEvaluator.jl] Your package TensorOperations may have a testing issue.

This issue is being filed by a script, but if you reply, I will see it.

PackageEvaluator.jl is a script that runs nightly. It attempts to load all Julia packages and run their tests (if available) on both the stable version of Julia (0.2) and the nightly build of the unstable version (0.3).

The results of this script are used to generate a package listing enhanced with testing results.

The status of this package, TensorOperations, on...

  • Julia 0.2 is 'Package doesn't load.' PackageEvaluator.jl
  • Julia 0.3 is 'Tests pass.' PackageEvaluator.jl

'No tests, but package loads.' can be due to there being no tests (you should write some if you can!) but can also be due to PackageEvaluator not being able to find your tests. Consider adding a test/runtests.jl file.

'Package doesn't load.' is the worst-case scenario. Sometimes this arises because your package doesn't have BinDeps support, or needs something that can't be installed with BinDeps. If this is the case for your package, please file an issue and an exception can be made so your package will not be tested.

This automatically filed issue is a one-off message. Starting soon, issues will only be filed when the testing status of your package changes in a negative direction (gets worse). If you'd like to opt-out of these status-change messages, reply to this message.

einsum

Would this be a good place to implement something like this?

Potential performance improvement?

Hi, I've been playing with TensorOperations and it seems pretty nice. I've noticed it's a bit slower (3-4x, Julia Version 0.4.0-dev+3052) than manual loops in some situations, probably because it allocates more memory. Maybe there's some potential to improve performance there:

http://pastebin.com/HkKU9fCp

@tensor allocates 3x+ more memory than numpy.einsum

I did some benchmarks for a classical tensor network task. Julia is indeed faster than numpy, but the memory allocation is much larger than in numpy's implementation. I wonder if I did something wrong, or is this because of the implementation?

Python
# einsum.py
import numpy as np

def propagate(LHS, X, Y):
    P = np.einsum('ijk, ipq', LHS, X)
    Q = np.einsum('jkqp, jvrq', P, Y)
    R = np.einsum('kprv, kvm', Q, X)
    return R
In [1]: import numpy as np

In [2]: LHS = np.random.randn(200, 10, 200)
   ...: X = np.random.randn(200, 2, 200)
   ...: Y = np.random.randn(10, 2, 10, 2)

In [3]: tr = tracker.SummaryTracker()

In [4]: %timeit propagate(LHS, X, Y)
314 ms ± 1.57 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [5]: tr.print_diff()
                                                                    types |   # objects |   total size
========================================================================= | =========== | ============
                                                             <class 'list |        7197 |    679.08 KB
                                                              <class 'str |        7192 |    516.33 KB
                                                              <class 'int |        1462 |     39.98 KB
                                                             <class 'dict |           4 |      2.31 KB
  <class 'prompt_toolkit.terminal.vt100_input._IsPrefixOfLongerMatchCache |           0 |      1.50 KB
                                                <class 'method_descriptor |          12 |    864     B
                                <class 'traitlets.traitlets.MetaHasTraits |           0 |    768     B
                                                            <class 'tuple |          10 |    696     B
                       <class 'prompt_toolkit.document._ImmutableLineList |           6 |    672     B
                                                 <class 'weakref.KeyedRef |           6 |    528     B
                                                <class 'collections.deque |           0 |    528     B
                                                 <class '_sre.SRE_Pattern |           3 |    512     B
                           <class 'prompt_toolkit.document._DocumentCache |           6 |    336     B
                                                          <class 'weakref |           4 |    320     B
                                                             <class 'type |           0 |    288     B



Julia
using TensorOperations
using BenchmarkTools

function propagate(LHS, X, Y)
    @tensor R[6,7,8] := LHS[1,2,3]*X[1,4,6]*Y[2,5,7,4]*X[3,5,8]
end

LHS = randn(200, 10, 200); X = randn(200, 2, 200); Y = randn(10, 2, 10, 2);
@benchmark propagate($LHS, $X, $Y)
BenchmarkTools.Trial: 
  memory estimate:  27.48 MiB
  allocs estimate:  115
  --------------
  minimum time:     25.434 ms (1.92% GC)
  median time:      26.101 ms (1.87% GC)
  mean time:        26.656 ms (3.60% GC)
  maximum time:     100.702 ms (74.69% GC)
  --------------
  samples:          188
  evals/sample:     1

Multi-dimensional array contraction code not working

Hi Jutho,

The following code used to work but now seems not to work in the latest version. So, I was wondering whether there is an easier way to solve the problem. Please see the code below. Thanks very much.

C = Array{Array{Float64}}(undef,2,2);
C[1,1] = rand(10,10);
C[1,2] = rand(10,10);
C[2,1] = rand(10,10);
C[2,2] = rand(10,10);
@tensor temp[:] := C[1,1][-1,1]*C[2,1][-2,1]

Best regards
Wangwei Lan

Contraction not working between diagonal matrices

Hello,

I've been enjoying using this great package, but I came across the following behavior.
I cannot multiply two diagonal matrices. A minimal example is as follows. My julia version is 1.1.

using TensorOperations
using LinearAlgebra

A = Diagonal(rand(10))
B = Diagonal(rand(10))
C = rand(10,10)

@tensor C[i, j] = A[i, k] * B[k, j]

This produces the following error

ERROR: MethodError: TensorOperations.contract!(::Int64, ::Diagonal{Float64,Array{Float64,1}}, ::Symbol, ::Diagonal{Float64,Array{Float64,1}}, ::Symbol, ::Bool, ::Array{Float64,2}, ::Tuple{Int64}, ::Tuple{Int64}, ::Tuple{Int64}, ::Tuple{Int64}, ::Tuple{Int64,Int64}, ::Tuple{Symbol,Symbol,Symbol}) is ambiguous. Candidates:
  contract!(α, A::Diagonal, CA::Symbol, B::AbstractArray, CB::Symbol, β,
C::AbstractArray, oindA::Tuple{Vararg{Int64,N}} where N, cindA::Tuple{Vararg{Int64,N}} where N, oindB::Tuple{Vararg{Int64,N}} where N, cindB::Tuple{Vararg{Int64,N}} where N, indCinoAB::Tuple{Vararg{Int64,N}} where N, syms::Union{Nothing, Tuple{Symbol,Symbol,Symbol}}) in TensorOperations at C:\Users\tsoej\.julia\packages\TensorOperations\QzAsU\src\implementation\diagonal.jl:194
  contract!(α, A::AbstractArray, CA::Symbol, B::Diagonal, CB::Symbol, β,
C::AbstractArray, oindA::Tuple{Vararg{Int64,N}} where N, cindA::Tuple{Vararg{Int64,N}} where N, oindB::Tuple{Vararg{Int64,N}} where N, cindB::Tuple{Vararg{Int64,N}} where N, indCinoAB::Tuple{Vararg{Int64,N}} where N, syms::Union{Nothing, Tuple{Symbol,Symbol,Symbol}}) in TensorOperations at C:\Users\tsoej\.julia\packages\TensorOperations\QzAsU\src\implementation\diagonal.jl:63
Possible fix, define
  contract!(::Any, ::Diagonal, ::Symbol, ::Diagonal, ::Symbol, ::Any, ::AbstractArray, ::Tuple{Vararg{Int64,N}} where N, ::Tuple{Vararg{Int64,N}}
where N, ::Tuple{Vararg{Int64,N}} where N, ::Tuple{Vararg{Int64,N}} where N, ::Tuple{Vararg{Int64,N}} where N, ::Union{Nothing, Tuple{Symbol,Symbol,Symbol}})

Is this an expected behavior or can it be fixed?

Non-associativity of (some?) contractions

I have written the same contraction of tensors in various ways (by explicitly associating factors into other variables, or by using parentheses), but some compile and execute without a problem (the ones based on result1 through result4), while others throw an exception (result5 and result6).

using TensorOperations

plus = 1/sqrt(2)*[1; 1];
CNOT = [1 0 0 0; 0 1 0 0; 0 0 0 1; 0 0 1 0];
cnot = reshape(CNOT,(2,2,2,2));

@tensor result1[a1,ai1,b1] := cnot[a1,b1,ai1,b_] * plus[b_];
@tensor result2[a1,a2,ai1,ai2,b2] := cnot[a2,b2,ai2,b1] * result1[a1,ai1,b1];
@tensor result3[a1,a2,ai1,ai2,b2] := cnot[a2,b2,ai2,b1] * (cnot[a1,b1,ai1,b_] * plus[b_]);
@tensor result4[a1,a2,ai1,ai2,b2] := (cnot[a2,b2,ai2,b1] * plus[b_]) * cnot[a1,b1,ai1,b_] ;
@tensor result5[a1,a2,ai1,ai2,b2] := (cnot[a2,b2,ai2,b1] * cnot[a1,b1,ai1,b_]) * plus[b_];
@tensor result6[a1,a2,ai1,ai2,b2] := (cnot[a1,b1,ai1,b_] * cnot[a2,b2,ai2,b1]) * plus[b_];

The resulting error message is

LoadError: MethodError: `contract!` has no method matching contract!(::Int64, ::Array{Int64,4}, ::Type{Val{:N}}, ::Array{Int64,4}, ::Type{Val{:N}}, ::Int64, ::Array{Int64,6}, ::Array{Int64,1}, ::Array{Int64,1}, ::Array{Int64,1}, ::Array{Int64,1}, ::Array{Int64,1}, ::Val{:native})
Closest candidates are:
  contract!{CA,CB}(::Any, ::Union{DenseArray{T,N},SubArray{T,N,A<:DenseArray{T,N},I<:Tuple{Vararg{Union{Colon,Int64,Range{Int64}}}},LD}}, ::Type{Val{CA}}, ::Union{DenseArray{T,N},SubArray{T,N,A<:DenseArray{T,N},I<:Tuple{Vararg{Union{Colon,Int64,Range{Int64}}}},LD}}, ::Type{Val{CB}}, ::Any, ::Union{DenseArray{T,N},SubArray{T,N,A<:DenseArray{T,N},I<:Tuple{Vararg{Union{Colon,Int64,Range{Int64}}}},LD}}, ::Any, ::Any, ::Any, ::Any, ::Any)
  contract!{CA,CB}(::Any, ::Union{DenseArray{T,N},SubArray{T,N,A<:DenseArray{T,N},I<:Tuple{Vararg{Union{Colon,Int64,Range{Int64}}}},LD}}, ::Type{Val{CA}}, ::Union{DenseArray{T,N},SubArray{T,N,A<:DenseArray{T,N},I<:Tuple{Vararg{Union{Colon,Int64,Range{Int64}}}},LD}}, ::Type{Val{CB}}, ::Any, ::Union{DenseArray{T,N},SubArray{T,N,A<:DenseArray{T,N},I<:Tuple{Vararg{Union{Colon,Int64,Range{Int64}}}},LD}}, ::Any, ::Any, ::Any, ::Any, ::Any, !Matched::Type{Val{:native}})
  contract!{CA,CB,TC<:Union{Complex{Float32},Complex{Float64},Float32,Float64}}(::Any, ::Union{DenseArray{T,N},SubArray{T,N,A<:DenseArray{T,N},I<:Tuple{Vararg{Union{Colon,Int64,Range{Int64}}}},LD}}, ::Type{Val{CA}}, ::Union{DenseArray{T,N},SubArray{T,N,A<:DenseArray{T,N},I<:Tuple{Vararg{Union{Colon,Int64,Range{Int64}}}},LD}}, ::Type{Val{CB}}, ::Any, !Matched::Union{DenseArray{TC<:Union{Complex{Float32},Complex{Float64},Float32,Float64},N},SubArray{TC<:Union{Complex{Float32},Complex{Float64},Float32,Float64},N,A<:DenseArray{T,N},I<:Tuple{Vararg{Union{Colon,Int64,Range{Int64}}}},LD}}, ::Any, ::Any, ::Any, ::Any, ::Any)
  ...
while loading In[10], in expression starting on line 9

 in contract! at /home/msilva/.julia/v0.4/TensorOperations/src/implementation/stridedarray.jl:183
 in deindexify! at /home/msilva/.julia/v0.4/TensorOperations/src/indexnotation/product.jl:71
 in deindexify at /home/msilva/.julia/v0.4/TensorOperations/src/indexnotation/product.jl:53
 in * at operators.jl:103

which is cryptic enough that I can't tell if this is a bug or just a limitation of TensorOperations. I have other code with much longer contractions (with many different terms) in a single @tensor call, so I hazard to guess this may be a bug or some subtlety I am blind to at the moment.

The performance of optimaltree

Consider the problem of finding the optimal contraction ordering of a Fullerene-structured tensor network with 60 nodes and 90 edges; its contraction complexity is only approximately 10. Using the optimaltree function, I did not find the optimal contraction ordering within several minutes.

@Jutho
Do you think setting a contraction complexity upper bound would help truncate the search space?

maxcost = addcost(mulcost(reduce(mulcost, allcosts; init=one(costtype)), maximum(allcosts)), zero(costtype)) # add zero for type stability: Power -> Poly

Network data

(1, 2, 3), (4, 5, 6), (7, 8, 9), (10, 11, 12), (13, 14, 15), (7, 16, 17), (18, 19, 20), (10, 21, 22), (23, 24, 25), 
(1, 16, 26), (27, 28, 29), (4, 21, 30), (13, 31, 32), (18, 31, 33), (26, 34, 35), (30, 36, 37), (38, 39, 40),
 (38, 41, 42), (39, 43, 44), (41, 45, 46), (47, 48, 49), (50, 51, 52), (23, 43, 53), (27, 45, 53), (32, 54, 55), 
(33, 56, 57), (34, 58, 59), (36, 60, 61), (62, 63, 64), (65, 66, 67), (17, 47, 68), (22, 50, 69), (48, 62, 70), 
(51, 63, 71), (54, 72, 73), (56, 72, 74), (73, 75, 76), (74, 77, 78), (75, 79, 80), (77, 79, 81), (2, 44, 82), 
(5, 46, 83), (8, 55, 84), (11, 57, 85), (70, 86, 87), (71, 86, 88), (68, 82, 89), (69, 83, 90), (35, 76, 84),
 (37, 78, 85), (58, 65, 80), (60, 66, 81), (24, 28, 67), (40, 87, 89), (42, 88, 90), (14, 19, 64), (9, 15, 49),
 (12, 20, 52), (3, 25, 59), (6, 29, 61)

StaticArrays

This must have been asked before but I couldn’t find a reference: the package doesn’t seem to support StaticArrays, is there a plan / possibility to do so?

Contractions not working when mixing dense and sparse arrays

It seems like contractions are not defined between dense and sparse arrays. I imagined that both would work since I thought they are StridedArrays?

A related question: the reason I found out about this is that I am doing a calculation where I need to contract higher-rank tensors with diagonal matrices. Since the dimension is large, I would not want to represent the whole matrix. Keeping with the Einstein convention while still being efficient with RAM is the issue for me. I tried implementing the functions in stridedarray.jl and will keep trying to do that. Is there a simpler solution that you know of?
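
For the diagonal-matrix part of the question, a hedged memory-saving sketch outside @tensor: contracting an index with Diagonal(d) only rescales the tensor along that index, which broadcasting expresses without ever materializing the full matrix:

T = rand(20, 30, 40)
d = rand(40)                  # diagonal entries of the 40 × 40 diagonal matrix

# S[i, j, k] = Σ_l T[i, j, l] * D[l, k] with D = Diagonal(d), i.e. T[i, j, k] * d[k]
S = T .* reshape(d, 1, 1, :)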

weird UndefRefError when trying to transpose a matrix of strings

julia> mx = ["$i-$j" for i= 1:10, j=1:10];

julia> @tensor t[i,j] := mx[j,i]
ERROR: UndefRefError: access to undefined reference
 [inlined code] from simdloop.jl:73
 in add_micro! at /home/shashi/.julia/v0.4/TensorOperations/src/implementation/kernels.jl:6
 in add_rec! at /home/shashi/.julia/v0.4/TensorOperations/src/implementation/recursive.jl:13
 in add! at /home/shashi/.julia/v0.4/TensorOperations/src/implementation/stridedarray.jl:25
 in deindexify! at /home/shashi/.julia/v0.4/TensorOperations/src/indexnotation/indexedobject.jl:57
 in deindexify at /home/shashi/.julia/v0.4/TensorOperations/src/indexnotation/indexedobject.jl:47

But

julia> mx = Any["$i-$j" for i= 1:10, j=1:10];

       @tensor t[i,j] := mx[j,i]

10x10 Array{Any,2}:
    ...

Works!

It's okay if it's a finicky detail that you don't plan on fixing.

folder name aux doesn't work on Windows

Hello

For historical reasons, Windows does not permit creating a folder named aux.

For this reason, Pkg.add("TensorOperations") gives an error on a Windows machine.

So I downloaded TensorOperations.jl-master.zip directly from github.com, manually created an aux1 folder in the C:\Users....julia\v0.4\TensorOperations\src folder, and copy-pasted the files from the .zip there. In the TensorOperations.jl file I changed

include("aux1/axpby.jl") (aux1 and not aux)
etc.

and thanks a lot for making this great piece of software publicly available
