Comments (2)
There is some speed advantage to using @tensor, but it is not that substantial. It arises specifically from the fact that @tensor only works if the contraction is known exactly at compile time, so the individual steps can be hardcoded. That is what the macro does: it rewrites the code at compile time.
If the contraction requires dynamic information that is not known at compile time, you cannot use @tensor for it.
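For contrast, a minimal sketch of the static case (the arrays A, B and the labels i, j, k are illustrative): because the index pattern is literal in the source code, the macro can plan the contraction during macro expansion.

using TensorOperations

A = rand(5, 5)
B = rand(5, 5)
# Both the labels and the network structure are fixed in the code,
# so @tensor can hardcode the contraction steps at compile time:
@tensor C[i, k] := A[i, j] * B[j, k]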
from tensoroperations.jl.
If you really need the speed and you don't have a lot of different contractions, you can call the macro from a generated function. The generated function can dispatch dynamically on the contraction order by receiving it as a Val type.
Note that generated functions have to compile a new method for every distinct argument type they are called with. Each contraction with a different signature therefore has to be compiled separately, so using this approach with rapidly changing contraction orders will introduce a lot of compilation overhead.
Here is an example, handing each operator's index labels to the function as a NamedTuple of Tuples of Ints (NCON convention: repeated positive labels are contracted, negative labels are open):
using TensorOperations

@generated function contract_operator!(S, A, B, op, order::Val{K}) where {K}
    # K is a NamedTuple of index tuples available at compile time,
    # so the labels can be spliced into a static @tensor expression:
    leftside = Expr(:call, :*,
                    :(A[$(K.a...)]),
                    :(B[$(K.b...)]),
                    :(op[$(K.op...)]))
    # S[:] follows the NCON convention: the open (negative) labels,
    # sorted, determine the output index order.
    return :(@tensor S[:] = $leftside)
end

S = rand(5, 5, 5, 2, 5, 5, 5, 2)
a = rand(5, 5, 5, 5, 2)
b = rand(5, 5, 5, 5, 2)
op = rand(2, 2, 2, 2)

# Wrapping the index tuples in a Val lifts them into the type domain,
# so the generated function sees them as compile-time constants:
contraction_order = Val((a = (1, -1, -2, -3, 2),
                         b = (-5, -6, 1, -7, 3),
                         op = (2, 3, -4, -8)))

contract_operator!(S, a, b, op, contraction_order);
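If the contraction pattern changes too often for this trick to pay off, TensorOperations also provides the runtime ncon function, which accepts the index lists as ordinary values. A sketch for the same network, trading compile-time specialization for runtime dispatch:

using TensorOperations

# The index lists are plain runtime data here, so changing the
# contraction pattern does not trigger any recompilation:
S2 = ncon([a, b, op],
          [[1, -1, -2, -3, 2], [-5, -6, 1, -7, 3], [2, 3, -4, -8]])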
from tensoroperations.jl.
Related Issues (20)
- possible memory leak with metaprogramming
- Why drop caching Tensors?
- Is TensorOperations able to take advantage of symmetry in the output?
- Manual allocation strategy
- Floating Point Accuracy of @tensor results with CUDA
- Enable multithreads when doing the permutedims in the TTGT algorithms
- Unexpected `DimensionMismatch` (v4.0.2 -> v4.0.3)
- Wrong result with subnetworks with equal labels
- Bug in CUDA backend
- Unintuitive `ncon` result when scalar
- Taking gradients of traces
- np.einsum_path vs TensorOperations
- `ncon` fails with AD
- `tensortrace` not working on Arrays of Symbolic Expressions from Symbolics.jl
- Combining LinearAlgebra.Diagonal with a CuArray inside @tensor
- Compatibility with CUDA 5.2
- Confusion when using cuTENSOR
- cuTENSOR not working with automatic differentiation
- Freed reference problem when combining cuTENSOR and Zygote
- TensorOperationscuTENSORExt fails to compile