By the way, I suspect (although I have not checked) that using the proposed `environments()` has the same asymptotic computational time scaling as using `tf.gradients()`. Hence it might only be advantageous for backends without autodiff functionality. On the other hand, `environments()` might be easier to fit into any upcoming automatic multi-device support?
from tensornetwork.
I think `remove_node()` is a great idea. It would reduce the length of MERA code (and others) a lot. It would be good to also have the automated optimal contractor then. @Thenerdstation, what is the status on that? I saw the stochastic contractor. Is someone working on a deterministic optimal contractor?
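(As a point of reference, and not part of tensornetwork itself: numpy already ships a deterministic, exhaustive optimal-path search for einsum-style contractions, which gives a feel for what such a contractor does.)

```python
import numpy as np

# Three matrices chained together; the contraction order matters for cost.
a = np.random.rand(5, 200)
b = np.random.rand(200, 4)
c = np.random.rand(4, 300)

# optimize='optimal' deterministically searches all pairwise contraction
# orders and returns the cheapest one.
path, info = np.einsum_path('ij,jk,kl->il', a, b, c, optimize='optimal')
# path is 'einsum_path' followed by the operand pairs to contract at each
# step; here contracting a with b first is cheapest (5*200*4 = 4000 ops
# versus 200*4*300 = 240000 for contracting b with c first).
```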
Adam is working on it, but he is a 20%er. I've asked him to add a branch of his work so far, so hopefully that'll be added soon.
Remove node should be easy enough to add. Though I think we'll use `disconnect` to break the edges rather than modify the edges in place. This is consistent with the rest of the code base.
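For illustration, here is a toy model of that `disconnect` semantics (these classes are stand-ins for illustration, not the actual tensornetwork API): the shared edge is replaced by two fresh dangling edges rather than being mutated, so references held elsewhere stay consistent.

```python
# Toy stand-ins for tensornetwork's Edge objects; illustrative only.
class Edge:
    def __init__(self, node1, axis1, node2=None, axis2=None):
        self.node1, self.axis1 = node1, axis1
        self.node2, self.axis2 = node2, axis2

    def is_dangling(self):
        return self.node2 is None


def disconnect(edge):
    """Replace a connected edge with two new dangling edges.

    The original Edge object is left untouched, which is what keeps
    previously held references valid.
    """
    if edge.is_dangling():
        raise ValueError("Cannot disconnect a dangling edge.")
    e1 = Edge(edge.node1, edge.axis1)
    e2 = Edge(edge.node2, edge.axis2)
    return e1, e2
```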
@Thenerdstation The only trouble I see with replacing the connected edges is that it makes it tricky to keep track of them across the `remove_node()` call. In particular, I would want to know which of the new dangling edges was connected to which axis of the removed node. Ideally, I would not have to remember the axis ordering of the removed node for this to work - the axis names would be enough.
Could you give a tiny example of what your ideal workflow would look like? I still don't quite see how modifying the edges in place is more beneficial than just replacing them.
Something like:

```python
net = TensorNetwork()
n1 = net.add_node(t1, axis_names=['a', 'b', 'c'])
n2 = net.add_node(t2, ...)
...
# (connect edges so that there are no danglings)
...
output_edges = [n1['a'], n1['b'], n1['c']]
net.remove_node(n1)
net.contract_all_naively()  # or whatever :)
env = net.get_final_node()
env.reorder_edges(output_edges)  # want my output_edges to still be part of the network
# I might also want to contract the environment with another node.
# The following replaces the removed node to reproduce the result
# of contracting the original network.
n1 = net.add_node(t1, axis_names=['a', 'b', 'c'])
net.connect(n1['a'], output_edges[0])
net.connect(n1['b'], output_edges[1])
net.connect(n1['c'], output_edges[2])
net.contract_between(n1, env)
```

If the edges are replaced, one could alternatively have something like `output_edges = net.remove_node(n1, output_edge_axes=['a', 'b', 'c'])`, I suppose.
Oh, I see what you mean now. You will need the broken edges for reshaping and possibly to connect them to a different node.
Let me think about this some more. I wanna keep the API as clean and intuitive as possible.
Alright, this is my compromise: `remove_node()` will return a `Dict[Union[int, Text], Edge]` which maps the index/axis name to the newly broken edge. That way, we can use `disconnect` and keep our current edge pattern consistent, and you can continue to use the edge mapping that node had before.
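In other words, usage would look roughly like the following (a hypothetical sketch of just the proposed return value, with stand-in classes, not the actual implementation):

```python
from typing import Dict, Union

Text = str

# Hypothetical stand-in for tensornetwork's Edge; illustrative only.
class Edge:
    def __init__(self, name):
        self.name = name


def remove_node_sketch(axis_names) -> Dict[Union[int, Text], Edge]:
    """Mimic the proposed remove_node() return value.

    Each axis of the removed node maps, by both index and axis name,
    to its newly broken dangling edge.
    """
    mapping = {}
    for i, name in enumerate(axis_names):
        edge = Edge(name)
        mapping[i] = edge      # lookup by axis index
        mapping[name] = edge   # lookup by axis name
    return mapping


broken_edges = remove_node_sketch(['a', 'b', 'c'])
# The same edge is reachable by index or by axis name, so there is no
# need to remember the removed node's axis ordering:
assert broken_edges['a'] is broken_edges[0]
```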
@amilsted Please comment on the PR if you have any concerns about the design choice.
@mganahl You too.