jcmgray / cotengra
176 stars · 5 watching · 32 forks · 10.72 MB

Hyper optimized contraction trees for large tensor networks and einsums

Home Page: https://cotengra.readthedocs.io

License: Apache License 2.0

Python 96.12% Rust 3.88%
tensor-networks tensor-contraction tensor-network quimb opt-einsum einsum tensor

cotengra's Introduction

cotengra


cotengra is a Python library for contracting tensor networks or einsum expressions involving large numbers of tensors - the main docs can be found at cotengra.readthedocs.io. Some of the key features of cotengra include (a short usage sketch follows the list):

  • drop-in einsum replacement
  • an explicit contraction tree object that can be flexibly built, modified and visualized
  • a 'hyper optimizer' that samples trees while tuning the generating meta-parameters
  • dynamic slicing for massive memory savings and parallelism
  • support for hyper edge tensor networks and thus arbitrary einsum equations
  • paths that can be supplied to numpy.einsum, opt_einsum, and quimb, among others
  • performing contractions with tensors from many libraries via cotengra, even if they don't provide einsum or tensordot but do have (batch) matrix multiplication
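
For example, a minimal sketch of basic usage via the opt_einsum interface (the random equation here is purely illustrative):

import numpy as np
import opt_einsum as oe
import cotengra as ctg

# build a random einsum equation and matching arrays, for illustration only
eq, shapes = oe.helpers.rand_equation(30, 3, seed=42)
arrays = [np.random.rand(*s) for s in shapes]

# a cotengra hyper optimizer can be passed as a drop-in `optimize` argument
opt = ctg.HyperOptimizer(max_repeats=32)
out = oe.contract(eq, *arrays, optimize=opt)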


cotengra's People

Contributors

emprice, jcmgray, tabasavr, z-y00


cotengra's Issues

Warning for cotengra

Hi everyone,
I get the following warning after running the following code:
import tqdm

ZZ = qu.pauli('Z') & qu.pauli('Z')

local_exp_rehs = [
    circ_ex.local_expectation_rehearse(weight * ZZ, edge, optimize=opt)
    for edge, weight in tqdm.tqdm(list(terms.items()))
]

/Users/myenv_quimb/lib/python3.9/site-packages/cotengra/hyper_optuna.py:21: FutureWarning: suggest_loguniform has been deprecated in v3.0.0. This feature will be removed in v6.0.0. See https://github.com/optuna/optuna/releases/tag/v3.0.0. Use :func:~optuna.trial.Trial.suggest_float instead.
return lambda trial: trial.suggest_loguniform(

Could anyone help me resolve it? I have installed the latest version of cotengra and my Python version is 3.9.14. Thanks

'str' object has no attribute 'difference', when running Quantum Circuit Example.ipynb

What is your issue?

Apart from running the contents of the notebook exactly as given, I have run the following lines of code:

!pip install -U git+https://github.com/jcmgray/cotengra.git

!pip install -r requirements.txt

requirements.txt contains:
numpy
scipy
numba
cytoolz
tqdm
psutil
opt_einsum
autoray
matplotlib
networkx
slepc4py
slepc
petsc4py
petsc
mpi4py

!pip install quimb
!pip install python-igraph
!pip install kahypar

I encountered the above error on the line info = tn.contract(all, optimize=opt, get='path-info'), which traces back to G.add_vertex(str(i), inds=term.difference(output), weight=nweight) in path_igraph.py.
Could you please help me resolve it?

Can not set HyperOptimizer.parallel to False

Currently, I am using cotengra to contract a tensor network. Because of the high memory cost of the HyperOptimizer search, I set parallel=False, but I find that this did not turn off the parallel processes. Can anyone help me with this problem?
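
For reference, this is roughly how I set things up (simplified sketch; whether passing the option at construction time should behave differently from assigning the attribute afterwards is exactly what I am unsure about):

import cotengra as ctg

# request a purely sequential search by passing parallel=False up front
opt = ctg.HyperOptimizer(parallel=False)
tree = opt.search(inputs, output, size_dict)  # inputs/output/size_dict from my network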

parsing files into tensor networks

I was wondering if there exists functionality in cotengra or quimb or opt_einsum to parse a file for the purposes of calculating contraction costs alone, without needing or caring about the elements contained within the tensors themselves.

For instance, networkx has a read_edgelist function and kahypar has a way to parse files in the hMetis format to build hypergraphs. One or both of these could be used to construct tensor networks with arbitrary elements where the aim is to optimize contraction costs.

If something like this doesn't exist, I'd be interested in working on it. Please advise.
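
For concreteness, I have in mind something along these lines (a rough sketch only; the edge-list format, file name, and helper are mine, and the bond dimension is simply fixed to 2):

import cotengra as ctg

def edgelist_to_einsum(path, dim=2):
    # each line of the file is "u v"; create one index label per edge
    terms = {}
    size_dict = {}
    with open(path) as f:
        for i, line in enumerate(f):
            u, v = line.split()
            ix = f"e{i}"
            terms.setdefault(u, []).append(ix)
            terms.setdefault(v, []).append(ix)
            size_dict[ix] = dim
    inputs = [tuple(ixs) for ixs in terms.values()]
    return inputs, (), size_dict

inputs, output, size_dict = edgelist_to_einsum("graph.edgelist")
opt = ctg.HyperOptimizer()
tree = opt.search(inputs, output, size_dict)
print(tree.total_flops())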

Possibly a bug in line 200 of hyper.py

For example, if methods="greedy", then after the following line,
self._methods = DEFAULT_METHODS if methods is None else list(methods)
self._methods becomes ['g', 'r', 'e', 'e', 'd', 'y'], which is not what is expected.
I think line 200 should be modified to
self._methods = DEFAULT_METHODS if methods is None else [methods]
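
Or, a little more defensively, something like (just a sketch handling None, a single string, or a sequence of strings):

if methods is None:
    self._methods = list(DEFAULT_METHODS)
elif isinstance(methods, str):
    self._methods = [methods]
else:
    self._methods = list(methods)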

Optuna deprecation warnings: `suggest_loguniform`

Optuna seems to have deprecated a couple of features in v3.0.0, which results in warnings like

.../cotengra/hyper_optuna.py:21: FutureWarning: suggest_loguniform has been deprecated in v3.0.0. This feature will be removed in v6.0.0. See https://github.com/optuna/optuna/releases/tag/v3.0.0. Use :func:`~optuna.trial.Trial.suggest_float` instead.

caused by this:

return lambda trial: trial.suggest_loguniform(
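
Presumably the replacement is along these lines (name, low and high stand in for whatever arguments are actually used there):

# suggest_float with log=True is the non-deprecated equivalent of suggest_loguniform
return lambda trial: trial.suggest_float(name, low, high, log=True)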

Instructions for building the rust lib

I noticed the Cargo.toml and src/lib.rs in the repository.

I am able to build a wheel with it by changing the build-backend from setuptools to maturin and changing the version in Cargo.toml

Is there a reason why I can't find instructions for this, and why you don't publish an optimized build on PyPI or conda-forge?
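
For reference, after switching the build backend to maturin, the build itself was roughly (exact flags may differ):

pip install maturin
maturin build --release   # wheel ends up under target/wheels/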

Performance improvement from 2002.01935

Hi!

Thank you for creating a very important research tool for quantum tensor network simulation. I am very curious about cotengra's performance in your paper 2002.01935. I am using circuit_n53_m20_s0_e0_pABCDCDAB.qsim with slicing, and I am getting a cost of $10^{19}$ with largest intermediate size $2^{30}$. This is significantly better than what is reported in Fig. 10. I haven't tried the swapped Sycamore circuit and I don't know whether that would improve things further. Is it the case that the software has improved significantly since the paper, or am I missing something?

Regards,
Henry

Error computing path cost

What happened?

Cotengra fails to compute the cost of a given path:

math.log2(trial['write']) / 1000

when the "write" value is negative.

Minimal Complete Verifiable Example

pyenv local 3.8.12
python3 -m venv quimb_env
. ./quimb_env/bin/activate 
pip3 install opt-einsum jax jaxlib numba autoray kahypar mypy optax psutil tqdm optuna cytoolz matplotlib

mkdir src
cd src
git clone https://github.com/jcmgray/quimb.git
cd quimb/
git checkout 5b30302
cd ..
git clone https://github.com/jcmgray/cotengra.git
cd cotengra/
git checkout c3d7d34
cd ../..

export PYTHONPATH="./src/quimb:./src/cotengra"

unzip bug_report_code.zip
python3 bug_report.py

The example script bug_report.py and input dataset are in the attached bug_report_code.zip.

Relevant log output

Please note: the issue is intermittent, so you may need to run the script several times before the error occurs.

> python3 bug_report.py 

log2[SIZE]: 7.00 log10[FLOPs]: 6.17:  80%|████████████████████████████████████████████████████████████████████████████████████████████████████▊                         | 8/10 [00:02<00:00,  3.21it/s]
concurrent.futures.process._RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/home_dir/.pyenv/versions/3.8.12/lib/python3.8/concurrent/futures/process.py", line 239, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/test_area/src/cotengra/cotengra/hyper.py", line 269, in __call__
    trial['score'] = self.score_fn(trial)**self.score_compression
  File "/test_area/src/cotengra/cotengra/core.py", line 2635, in score_flops
    math.log2(trial['write']) / 1000 +
ValueError: math domain error
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "bug_report.py", line 42, in <module>
    ev_term = circuit.local_expectation(obs_ops, [0, 1], **options)
  File "/test_area/src/quimb/quimb/tensor/circuit.py", line 1734, in local_expectation
    info = rhoG.contract(
  File "/test_area/src/quimb/quimb/tensor/tensor_core.py", line 6458, in contract
    return tensor_contract(*self, **opts)
  File "/test_area/src/quimb/quimb/tensor/tensor_core.py", line 218, in tensor_contract
    pathinfo = get_contractor(eq, *shapes, get='info', **contract_opts)
  File "/test_area/src/quimb/quimb/tensor/contraction.py", line 269, in get_contractor
    path = path_fn(eq, *shapes, optimize=optimize, **kwargs)
  File "/test_area/src/quimb/quimb/tensor/contraction.py", line 103, in _get_contract_path
    path = optimize(inputs, output, size_dict, **kwargs)
  File "/test_area/src/cotengra/cotengra/hyper.py", line 648, in __call__
    self._search(inputs, output, size_dict,)
  File "/test_area/src/cotengra/cotengra/hyper.py", line 603, in _search
    for trial in trials:
  File "/test_area/quimb_env/lib/python3.8/site-packages/tqdm/std.py", line 1195, in __iter__
    for obj in iterable:
  File "/test_area/src/cotengra/cotengra/hyper.py", line 551, in _gen_results_parallel
    yield self._get_and_report_next_future()
  File "/test_area/src/cotengra/cotengra/hyper.py", line 529, in _get_and_report_next_future
    trial = future.result()
  File "/home_dir/.pyenv/versions/3.8.12/lib/python3.8/concurrent/futures/_base.py", line 437, in result
    return self.__get_result()
  File "/home_dir/.pyenv/versions/3.8.12/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
    raise self._exception
ValueError: math domain error

Other information
The error may be caused by changes to the tree performed in the call tree.subtree_reconfigure_(**self.opts) at line 191 in hyper.py. Prior to this call the "write" size is positive, but it is negative after this call.

Environment
Python 3.8
Fedora 35
Quimb and Cotengra commits as noted above (in both cases they are the latest commits at the time of logging this issue)

Symbol not found: __PyThreadState_Current

In an attempt to run the Sycamore depth 12 example, I've run into the following exception:

Traceback (most recent call last):
  File "test_cotengra.py", line 40, in <module>
    info = tn.contract(all, optimize=opt, get='path-info', output_inds=[])
  File "/Users/david/Develop/cotengra/tensor/lib/python3.7/site-packages/quimb/tensor/tensor_core.py", line 3141, in contract
    return tensor_contract(*self, **opts)
  File "/Users/david/Develop/cotengra/tensor/lib/python3.7/site-packages/quimb/tensor/tensor_core.py", line 315, in tensor_contract
    path_info = get_contraction(eq, *ops, path=True, **contract_opts)
  File "/Users/david/Develop/cotengra/tensor/lib/python3.7/site-packages/quimb/tensor/tensor_core.py", line 110, in get_contraction
    return fn(eq, *shapes, **kwargs)
  File "/Users/david/Develop/cotengra/tensor/lib/python3.7/site-packages/quimb/tensor/tensor_core.py", line 78, in _get_contract_path
    return oe.contract_path(eq, *shapes, shapes=True, **kwargs)[1]
  File "/Users/david/Develop/cotengra/tensor/lib/python3.7/site-packages/opt_einsum/contract.py", line 259, in contract_path
    path = path_type(input_sets, output_set, dimension_dict, memory_arg)
  File "/Users/david/Develop/cotengra/tensor/lib/python3.7/site-packages/cotengra/hyper.py", line 347, in __call__
    for trial in trials:
  File "/Users/david/Develop/cotengra/tensor/lib/python3.7/site-packages/tqdm/std.py", line 1129, in __iter__
    for obj in iterable:
  File "/Users/david/Develop/cotengra/tensor/lib/python3.7/site-packages/cotengra/hyper.py", line 312, in _gen_results_parallel
    yield self.get_and_report_next_future()
  File "/Users/david/Develop/cotengra/tensor/lib/python3.7/site-packages/cotengra/hyper.py", line 296, in get_and_report_next_future
    trial = future.result()
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/concurrent/futures/_base.py", line 425, in result
    return self.__get_result()
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
    raise self._exception
ImportError: dlopen(/Users/david/Develop/cotengra/tensor/lib/python3.7/site-packages/kahypar.cpython-37m-darwin.so, 2): Symbol not found: __PyThreadState_Current
  Referenced from: /Users/david/Develop/cotengra/tensor/lib/python3.7/site-packages/kahypar.cpython-37m-darwin.so
  Expected in: flat namespace
 in /Users/david/Develop/cotengra/tensor/lib/python3.7/site-packages/kahypar.cpython-37m-darwin.so

Helpful info:
OS: macOS Catalina
$PYTHONPATH: /Users/david/Library/Python/3.7/lib/python

I've tried changing my environment variables and editing the CMake of kahypar such that
CC=clang, CXX=clang++ and, as a sanity check, to CC=gcc, CXX=g++ -- taking care to uninstall and reinstall the kahypar python bindings with pip each time.

Any insight would be much appreciated.

optimize='hyper-spinglass'

In the following code:
eq, shapes = oe.helpers.rand_equation(100, 3, d_min=2, d_max=2, seed=1)
path, info = oe.contract_path(eq, *shapes, shapes=True, optimize='hyper-spinglass')
I encountered "KeyError: "Path optimizer 'auto-hq' not found, valid options are {'random-greedy', 'quickbb-2', 'flowcutter-60', 'branch-all', 'hyper-greedy', 'hyper-256', 'quickbb-10', 'flowcutter-10', 'hyper-betweenness', 'flowcutter-2', 'eager', 'greedy', 'auto', 'branch-1', 'dynamic-programming', 'quickbb-60', 'hyper', 'hyper-kahypar', 'branch-2', 'hyper-spinglass', 'dp', 'opportunistic', 'optimal'}.""

It seems that somehow, in core.py's ContractionTree.contract(), optimize is still the default value 'auto-hq' rather than 'hyper-spinglass'.

`test_resistance_centrality` fails on aarch64

test_resistance_centrality fails on aarch64 with:

[   91s] =================================== FAILURES ===================================
[   91s] __________________________ test_resistance_centrality __________________________
[   91s] [gw2] linux -- Python 3.9.18 /usr/bin/python3.9
[   91s] 
[   91s]     def test_resistance_centrality():
[   91s]         inputs, output, _, size_dict = ctg.utils.lattice_equation([3, 3])
[   91s]         hg = ctg.HyperGraph(inputs, output, size_dict)
[   91s]         cents = hg.resistance_centrality()
[   91s] >       assert cents[0] == 0.0
[   91s] E       assert 6.178632484870434e-16 == 0.0
[   91s] 
[   91s] tests/test_hypergraph.py:17: AssertionError
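
For what it's worth, a tolerance-based comparison along these lines passes here (whether that is the fix you want upstream is of course your call):

import pytest

# treat tiny floating-point residue as zero rather than requiring exact equality
assert cents[0] == pytest.approx(0.0, abs=1e-12)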

Best practice on running cotengra via the opt_einsum API

In dgasmith/opt_einsum#217 (comment), @jcmgray stated that running cotengra optimization via oe.contract_path(expression, *operands, optimize=opt) (where opt is a cotengra optimizer) is slower than doing it via quimb. To add more detail: the path-finding part of the opt_einsum route alone is much slower than the entire run via quimb, so the reasoning in that comment applies only to the contraction phase.

What is the recommended, performant way to do the path finding via opt_einsum? The main use case is that most circuits are written in Qiskit/Cirq, and cuQuantum's CircuitToEinsum makes it possible to contract any Qiskit/Cirq circuit.
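
Concretely, what I am doing is along these lines (simplified; eq and arrays here stand for the output of CircuitToEinsum):

import opt_einsum as oe
import cotengra as ctg

opt = ctg.HyperOptimizer()
# path finding only; the returned explicit path can then be reused
path, info = oe.contract_path(eq, *arrays, optimize=opt)
result = oe.contract(eq, *arrays, optimize=path)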

Nodes in ContractionTree with same indices sets

Hi there. A great project to play with!
I have a question about the construction of the ContractionTree object.
In building such a tree structure, it seems to me that a node is entirely determined by the frozen set of its edge indices.
I am wondering whether tensors with identical index sets can be handled correctly in this setup, or whether they should at least be contracted automatically first?
Take, for example, the trace of a matrix squared:

t = ctg.core.ContractionTree.from_path([{0,1}, {1,0}], [], {0:2, 1:2}, ssa_path=[(0,1)])

I am afraid such a ContractionTree may be problematic, since it cannot actually differentiate the nodes.

t.children
# {frozenset({frozenset({0, 1})}): (frozenset({frozenset({0, 1})}), frozenset({frozenset({0, 1})}))}
t.total_flops()
# 0

Note how the children and parent nodes end up with the same label, and how the total flops count is zero, which cannot be correct for the trace operation.

I guess this scenario is a corner case, and such nodes should be contracted before constructing the tree? Though it would be great if such automatic contraction and correct flop counting were also handled by the tree construction itself.

BadTrial issue during ContractionTree search in Julia interface

I am using cotengra by calling Python from Julia via PyCall. I have created a dedicated interface to make it work; however, with newer cotengra updates it has started giving me problems related to parallelism when calling opt.search(inputs, output, size_dict).
In particular, I get a segmentation fault which originates from the fact that, when concurrent.futures opens new processes to search for the contraction tree, a series of Julia processes (not Python) is spawned and, interestingly, they are not closed after execution. An easy way around this would of course be to deactivate parallelism entirely in the heuristic phase of the algorithm, but I want to use cotengra for very large contraction tasks, so it would be nice to keep parallelism.
Another way would be to manually kill all spawned Julia processes after the search is done, but I don't know how that could work unless I kill the processes individually, which apparently does not work either. Do you have a workaround in mind? Maybe changing the parallel backend could help?
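
For concreteness, one thing I have been considering is handing the optimizer an explicit thread pool instead of letting it spawn worker processes, along these lines (that `parallel` accepts a pool-like object with a `submit` method is an assumption on my part):

from concurrent.futures import ThreadPoolExecutor
import cotengra as ctg

# assumption: a pool-like object with a `submit` method can be passed as `parallel`
opt = ctg.HyperOptimizer(parallel=ThreadPoolExecutor(max_workers=8))
tree = opt.search(inputs, output, size_dict)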

Following, the full error log:

signal (11): Segmentation fault
in expression starting at <my_code>
unknown function (ip: 0x7f0aae95b06f)
unknown function (ip: 0x7f0aae9419b5)
unknown function (ip: 0x7f0aae91fa9d)
unknown function (ip: 0x7f0aae980d20)
cfunction_call at /usr/local/src/conda/python-3.9.13/Objects/methodobject.c:543
_PyObject_MakeTpCall at /usr/local/src/conda/python-3.9.13/Objects/call.c:191
_PyObject_VectorcallTstate at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:116 [inlined]
_PyObject_VectorcallTstate at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:103 [inlined]
PyObject_Vectorcall at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:127 [inlined]
call_function at /usr/local/src/conda/python-3.9.13/Python/ceval.c:5077 [inlined]
_PyEval_EvalFrameDefault at /usr/local/src/conda/python-3.9.13/Python/ceval.c:3489
_PyEval_EvalFrame at /usr/local/src/conda/python-3.9.13/Include/internal/pycore_ceval.h:40 [inlined]
_PyEval_EvalCode at /usr/local/src/conda/python-3.9.13/Python/ceval.c:4329
_PyFunction_Vectorcall at /usr/local/src/conda/python-3.9.13/Objects/call.c:396
PyVectorcall_Call at /usr/local/src/conda/python-3.9.13/Objects/call.c:243
do_call_core at /usr/local/src/conda/python-3.9.13/Python/ceval.c:5125 [inlined]
_PyEval_EvalFrameDefault at /usr/local/src/conda/python-3.9.13/Python/ceval.c:3582
_PyEval_EvalFrame at /usr/local/src/conda/python-3.9.13/Include/internal/pycore_ceval.h:40 [inlined]
_PyEval_EvalCode at /usr/local/src/conda/python-3.9.13/Python/ceval.c:4329
_PyFunction_Vectorcall at /usr/local/src/conda/python-3.9.13/Objects/call.c:396 [inlined]
_PyObject_VectorcallTstate at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:118 [inlined]
method_vectorcall at /usr/local/src/conda/python-3.9.13/Objects/classobject.c:53
PyVectorcall_Call at /usr/local/src/conda/python-3.9.13/Objects/call.c:243
do_call_core at /usr/local/src/conda/python-3.9.13/Python/ceval.c:5125 [inlined]
_PyEval_EvalFrameDefault at /usr/local/src/conda/python-3.9.13/Python/ceval.c:3582
_PyEval_EvalFrame at /usr/local/src/conda/python-3.9.13/Include/internal/pycore_ceval.h:40 [inlined]
_PyEval_EvalCode at /usr/local/src/conda/python-3.9.13/Python/ceval.c:4329
_PyFunction_Vectorcall at /usr/local/src/conda/python-3.9.13/Objects/call.c:396 [inlined]
_PyObject_VectorcallTstate at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:118 [inlined]
method_vectorcall at /usr/local/src/conda/python-3.9.13/Objects/classobject.c:53
PyVectorcall_Call at /usr/local/src/conda/python-3.9.13/Objects/call.c:243
do_call_core at /usr/local/src/conda/python-3.9.13/Python/ceval.c:5125 [inlined]
_PyEval_EvalFrameDefault at /usr/local/src/conda/python-3.9.13/Python/ceval.c:3582
_PyEval_EvalFrame at /usr/local/src/conda/python-3.9.13/Include/internal/pycore_ceval.h:40 [inlined]
_PyEval_EvalCode at /usr/local/src/conda/python-3.9.13/Python/ceval.c:4329
_PyFunction_Vectorcall at /usr/local/src/conda/python-3.9.13/Objects/call.c:396
PyVectorcall_Call at /usr/local/src/conda/python-3.9.13/Objects/call.c:243
do_call_core at /usr/local/src/conda/python-3.9.13/Python/ceval.c:5125 [inlined]
_PyEval_EvalFrameDefault at /usr/local/src/conda/python-3.9.13/Python/ceval.c:3582
_PyEval_EvalFrame at /usr/local/src/conda/python-3.9.13/Include/internal/pycore_ceval.h:40 [inlined]
_PyEval_EvalCode at /usr/local/src/conda/python-3.9.13/Python/ceval.c:4329
_PyFunction_Vectorcall at /usr/local/src/conda/python-3.9.13/Objects/call.c:396
_PyObject_FastCallDictTstate at /usr/local/src/conda/python-3.9.13/Objects/call.c:129
_PyObject_Call_Prepend at /usr/local/src/conda/python-3.9.13/Objects/call.c:489
slot_tp_call at /usr/local/src/conda/python-3.9.13/Objects/typeobject.c:6731
_PyObject_Call at /usr/local/src/conda/python-3.9.13/Objects/call.c:281
do_call_core at /usr/local/src/conda/python-3.9.13/Python/ceval.c:5125 [inlined]
_PyEval_EvalFrameDefault at /usr/local/src/conda/python-3.9.13/Python/ceval.c:3582
_PyEval_EvalFrame at /usr/local/src/conda/python-3.9.13/Include/internal/pycore_ceval.h:40 [inlined]
_PyEval_EvalCode at /usr/local/src/conda/python-3.9.13/Python/ceval.c:4329
_PyFunction_Vectorcall at /usr/local/src/conda/python-3.9.13/Objects/call.c:396
_PyObject_FastCallDictTstate at /usr/local/src/conda/python-3.9.13/Objects/call.c:129
_PyObject_Call_Prepend at /usr/local/src/conda/python-3.9.13/Objects/call.c:489
slot_tp_call at /usr/local/src/conda/python-3.9.13/Objects/typeobject.c:6731
_PyObject_Call at /usr/local/src/conda/python-3.9.13/Objects/call.c:281
do_call_core at /usr/local/src/conda/python-3.9.13/Python/ceval.c:5125 [inlined]
_PyEval_EvalFrameDefault at /usr/local/src/conda/python-3.9.13/Python/ceval.c:3582
_PyEval_EvalFrame at /usr/local/src/conda/python-3.9.13/Include/internal/pycore_ceval.h:40 [inlined]
function_code_fastcall at /usr/local/src/conda/python-3.9.13/Objects/call.c:330
do_call_core at /usr/local/src/conda/python-3.9.13/Python/ceval.c:5125 [inlined]
_PyEval_EvalFrameDefault at /usr/local/src/conda/python-3.9.13/Python/ceval.c:3582
_PyEval_EvalFrame at /usr/local/src/conda/python-3.9.13/Include/internal/pycore_ceval.h:40 [inlined]
function_code_fastcall at /usr/local/src/conda/python-3.9.13/Objects/call.c:330
_PyObject_VectorcallTstate at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:118 [inlined]
PyObject_Vectorcall at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:127 [inlined]
call_function at /usr/local/src/conda/python-3.9.13/Python/ceval.c:5077 [inlined]
_PyEval_EvalFrameDefault at /usr/local/src/conda/python-3.9.13/Python/ceval.c:3506
_PyEval_EvalFrame at /usr/local/src/conda/python-3.9.13/Include/internal/pycore_ceval.h:40 [inlined]
_PyEval_EvalCode at /usr/local/src/conda/python-3.9.13/Python/ceval.c:4329
_PyFunction_Vectorcall at /usr/local/src/conda/python-3.9.13/Objects/call.c:396 [inlined]
_PyObject_VectorcallTstate at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:118 [inlined]
method_vectorcall at /usr/local/src/conda/python-3.9.13/Objects/classobject.c:53
_PyObject_VectorcallTstate at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:118 [inlined]
PyObject_Vectorcall at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:127 [inlined]
call_function at /usr/local/src/conda/python-3.9.13/Python/ceval.c:5077 [inlined]
_PyEval_EvalFrameDefault at /usr/local/src/conda/python-3.9.13/Python/ceval.c:3537
_PyEval_EvalFrame at /usr/local/src/conda/python-3.9.13/Include/internal/pycore_ceval.h:40 [inlined]
function_code_fastcall at /usr/local/src/conda/python-3.9.13/Objects/call.c:330
_PyObject_VectorcallTstate at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:118 [inlined]
PyObject_Vectorcall at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:127 [inlined]
call_function at /usr/local/src/conda/python-3.9.13/Python/ceval.c:5077 [inlined]
_PyEval_EvalFrameDefault at /usr/local/src/conda/python-3.9.13/Python/ceval.c:3506
_PyEval_EvalFrame at /usr/local/src/conda/python-3.9.13/Include/internal/pycore_ceval.h:40 [inlined]
function_code_fastcall at /usr/local/src/conda/python-3.9.13/Objects/call.c:330
_PyFunction_Vectorcall at /usr/local/src/conda/python-3.9.13/Objects/call.c:367 [inlined]
_PyObject_FastCallDictTstate at /usr/local/src/conda/python-3.9.13/Objects/call.c:118
_PyObject_Call_Prepend at /usr/local/src/conda/python-3.9.13/Objects/call.c:489
slot_tp_init at /usr/local/src/conda/python-3.9.13/Objects/typeobject.c:6971
type_call at /usr/local/src/conda/python-3.9.13/Objects/typeobject.c:1028 [inlined]
_PyObject_MakeTpCall at /usr/local/src/conda/python-3.9.13/Objects/call.c:191
_PyObject_VectorcallTstate at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:116 [inlined]
_PyObject_VectorcallTstate at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:103 [inlined]
PyObject_Vectorcall at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:127 [inlined]
call_function at /usr/local/src/conda/python-3.9.13/Python/ceval.c:5077 [inlined]
_PyEval_EvalFrameDefault at /usr/local/src/conda/python-3.9.13/Python/ceval.c:3520
_PyEval_EvalFrame at /usr/local/src/conda/python-3.9.13/Include/internal/pycore_ceval.h:40 [inlined]
function_code_fastcall at /usr/local/src/conda/python-3.9.13/Objects/call.c:330
_PyObject_VectorcallTstate at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:118 [inlined]
PyObject_Vectorcall at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:127 [inlined]
call_function at /usr/local/src/conda/python-3.9.13/Python/ceval.c:5077 [inlined]
_PyEval_EvalFrameDefault at /usr/local/src/conda/python-3.9.13/Python/ceval.c:3489
_PyEval_EvalFrame at /usr/local/src/conda/python-3.9.13/Include/internal/pycore_ceval.h:40 [inlined]
function_code_fastcall at /usr/local/src/conda/python-3.9.13/Objects/call.c:330
_PyObject_VectorcallTstate at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:118 [inlined]
PyObject_Vectorcall at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:127 [inlined]
call_function at /usr/local/src/conda/python-3.9.13/Python/ceval.c:5077 [inlined]
_PyEval_EvalFrameDefault at /usr/local/src/conda/python-3.9.13/Python/ceval.c:3506
_PyEval_EvalFrame at /usr/local/src/conda/python-3.9.13/Include/internal/pycore_ceval.h:40 [inlined]
function_code_fastcall at /usr/local/src/conda/python-3.9.13/Objects/call.c:330
_PyObject_VectorcallTstate at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:118 [inlined]
PyObject_Vectorcall at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:127 [inlined]
call_function at /usr/local/src/conda/python-3.9.13/Python/ceval.c:5077 [inlined]
_PyEval_EvalFrameDefault at /usr/local/src/conda/python-3.9.13/Python/ceval.c:3506
_PyEval_EvalFrame at /usr/local/src/conda/python-3.9.13/Include/internal/pycore_ceval.h:40 [inlined]
function_code_fastcall at /usr/local/src/conda/python-3.9.13/Objects/call.c:330
_PyObject_VectorcallTstate at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:118 [inlined]
PyObject_Vectorcall at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:127 [inlined]
call_function at /usr/local/src/conda/python-3.9.13/Python/ceval.c:5077 [inlined]
_PyEval_EvalFrameDefault at /usr/local/src/conda/python-3.9.13/Python/ceval.c:3506
_PyEval_EvalFrame at /usr/local/src/conda/python-3.9.13/Include/internal/pycore_ceval.h:40 [inlined]
function_code_fastcall at /usr/local/src/conda/python-3.9.13/Objects/call.c:330
_PyObject_VectorcallTstate at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:118 [inlined]
PyObject_Vectorcall at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:127 [inlined]
call_function at /usr/local/src/conda/python-3.9.13/Python/ceval.c:5077 [inlined]
_PyEval_EvalFrameDefault at /usr/local/src/conda/python-3.9.13/Python/ceval.c:3506
_PyEval_EvalFrame at /usr/local/src/conda/python-3.9.13/Include/internal/pycore_ceval.h:40 [inlined]
_PyEval_EvalCode at /usr/local/src/conda/python-3.9.13/Python/ceval.c:4329
_PyFunction_Vectorcall at /usr/local/src/conda/python-3.9.13/Objects/call.c:396 [inlined]
_PyObject_VectorcallTstate at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:118 [inlined]
method_vectorcall at /usr/local/src/conda/python-3.9.13/Objects/classobject.c:53
PyVectorcall_Call at /usr/local/src/conda/python-3.9.13/Objects/call.c:243
do_call_core at /usr/local/src/conda/python-3.9.13/Python/ceval.c:5125 [inlined]
_PyEval_EvalFrameDefault at /usr/local/src/conda/python-3.9.13/Python/ceval.c:3582
_PyEval_EvalFrame at /usr/local/src/conda/python-3.9.13/Include/internal/pycore_ceval.h:40 [inlined]
_PyEval_EvalCode at /usr/local/src/conda/python-3.9.13/Python/ceval.c:4329
_PyFunction_Vectorcall at /usr/local/src/conda/python-3.9.13/Objects/call.c:396
PyVectorcall_Call at /usr/local/src/conda/python-3.9.13/Objects/call.c:243
do_call_core at /usr/local/src/conda/python-3.9.13/Python/ceval.c:5125 [inlined]
_PyEval_EvalFrameDefault at /usr/local/src/conda/python-3.9.13/Python/ceval.c:3582
_PyEval_EvalFrame at /usr/local/src/conda/python-3.9.13/Include/internal/pycore_ceval.h:40 [inlined]
gen_send_ex at /usr/local/src/conda/python-3.9.13/Objects/genobject.c:215 [inlined]
gen_iternext at /usr/local/src/conda/python-3.9.13/Objects/genobject.c:549
_PyEval_EvalFrameDefault at /usr/local/src/conda/python-3.9.13/Python/ceval.c:3308
_PyEval_EvalFrame at /usr/local/src/conda/python-3.9.13/Include/internal/pycore_ceval.h:40 [inlined]
_PyEval_EvalCode at /usr/local/src/conda/python-3.9.13/Python/ceval.c:4329
_PyFunction_Vectorcall at /usr/local/src/conda/python-3.9.13/Objects/call.c:396
_PyObject_VectorcallTstate at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:118 [inlined]
PyObject_Vectorcall at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:127 [inlined]
call_function at /usr/local/src/conda/python-3.9.13/Python/ceval.c:5077 [inlined]
_PyEval_EvalFrameDefault at /usr/local/src/conda/python-3.9.13/Python/ceval.c:3506
_PyEval_EvalFrame at /usr/local/src/conda/python-3.9.13/Include/internal/pycore_ceval.h:40 [inlined]
function_code_fastcall at /usr/local/src/conda/python-3.9.13/Objects/call.c:330
_PyObject_VectorcallTstate at /usr/local/src/conda/python-3.9.13/Include/cpython/abstract.h:118 [inlined]
method_vectorcall at /usr/local/src/conda/python-3.9.13/Objects/classobject.c:83
macro expansion at /home/ubuntu/.julia/packages/PyCall/ilqDX/src/exception.jl:108 [inlined]
#107 at /home/ubuntu/.julia/packages/PyCall/ilqDX/src/pyfncall.jl:43 [inlined]
disable_sigint at ./c.jl:458 [inlined]
__pycall! at /home/ubuntu/.julia/packages/PyCall/ilqDX/src/pyfncall.jl:42 [inlined]
_pycall! at /home/ubuntu/.julia/packages/PyCall/ilqDX/src/pyfncall.jl:29
_pycall! at /home/ubuntu/.julia/packages/PyCall/ilqDX/src/pyfncall.jl:11
unknown function (ip: 0x7f0b1775c773)
jl_invoke at /buildworker/worker/package_linux64/build/src/gf.c:2247 [inlined]
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2429
#
#114 at /home/ubuntu/.julia/packages/PyCall/ilqDX/src/pyfncall.jl:86
_jl_invoke at /buildworker/worker/package_linux64/build/src/gf.c:2247 [inlined]
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2429
jl_apply at /buildworker/worker/package_linux64/build/src/julia.h:1788 [inlined]
do_apply at /buildworker/worker/package_linux64/build/src/builtins.c:713
PyObject at /home/ubuntu/.julia/packages/PyCall/ilqDX/src/pyfncall.jl:86
_jl_invoke at /buildworker/worker/package_linux64/build/src/gf.c:2247 [inlined]
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2429
ContractionTree at <my_code>
unknown function (ip: 0x7f0b1775a9e4)
_jl_invoke at /buildworker/worker/package_linux64/build/src/gf.c:2247 [inlined]
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2429
ctg_heuristics at <my_code>
ctg_contractor at <my_code>
##core#276 at /home/ubuntu/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:489
##sample#277 at /home/ubuntu/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:495
unknown function (ip: 0x7f0b177419e1)
_jl_invoke at /buildworker/worker/package_linux64/build/src/gf.c:2247 [inlined]
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2429
#_run#48 at /home/ubuntu/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:99
_run##kw at /home/ubuntu/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:93
unknown function (ip: 0x7f0b177300e8)
_jl_invoke at /buildworker/worker/package_linux64/build/src/gf.c:2247 [inlined]
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2429
jl_apply at /buildworker/worker/package_linux64/build/src/julia.h:1788 [inlined]
jl_f__call_latest at /buildworker/worker/package_linux64/build/src/builtins.c:757
#invokelatest#2 at ./essentials.jl:718 [inlined]
invokelatest##kw at ./essentials.jl:714 [inlined]
#run_result#45 at /home/ubuntu/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:34 [inlined]
run_result##kw at /home/ubuntu/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:34 [inlined]
#run#49 at /home/ubuntu/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:117
run##kw at /home/ubuntu/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:117 [inlined]
run##kw at /home/ubuntu/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:117 [inlined]
#warmup#54 at /home/ubuntu/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:169 [inlined]
warmup##kw at /home/ubuntu/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:169 [inlined]
#tune!#58 at /home/ubuntu/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:250
tune! at /home/ubuntu/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:250 [inlined]
tune! at /home/ubuntu/.julia/packages/BenchmarkTools/0owsb/src/execution.jl:250
_jl_invoke at /buildworker/worker/package_linux64/build/src/gf.c:2247 [inlined]
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2429
jl_apply at /buildworker/worker/package_linux64/build/src/julia.h:1788 [inlined]
do_call at /buildworker/worker/package_linux64/build/src/interpreter.c:126
eval_value at /buildworker/worker/package_linux64/build/src/interpreter.c:215
eval_stmt_value at /buildworker/worker/package_linux64/build/src/interpreter.c:166 [inlined]
eval_body at /buildworker/worker/package_linux64/build/src/interpreter.c:587
jl_interpret_toplevel_thunk at /buildworker/worker/package_linux64/build/src/interpreter.c:731
jl_toplevel_eval_flex at /buildworker/worker/package_linux64/build/src/toplevel.c:885
jl_toplevel_eval_flex at /buildworker/worker/package_linux64/build/src/toplevel.c:830
jl_toplevel_eval_in at /buildworker/worker/package_linux64/build/src/toplevel.c:944
eval at ./boot.jl:373 [inlined]
include_string at ./loading.jl:1196
_jl_invoke at /buildworker/worker/package_linux64/build/src/gf.c:2247 [inlined]
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2429
_include at ./loading.jl:1253
include at ./Base.jl:418
_jl_invoke at /buildworker/worker/package_linux64/build/src/gf.c:2247 [inlined]
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2429
exec_options at ./client.jl:292
_start at ./client.jl:495
jfptr__start_40531.clone_1 at /home/ubuntu/Andrea/julia-1.7.1/lib/julia/sys.so (unknown line)
_jl_invoke at /buildworker/worker/package_linux64/build/src/gf.c:2247 [inlined]
jl_apply_generic at /buildworker/worker/package_linux64/build/src/gf.c:2429
jl_apply at /buildworker/worker/package_linux64/build/src/julia.h:1788 [inlined]
true_main at /buildworker/worker/package_linux64/build/src/jlapi.c:559
jl_repl_entrypoint at /buildworker/worker/package_linux64/build/src/jlapi.c:701
main at julia (unknown line)
__libc_start_main at /lib/x86_64-linux-gnu/libc.so.6 (unknown line)
unknown function (ip: 0x400808)
Allocations: 27838854 (Pool: 27829714; Big: 9140); GC: 31
ERROR: LoadError: PyError ($(Expr(:escape, :(ccall(#= /home/ubuntu/.julia/packages/PyCall/ilqDX/src/pyfncall.jl:43 =# @pysym(:PyObject_Call), PyPtr, (PyPtr, PyPtr, PyPtr), o, pyargsptr, kw))))) <class 'concurrent.futures.process.BrokenProcessPool'>
BrokenProcessPool('A child process terminated abruptly, the process pool is not usable anymore')
File "/home/ubuntu/anaconda3/lib/python3.9/site-packages/cotengra/hyperoptimizers/hyper.py", line 652, in search
self._search(
File "/home/ubuntu/anaconda3/lib/python3.9/site-packages/cotengra/hyperoptimizers/hyper.py", line 623, in _search
for trial in trials:
File "/home/ubuntu/anaconda3/lib/python3.9/site-packages/cotengra/hyperoptimizers/hyper.py", line 552, in _gen_results_parallel
future = submit(
File "/home/ubuntu/anaconda3/lib/python3.9/site-packages/cotengra/parallel.py", line 156, in submit
return pool.submit(fn, *args, **kwargs)
File "/home/ubuntu/anaconda3/lib/python3.9/concurrent/futures/process.py", line 707, in submit
raise BrokenProcessPool(self._broken)

And also an example of the julia spawned processes from top:

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
#main task
153395 - 20 0 3871684 810540 142864 R 47.7 0.2 11:14.46 julia
#other processes
154141 - 20 0 0 0 0 Z 0.0 0.0 0:04.53 julia
154142 - 20 0 0 0 0 Z 0.0 0.0 0:04.46 julia
154143 - 20 0 0 0 0 Z 0.0 0.0 0:04.50 julia
154144 - 20 0 0 0 0 Z 0.0 0.0 0:04.49 julia
154145 - 20 0 0 0 0 Z 0.0 0.0 0:04.48 julia
154146 - 20 0 0 0 0 Z 0.0 0.0 0:04.49 julia
154147 - 20 0 0 0 0 Z 0.0 0.0 0:04.49 julia
154149 - 20 0 0 0 0 Z 0.0 0.0 0:04.48 julia
154150 - 20 0 0 0 0 Z 0.0 0.0 0:04.48 julia
154151 - 20 0 0 0 0 Z 0.0 0.0 0:04.46 julia
154152 - 20 0 0 0 0 Z 0.0 0.0 0:04.48 julia
154153 - 20 0 0 0 0 Z 0.0 0.0 0:04.46 julia
154154 - 20 0 0 0 0 Z 0.0 0.0 0:04.46 julia
154155 - 20 0 0 0 0 Z 0.0 0.0 0:04.09 julia
154175 - 20 0 0 0 0 Z 0.0 0.0 0:04.36 julia
154176 - 20 0 0 0 0 Z 0.0 0.0 0:04.48 julia
154177 - 20 0 0 0 0 Z 0.0 0.0 0:04.47 julia
154178 - 20 0 0 0 0 Z 0.0 0.0 0:04.37 julia

Create a tag or a release

Hi!

I am Moise Rousseau, an HPC analyst at Calcul Quebec. I would like to deploy a pre-compiled wheel of cotengra, optimized for our systems, for some of our users. However, our policy requires an official release on PyPI / GitHub or, at least, a tag on GitHub.

Can you create such a tag or release? You can do so via the "Create a new release" link on the right of the repository page.

Regards,
Moise

Hypergraph plot fails after contraction

Hi! :)

While playing around with the HyperGraph class, I found an issue, probably with the HyperGraph.contract() method. In particular, after contracting two nodes, HyperGraph.plot() fails.

Here is an example:

import cotengra as ctg

inputs = [
    ('a', 'b', 'x'),
    ('b', 'c', 'd'),
    ('c', 'e', 'y'),
    ('e', 'a', 'd'),
]
output = ('x', 'y')
size_dict = {'x': 2, 'y': 3, 'a': 4, 'b': 5, 'c': 6, 'd': 7, 'e': 8}

hg = ctg.HyperGraph(inputs, output, size_dict)
hg.contract(0, 1) # comment this line to make the code work
hg.plot()

I would expect this code to plot the updated hypergraph, with nodes 0 and 1 contracted into a new node 4. However, the code fails with:

Traceback (most recent call last):
  File "/home/nate/test/test.py", line 14, in <module>
    hg.plot()
  File "/home/nate/qvm/.venv/lib/python3.11/site-packages/cotengra/plot.py", line 15, in wrapped
    fig, ax = fn(*args, **kwargs)
              ^^^^^^^^^^^^^^^^^^^
  File "/home/nate/qvm/.venv/lib/python3.11/site-packages/cotengra/plot.py", line 55, in new_fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/nate/qvm/.venv/lib/python3.11/site-packages/cotengra/plot.py", line 1290, in plot_hypergraph
    hypergraph_compute_plot_info_G(
  File "/home/nate/qvm/.venv/lib/python3.11/site-packages/cotengra/plot.py", line 455, in hypergraph_compute_plot_info_G
    color = _node_colorer(nd)
            ^^^^^^^^^^^^^^^^^
  File "/home/nate/qvm/.venv/lib/python3.11/site-packages/cotengra/plot.py", line 432, in _node_colorer
    return node_colors[nd]
           ~~~~~~~~~~~^^^^
IndexError: list index out of range

Is this expected behavior? If not, there is probably some missing "bookkeeping" in the contract() method to ensure indices are correct.

Thank you!
Nate

(I'm on cotengra version 0.6.2)

On your work arXiv:2206.07044

Hyper-optimized compressed contraction of tensor networks with arbitrary geometry: great work and it is very impressive and helpful!

I am just curious whether cotengra now supports some of the features described in this work, since I see some mentions of 'compressed' in the commit history and file names. Also, do you have any plans to open-source the implementation of this paper (either integrated with cotengra or as an independent package) in the future?

A technical side question as well: how does this general approximate contraction scheme compare with the MPS-TEBD scheme (e.g. https://journals.aps.org/prx/abstract/10.1103/PhysRevX.10.041038) in the context of approximate quantum circuit simulation (say, still the random-circuit supremacy case), in terms of efficiency and approximation error?

Sorry if this post seems out of place among the issues here; feel free to close it, and we can communicate via email if you prefer.

Mac Installation: Cotengra and Kahypar

Hi!

We could not install kahypar on a Mac. We opened an issue in the kahypar repo:
kahypar/kahypar#103

We are wondering whether the problem might be related to cotengra or not.
Would you please take a look at the issue and let us know if you can help us figure it out?
Thanks
