aromanro / qcsim
30 stars · 3 watchers · 5 forks · 792 KB

Quantum computing simulator

License: GNU General Public License v3.0

C++ 99.16% CMake 0.84%
computational-physics physics physics-simulation quantum quantum-computing quantum-algorithms quantum-computing-algorithms quantum-information simulation simulator


qcsim's People

Contributors: aromanro

qcsim's Issues

Use CUDA to improve speed for registers with many qubits

The project is built with OpenMP turned on, and the code benefited from it back when it used the big matrices obtained by tensor products. Those big matrices are now avoided: the tensor-product matrix and its product with the state vector are optimized by taking advantage of the structure of the matrix.
The code for applying a quantum gate now does a single loop over the basis states. For a big register this could be parallelized with OpenMP. I tried a quick hack, but unfortunately for small registers there is a noticeable slowdown due to the OpenMP overhead, so for now the OpenMP pragmas stay commented out. I might try to avoid the overhead in the future.

Nevertheless, for big registers CUDA could be used, and that could improve the speed quite a bit. I would favor something that also works on other hardware, to avoid being tied to NVIDIA, such as OpenCL or even compute shaders, but CUDA is more performant, and when performance is needed...

Add implementation with sparse matrices

Since the matrices grow exponentially with the number of qubits, an implementation using sparse matrices would be beneficial.
Unfortunately, for obvious reasons, one cannot access matrix elements as easily as with dense matrix types. The Eigen sparse-matrix implementation is quite different, so for now I am postponing this.

Use AVX2 to speed up?

It might be worth investigating.
The code already benefits from AVX2 because Eigen has AVX2 optimizations (on my computer the execution time for the tests/examples drops from over 50 seconds to approximately 35 seconds with AVX2 turned on), but maybe some parts of QubitRegisterCalculator could also be improved.

Qiskit Aer has AVX2 optimizations, so it should be possible.

Compilation errors

I tried running the example code found in the documentation:

#include <iostream>

#include "QCSim/QubitRegister.h"

int main()
{
    QC::QubitRegister reg(3);
    QC::Gates::PauliXGate x;
    reg.ApplyGate(x, 0);
    unsigned int m = reg.Measure();

    std::cout << "Result should be 1, it's: " << m << std::endl;

    return 0;
}

And I keep getting a bunch of similar errors:

In file included from ./QCSim/QubitRegister.h:3:
In file included from ./QCSim/QubitRegisterCalculator.h:23:
./QCSim/QuantumGate.h:177:20: error: missing 'typename' prior to dependent type name 'BaseClass::BaseClass'
                        using OpClass = BaseClass::BaseClass;
                                        ^~~~~~~~~~~~~~~~~~~~
                                        typename 
./QCSim/QuantumGate.h:198:20: error: missing 'typename' prior to dependent type name 'BaseClass::BaseClass'
                        using OpClass = BaseClass::BaseClass;
                                        ^~~~~~~~~~~~~~~~~~~~
                                        typename 
./QCSim/QuantumGate.h:217:20: error: missing 'typename' prior to dependent type name 'BaseClass::BaseClass'
                        using OpClass = BaseClass::BaseClass;
                                        ^~~~~~~~~~~~~~~~~~~~
                                        typename 

Could you please provide an example on how to compile the example code and what the requirements are regarding the C++ standard and compilation options?

Improve the simulation for 'subalgorithms'

There are situations in which a 'subalgorithm' is executed repeatedly (an example: the Suzuki-Trotter expansion in quantum simulations).
In such cases, obtaining the operators for the gates and multiplying them with the register every time is very slow.

Instead, the whole operator matrix for a step could be obtained once and then applied multiple times to the register.

Optimization for one/two/three qubit(s) gates

Instead of generating big matrices (tensor products) out of the gate matrices to multiply the register contents, there are some simple optimizations that can be done. For example, for one-qubit gates the register can be updated in O(N) time, with each amplitude computed as a sum of only two products.

Add 'uncomputation'

This can be useful for various quantum algorithms.
Just add a ComputeStart/ComputeEnd pair (recording all the gates applied in between, including those supplied as matrices) and an Uncompute method to the quantum register (with the calls forwarded to the register from the quantum algorithm class).
https://en.wikipedia.org/wiki/Uncomputation

Add VQE

Variational Quantum Eigensolver.

Add a 'subalgorithm' that decomposes a controlled-controlled three-qubit gate into two-qubit gates

There are cases where one wants to avoid using three-qubit gates, for example in distributed quantum computing situations, or even for testing for bugs in the three-qubit implementations (I actually had such a bug recently, in the optimization code).
It can be done using the Sleator-Weinfurter construction (info here: https://quantumcomputing.stackexchange.com/questions/3943/how-do-you-implement-the-toffoli-gate-using-only-single-qubit-and-cnot-gates), using five two-qubit gates: two CNOTs, two controlled-V and one controlled-V^dag, where V^2 = U, U being from the original controlled-controlled-U gate.

Execute improvement - avoid repeating the execution of the same circuit for many measurements

Many algorithms are probabilistic and the measurement needs to be repeated many times, which requires reinitializing the register and executing the algorithm again... but in simulation there is a shortcut (unlike the real case, where a quantum state cannot be copied: the no-cloning theorem): the register state can be saved, and after each measurement (with the collapse, of course) the register can be initialized back to the saved value.

This would allow avoiding repeating the algorithm many times; only the measurements would be repeated.

In fact, the register already has a RepeatedMeasure implementation; this enhancement would just need a RepeatedExecute, similar to the Execute function but with a RepeatedMeasure at the end instead of the single Measure.

Not a big deal to implement, but I'll postpone it for a while; there are more interesting things to do :)

Add stochastic gradient descent

It looks like the project might end up with some things that would benefit from stochastic gradient descent methods instead of regular gradient descent.

I already have some implemented in the machine learning project https://github.com/aromanro/MachineLearning and in the DFT notebook in https://github.com/aromanro/PythonCompphys, although the former is machine-learning-specific and the latter is Python, so here I'll have to implement them again. Not a big deal, though; I might do that sometime.
