pennylaneai / pennylane

Home Page: https://pennylane.ai

License: Apache License 2.0

Languages: Python 99.90%, Makefile 0.02%, Shell 0.03%, Dockerfile 0.04%
Topics: quantum, machine-learning, deep-learning, neural-network, optimization, quantum-computing, quantum-machine-learning, automatic-differentiation, tensorflow, pytorch

pennylane's Introduction

PennyLane is a cross-platform Python library for differentiable programming of quantum computers.

Train a quantum computer the same way as a neural network.

Key Features

  • Machine learning on quantum hardware. Connect to quantum hardware using PyTorch, TensorFlow, JAX, Keras, or NumPy. Build rich and flexible hybrid quantum-classical models.

  • Just-in-time compilation. Experimental support for just-in-time compilation. Compile your entire hybrid workflow, with support for advanced features such as adaptive circuits, real-time measurement feedback, and unbounded loops. See Catalyst for more details.

  • Device-independent. Run the same quantum circuit on different quantum backends. Install plugins to access even more devices, including Strawberry Fields, Amazon Braket, IBM Q, Google Cirq, Rigetti Forest, Qulacs, Pasqal, Honeywell, and more.

  • Follow the gradient. Hardware-friendly automatic differentiation of quantum circuits.

  • Batteries included. Built-in tools for quantum machine learning, optimization, and quantum chemistry. Rapidly prototype using built-in quantum simulators with backpropagation support.

Installation

PennyLane requires Python version 3.9 or above. PennyLane, along with all of its dependencies, can be installed using pip:

python -m pip install pennylane

Docker support

Docker images can be built for both CPU and GPU (NVIDIA CUDA 11.1+). See the documentation for a more detailed description.

Getting started

For an introduction to quantum machine learning, guides and resources are available on PennyLane's quantum machine learning hub.

You can also check out our documentation for quickstart guides to using PennyLane, and detailed developer guides on how to write your own PennyLane-compatible quantum device.
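As a quick illustration (not part of the original README text; the device and gates below are just examples, shown with today's API), a minimal PennyLane workflow looks roughly like this:

import pennylane as qml
from pennylane import numpy as np

# a built-in simulator device with a single wire
dev = qml.device("default.qubit", wires=1)

# a quantum node: a quantum function bound to the device
@qml.qnode(dev)
def circuit(theta):
    qml.RX(theta, wires=0)
    return qml.expval(qml.PauliZ(0))

# the circuit is differentiable like any other Python function
theta = np.array(0.5, requires_grad=True)
print(circuit(theta), qml.grad(circuit)(theta))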

Tutorials and demonstrations

Take a deeper dive into quantum machine learning by exploring cutting-edge algorithms on our demonstrations page.

All demonstrations are fully executable, and can be downloaded as Jupyter notebooks and Python scripts.

If you would like to contribute your own demo, see our demo submission guide.

Videos

Seeing is believing! Check out our videos to learn about PennyLane, quantum computing concepts, and more.

Contributing to PennyLane

We welcome contributions—simply fork the PennyLane repository, and then make a pull request containing your contribution. All contributors to PennyLane will be listed as authors on the releases. All users who contribute significantly to the code (new plugins, new functionality, etc.) will be listed on the PennyLane arXiv paper.

We also encourage bug reports, suggestions for new features and enhancements, and even links to cool projects or applications built on PennyLane.

See our contributions page and our developer hub for more details.

Support

If you are having issues, please let us know by posting the issue on our GitHub issue tracker.

We also have a PennyLane discussion forum—come join the community and chat with the PennyLane team.

Note that we are committed to providing a friendly, safe, and welcoming environment for all. Please read and respect the Code of Conduct.

Authors

PennyLane is the work of many contributors.

If you are doing research using PennyLane, please cite our paper:

Ville Bergholm et al. PennyLane: Automatic differentiation of hybrid quantum-classical computations. 2018. arXiv:1811.04968

License

PennyLane is free and open source, released under the Apache License, Version 2.0.

pennylane's People

Contributors

agran2018, albertmitjans, albi3ro, ankit27kh, antalszava, anthayes92, astralcai, cgogolin, co9olguy, dime10, dwierichs, eddddddy, github-actions[bot], glassnotes, jaybsoni, johannesjmeyer, josh146, ketpuntog, lillian542, mariaschuld, mudit2812, obliviateandsurrender, qottmann, quantshah, rmoyard, smite, soranjh, timmysilv, trbromley, vincentmr


pennylane's Issues

Importing numpy

Should numpy be consistently imported from openqml in all files and plugins? I understand that this version of numpy comes from autograd to enable differentiation. Would it be a problem if a plugin imported a plain vanilla local numpy version via import numpy as np and that version differs from the one provided by openqml? Probably yes? If so, this should be documented.
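(For reference, the convention PennyLane eventually settled on is to import the autograd-wrapped NumPy from the library itself; treat the snippet below as a present-day aside rather than part of the original discussion.)

from pennylane import numpy as np   # wrapped NumPy: arrays are differentiable/trainable
import numpy as onp                 # plain NumPy is fine for constants and bookkeeping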

Gradients vs Jacobians of higher-dimensional functions

I just pushed a commit (af86912) which updates the vector-Jacobian product function so that it properly carries out this product in the case where we have higher-dimensional functions. Please correct me if I'm incorrect, but because the previous code was using the __mul__ operator instead of __matmul__, it was broadcasting the gradient across the jacobian rather than taking the product. This led to all sorts of headaches with autograd, which should hopefully now be fixed.
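A tiny NumPy sketch of the distinction (shapes are illustrative only): with *, the output gradient is broadcast elementwise against the Jacobian, whereas @ performs the intended vector-Jacobian contraction.

import numpy as np

dy = np.array([1.0, 2.0])            # gradient of the cost w.r.t. the 2 outputs
jac = np.array([[1.0, 0.0, 2.0],
                [0.0, 3.0, 1.0]])    # Jacobian of the outputs w.r.t. 3 parameters

vjp = dy @ jac                       # vector-Jacobian product, shape (3,): [1. 6. 4.]
wrong = dy[:, None] * jac            # elementwise broadcast, shape (2, 3): not a contraction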

While trying to figure out this bug, I realized that autograd is not currently working correctly with QNode.gradient. The reason for this is that QNode.gradient returns either a gradient (scalar or vector) or a Jacobian (matrix), depending on the context. This seems to throw off autograd.jacobian (I believe; the autograd source code is hard to parse), which makes use of QNode.gradient internally.

We will have to do some more work to make sure both these derivative operators are working as intended.

CV gradients - Projector decomposition of generators

@cgogolin and @smite, great that you implemented Christian's idea with the projector decomposition. Is there any gate for which we can find a useful decomposition, so this becomes relevant for the code? In fact, either a decomposition of the generator into implementable projectors, or a decomposition of the generator into implementable unitaries (from there we could use some tricks I think). I tried that extensively but could never make it work...

Demos and examples

More demos should be added (in the examples directory). They should be self-contained Python scripts that use OpenQML to do something interesting and educational. The demos should have lots of comments in them, ideally explaining both what we are doing, and why we are doing it.

Ideas:

  • Training a circuit to produce/approximate a state with given properties
  • Training a quantum classifier circuit

Bugs for CV gradients

I mentioned this to @smite already, but putting here to remember.

Currently some checks are performed to determine which differentiation method to use. One of these checks is to look at the number of parameters a gate has. If it does not have exactly 1 parameter, the differentiation method is set to be finite differences.
We would like to have the two-parameter CV gates compatible with automatic differentiation, so this check needs to be fixed.

For all two-parameter CV gates that we use, the second parameter is always a phase angle, so these gates can always be decomposed in terms of a one-parameter gate + a rotation gate. This makes the automatic differentiation formula workable (though we need to be careful with the order of gates)
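For concreteness, here is a sketch of the kind of rewrite meant above, using a two-parameter squeezing gate as an example; the exact sign and phase conventions depend on the gate definitions, so treat this as illustrative rather than as the actual decomposition used in the code.

def squeezing_via_rotations(r, phi, wire):
    # rewrite the two-parameter gate as a one-parameter squeezer conjugated
    # by phase rotations, so only r needs the analytic differentiation rule
    qm.Rotation(-phi / 2, wires=wire)
    qm.Squeezing(r, 0.0, wires=wire)
    qm.Rotation(phi / 2, wires=wire)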

Arguments of Device.__init__()

Since recently, Device.__init__() requires wires and shots as positional arguments. I would argue that keyword arguments are better suited for such things, as they will grant us greater flexibility with changes in Device without breaking our own and third-party plugins (the recent changes broke the ProjectQ plugin, because it was passing shots as a kwarg).

Related to that: Why does Device need wires at all? Most concrete devices will probably need it, but I don't think the base class needs it, does it?
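A rough sketch of the keyword-argument style being argued for (the names and defaults here are illustrative, not the actual base class):

class Device:
    def __init__(self, wires=1, shots=0, **kwargs):
        # keyword arguments with defaults: parameters can later be added or
        # reordered without breaking plugins that pass everything by name
        self.num_wires = wires
        self.shots = shots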

Verifying the CV circuit differentiation formulas

I'd like some help with checking that the derivation of my CV differentiation formulas is correct. They produce gradients that match with the finite diff gradients, but it's better to be sure.

The formulas can be found in Appendix A5: "CV gates" in formalism/openqml_formalism.tex in the master branch. They are an extension of Table II.

I think the BS matrix in Table II has to be wrong, since some of the matrix elements are complex.

Note that for two-mode gates (BS) I'm using the operator order (I, x1, p1, x2, p2) in the matrices.

Not everyone needs to do this, so if you volunteer just assign yourself.

CV gradients - Heisenberg derivatives

Hey everyone. Maybe we can move some of the discussions out of the paper and to here for now? At least as far as writing equations allows...

Christian, your comment on passing the derivative past a non-Gaussian circuit in the Heisenberg and "circuit" picture is very useful. I have not even thought about the case of U being non-Gaussian and something other than homodyne detection; I was more worried about the case of \hat{O} = \hat{x} and V being non-Gaussian. Would be interested to know if the same thing appears.

This is a really cool lesson in thorough Heisenberg calculus!

Make cost() not take "batched" as an argument

I'm not sure whose level of influence this is (Ville? Josh?), but on the new_user_interface branch cost() requires the second argument "batched". Can we remove that easily?

This is related to the optimizer question, so we could also rewrite it in one go.

OpenQML workflow

Hi all, because of the nature of this project and our distributed team, it's become clear that we need to establish some more firm guidelines and best practices to make sure things move forward smoothly.

The number one priority is to keep everything moving in sync as much as possible. This means that everyone should, as far as possible, work on orthogonal tasks. Discussion/suggestions are ok, but please don't spend your time working on something that others have been assigned. In particular, do not write and commit new code that falls under someone else's assignments. This takes your time away from your assigned tasks, and can delay others from their tasks.

For clarity, these are people's current assigned tasks:

  • Ville: implement autograd for CV gaussian circuits (if this is done, check in about what to do next)
  • Josh: design new API and refactor code to implement it
  • Maria: code up examples (e.g., VQE) and verify that they work; write documentation for these
  • Christian: Project Q plugin. After that, start creating fuller documentation and tests for the plugins, and work toward migrating these outside of the main library.
  • Nathan: design new API w/ Josh, work on introductory sections of docs, coordinate work

Please do not make new branches! I got back from vacation yesterday and there are now 13 branches. I don't think that anyone can argue that this is a good or efficient strategy. We don't have time for code review/merging of so many branches. Please make branches only for your assigned tasks. Please remove any branches that have already been merged and are no longer needed.

If you do want to discuss/suggest things, use slack (for quick questions) or the github issues page (for more detailed discussions). Also consider using the github projects page for things that require coordination between multiple people.

Some other tasks are waiting for the final API changes. This was @josh146 's task to implement last week. Everyone is free to comment (on a github issue) about any technical issues they can find. By creating several new branches with suggestions, everything that was waiting on the finalized API is now held up.

GradientDescentOptimizer import

The tutorials currently seem to have to import GradientDescentOptimizer as follows:

from openqml._optimize import GradientDescentOptimizer

This is rather ugly, no? Shouldn't it be possible to import from a "public" module without underscore, such as

from openqml.optimize import GradientDescentOptimizer
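The usual fix is a thin public module that simply re-exports the private implementation, along these lines (sketch):

# openqml/optimize.py -- public facade for the private _optimize module
from openqml._optimize import GradientDescentOptimizer

__all__ = ["GradientDescentOptimizer"]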

Fix Adam optimizer

This is to remind myself that the Adam optimizer computes funny gradients, which seems to be a conceptual bug (code passes test).

Allowing multidimensional weight arrays?

I am busy with larger supervised learning examples, and wanted to double check if we should allow for multidimensional weight arrays.

We could flatten and reshape the weights in the numerical optimizers so that the user does not have to worry about it:

def step(self, objective_fn, x, grad_fn=None):
    """Update x with one step of the optimizer."""

    # remember the user's shape, then work on a flat copy
    x_shape = x.shape
    x_flat = x.flatten()

    # the gradient is computed on the original (possibly multidimensional) x;
    # autograd appears to return it as a flat 1-d vector (see below)
    g = self.compute_grad(objective_fn, x, grad_fn=grad_fn)

    # apply the update in the flattened representation, then restore the shape
    x_out_flat = self.apply_grad(g, x_flat)
    x_out = x_out_flat.reshape(x_shape)
    return x_out

grad(objective_fn, x) still takes the multidimensional x array to be consistent with how the user programmed the qfunc. But it always spits out a 1-d vector (right?). So I guess x gets flattened along the way.

Do you see an issue with this? Josh and I feel it's a bit like a hack...

Circuit templates for the plugins

The plugins need better, more relevant circuit templates. The current templates are more or less just arbitrary gate sequences to have something to run tests on.

Support expectations that return more than one expectation value

There are expectations that return more than one expectation value. On some platforms (like the IBM machine) supporting them is essential to unlock the full potential of the device. Only one measurement is possible there, and if that is a single-wire Z, then all the other Z expectation values, which could in principle be measured, become inaccessible.

Currently qfuncs can return a single Expectation or a tuple of Expectations, but each Expectation needs to return a single float.

Ideally, OpenQML would support Expectations that themselves return arrays (or tuples?) of values.

From what I could see, this would require rather substantial changes in qnode.py. Ideally, I think, an expectation should be able to specify its return type and that should be used in qnode.output_type (which might have to become a tuple to support returning several Expectations with different return types from the same qfunc).

With the current code there is the following workaround: we could forget about AllPauliZ in the ProjectQ plugin and instead always measure all Z when PauliZ is requested, cache the results, and then return them in case further PauliZs are measured. This will allow the user to do something like

return qm.expectation.PauliZ(wires=0), qm.expectation.PauliZ(wires=1), ...

instead of

return qm.expectation.AllPauliZ()

Clearer/consistent attribute names

While going through pull requests related to plugins, I had some trouble parsing the use of self.wires. For clarity to plugin developers, we will change this throughout devices/plugins to self.num_wires.

Some operations are missing gradient recipes

These include:

* Two-mode squeezing
* Controlled-addition
* Controlled-phase

These should be relatively easy to add. (Added in #93.)

In addition, do we want to add gradient recipes to Gaussian state preparation? This kind of makes sense. Do we need to do anything to take into account that state preparations 'overwrite' previous operations on that wire?

(Also, does it make sense to have a gradient recipe for GaussianState? Its two parameters are both arrays.)

Qubit gradient recipes we should double check:

* PhaseShift done by @co9olguy
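As a reminder of what such a recipe encodes: for a single-parameter gate like PhaseShift, whose expectation values have the form a + b cos(θ) + c sin(θ), the two-term shift rule gives the exact derivative, df/dθ = [f(θ + s) − f(θ − s)] / (2 sin s) with s = π/2. A quick numerical check is a one-liner (sketch; f stands for any such scalar circuit):

import numpy as np

def shift_rule_gradient(f, theta, s=np.pi / 2):
    # exact for expectation values containing only the frequencies 0 and 1
    return (f(theta + s) - f(theta - s)) / (2 * np.sin(s))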

Branch pruning

Hey everyone, Nathan and I have been going through the branches, to work out which to keep, which ones we can merge, which ones we can turn into issues, and which to delete. Here is the current list I have compiled:

• ❎ build_system
@cgogolin: Branches off projectQ_plugin. Simply adds a makefile. To delete.

• ❎ parameter_pass_on
@cgogolin: branches off projectQ_plugin. A proposal to check whether a plugin requires credentials. Small; likely to be deleted and turned into an issue.

• 📥 new_intro
@co9olguy: branches off master. Can be merged back into master - essentially the latest version of master.

• ❎ projectQ_plugin
@cgogolin: branches off master. Based on Ville's old interface; recommend deleting if the new interface version is feature complete. Thoughts?

• ✔️ new_user_interface_with_projectQ_plugin
@cgogolin: branches off projectQ_plugin, with a merge from new_user_interface.
The current ProjectQ plugin branch; should be spun off into its own repo, with its own issue tracker.

• ❎ new_user_interface_suggested_minor_improvements
@cgogolin: branches off new_user_interface. I've already manually merged this into new_user_interface. Can be deleted.

• ❎ new_user_interface_pseudo_code_exampple_christian
@cgogolin: branches off new_user_interface, simply contains an additional user interface example. Can be deleted, potentially turned into an issue if needed.

• 📥 cv_gradients
@smite: branches off master, just prior to new_intro. Contains Ville's work on implementing CV gradients. Should be eventually merged/archived, and Ville should move his new code to the branch containing the new user interface.

• ❎ ui_update
@smite: branches off cv_gradients. Implements a modified user interface, by modifying Ville's original interface. Inspired by the new user interface (i.e. plugin becomes device, must define a quantum function with a return statement, explicitly create QNode, evaluate it via QNode.evaluate().). Can be moved to a github issue?

• ✔️ new_user_interface
@josh146: branches off new_intro, but essentially is a massive refactor, and cannot be easily merged back.

Does anyone have any thoughts/comments on the above list?

Measurement operations?

While finishing off the SF plugin, I noticed that we don't have any measurement operations. For instance, including these would allow for circuits with expectation values conditional on previous measurements:

def circuit(x, y):
    qm.TwoModeSqueezing(0.1, 0, wires=[0, 1])
    qm.Beamsplitter(np.pi/4, 0, wires=[0, 1])
    qm.PNR(wires=0)
    return qm.expectation.Fock(wires=1)

This would then open other possibilities, for instance we could pass post-select parameters as parameters, etc.

As a side note, does anyone have any thoughts about moving the qm.expectation observables to top-level, like the operations? On the one hand, it makes them slightly more accessible. On the other, you lose the visual cue of always having qm.expectation objects in the return statement.

Autograd seems to not compute correct gradient for SF plugin

If you look on the examples_maria branch at examples/continuout_variable/photon_redirection.py, qm.grad() does not compute the right gradient. You can find the observation in the printout. Does anyone have an idea why, before I go knee-deep into debugging?

Operator_map and observable_map

As mentioned earlier, operator_map should become a class property of the plugins, I think. Do we also want to have a map for the supported observables? If so, maybe they should be called gate_map and observable_map? If we go for that, supported() should be changed accordingly, and during execute we should then also test whether the observables are supported.

Originally posted by @cgogolin in #28 (comment)

CV gradients of second-order operators

At the moment, automatic gradients are coded up for first-order quadrature operators (x, p, etc). We still need to implement automatic gradients for 2nd-order observables (xx, xp + px, pp, n, etc) in a smart way.

Putting this issue here as a reminder to do this

Suggestions for the plugin developer side of the API

I will collect suggestions on the API for plugin developers here:

  • Currently execute() must be overridden by plugins. In the overriding function, the result must be stored in self._out. This implementation detail should be hidden from plugin developers; I think it would be better if execute() were made "final" and plugins instead had to override a function called execute_queued() that returns a value and which would then be called in execute() as follows:
def execute(self):
    self._out = self.execute_queued()

Logging in the optimizer

The Optimizer should probably consistently use log.info() instead of print() for status updates and debug output. This already enables the user (or a unit test author) to set the log level and suppress the messages, but maybe it would be nice to also offer a keyword argument to suppress log messages?
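A minimal sketch of the suggested pattern (module and logger names are just placeholders):

import logging

log = logging.getLogger("openqml.optimize")

def report_progress(iteration, cost):
    # status updates go through the logging framework instead of print();
    # users and unit tests control verbosity via the log level
    log.info("iteration %d: cost = %g", iteration, cost)

logging.basicConfig(level=logging.INFO)   # or logging.WARNING to silence the messages
report_progress(3, 0.125)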

The inputs and outputs of the quantum function circuit

This issue is related to how we pass/call the quantum function associated with a QNode. As far as I can see, we want to provide maximum freedom to the user, but are constrained by autograd/design decisions/python decorators.

From @mariaschuld:

  1. The function where the user defines the quantum circuit should be defined in the same way as it is later used. This is at the moment clashing with how autograd wants things, and we don't fully know how autograd wants things because of its poor documentation.
  2. it should have two "slots" for data inputs and arguments, since the user cannot poke around in one
  3. it should be (at least very soon) able to return the expectations of multiple qubits/qmodes, otherwise things like the variational eigensolver or multi-class variational classifier are reduced to a simplicity that takes away the power of what we want to implement

Open questions:

  1. should circuit() take batches of data? For example, when the device has some parallelisation functionality for each data point in a batch.
  2. how do we envision the user handling thousands of parameters, layered parameter structures, and possibly non-vectorial data in the future?

Documentation-friendly technical details

Currently we have some compiled literature and some internal notes on the subject of quantum gradient circuits. It's early days and this is also a research project, so things are still a bit unorganized.

Knowing that we are going to open source the software, let's begin to organize and clean these things up a bit. We will begin with establishing a document similar to the conventions page that we made while developing Strawberry Fields. Here we will refine our ideas and notation, with an eye that it will eventually become part of the documentation

Maybe throw `autograd` directly at variational circuits defined with the Fock backend?

I only just now learned about autograd, so it may be a stupid (pre-)idea, but:

Why not define a variational quantum circuit with SF and let autograd do all the differentiation?

This might seem totally absurd at first, but I imagine that, as the Fock backend uses pretty basic operations internally, this has a finite chance of success, at least for the fully Gaussian case.

Naming conventions: operation, ops, gate, expectation, observable,...

Currently (and I think this is to a good extent also my fault) we are using a rather inconsistent naming convention for the quantum operations:

Operation is the "super class", but it is also used directly for what one would usually call a "gate". The class Expectation is used for what is normally called an "observable". Such Expectations are defined in the sub-module expectation, but the Operations are defined in ops. Gates are accessible directly from the context of the main module, but observables reside in the sub-module expectation...

My suggestion would be the following:

Have an abstract base class Operation that would remain pretty much exactly what it is now. Then have two classes Gate and Observable for, well, gates and observables. Gate could be a trivial but non-abstract sub-class of Operation. Observable would likewise be a sub-class of Operation, would replace Expectation, and encapsulate the additional functionality that observables need but not gates (in a second step some of the functionality that is currently in Operation but only relevant for gates could be moved from Operation to Gate). The built-in gates and observables should then be defined in their respective sub-modules gates (what is now ops) and observables (what is now expectation).

If you don't want to make the syntax for returning expectation values too long (i.e., return qm.observables.PauliZ(wires=1); this is currently return qm.expectation.PauliZ(wires=1)), we can call the sub-module observables just obs, and for gates we can keep the convenient syntax qm.RZ(...) that allows them to be called from the top-level context.

Calling the class for observables actually Observable rather than Expectation also makes it more intuitive that some of them will not return exact expectation values, but estimates from finite statistics.

I know that this will be painful because it means another refactoring, but I have the feeling it would make things much more transparent for the users...

Implement CV Gaussian gradients

Given our previous discussions and theoretical research results, we will implement automatic differentiation of Gaussian CV circuits. These are circuits that include Gaussian gate sets and measurements of the quadratures up to order 2.

Suggestion for circuit entry syntax: bar notation

Currently the circuit input syntax takes the gate parameters and the wires/subsystems on which to apply the gate like this:

qm.Gate(p1, p2, [wire1, wire2])

I'd propose using the vertical bar syntax, like we already do in Strawberry Fields and ProjectQ:

qm.Gate(p1, p2) | [wire1, wire2]

It's about as fast to write, in my opinion a bit easier to read, and maybe less error-prone.

New plugin API method name proposal

Now that the suggested plugin API is merged (thanks @cgogolin and @co9olguy!), I propose a slight simplification of the method names:

pre_execute_queued → pre_execute

execute_queued_with → execute_with / execution_context / execution_context_manager (any preferences?)

pre_execute_operations → pre_apply

post_execute_operations → post_apply. Note that this can be merged with pre_execute_expectations, as they both are consecutive in execution.

post_execute_expectations → post_expectations

post_execute_queued → post_execute

Removing the 'queued' makes it easier to parse on first read, and potentially clearer, essentially since all three methods now take place independently of the queue (the queue is executed behind the scenes using apply()).

Determine the number of subsystems a gate acts on

While working on general unit tests for the plugins I ran into the following problem:

I wanted to implement a test that would run all circuits consisting of one gate and one measurement. The problem is: While I can easily figure out how many parameters a gate/observable wants/needs and generate random values for those, I do not know how to get the number of subsystems a gate is supposed to act on.

Is there a way to do this that I have overlooked? If not, I would suggest that the plugins should be forced to report this, for example via a subsystems(self, gate_name) method that returns the number of subsystems per gate. I imagine it can be important to have programmatic access to this also outside of unit testing, in things like genetic quantum algorithms, or to implement random circuits for approximate unitary designs and things like that.
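One possible shape for this, sketched with made-up attribute names: expose the subsystem count as a class attribute on each operation, so tests (and things like random circuit generation) can query it programmatically.

import random

class Operation:
    num_params = 1
    num_wires = 1        # number of subsystems the gate acts on

class Beamsplitter(Operation):
    num_params = 2
    num_wires = 2

op_cls = random.choice([Operation, Beamsplitter])
params = [random.uniform(0, 2) for _ in range(op_cls.num_params)]
wires = list(range(op_cls.num_wires))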

Finalize optimizer API

Being a bit behind from the vacation, I see that we still have the Optimizer.train() API. Can we change this into having a base Optimizer class from which we build AdamOptimizer, GradientDescentOptimizer, NelderMeadOptimizer...

The crucial function of those is "step", which updates the weights by one step. The Optimizer stores:

  • the cost
  • the gradient function
  • the current weights
  • some past gradients, if necessary

and has hyperparameters:

  • learning rate
  • regulariser (although this could also be manually added to the cost)
  • weight of momentum, etc.
  • ...

In particular, we should delete SGD from 0.1.
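A rough skeleton of the class structure being proposed (all names, signatures, and hyperparameters here are illustrative):

class Optimizer:
    """Base class: subclasses define how a gradient updates the weights."""

    def __init__(self, stepsize=0.01):
        self.stepsize = stepsize

    def step(self, objective_fn, x, grad_fn):
        # one update of the weights, given a function that computes the gradient
        return self.apply_grad(grad_fn(x), x)

    def apply_grad(self, grad, x):
        raise NotImplementedError

class GradientDescentOptimizer(Optimizer):
    def apply_grad(self, grad, x):
        return x - self.stepsize * grad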

Additional plugins

This is for later, I suppose, but just so we don't forget: as we want to be platform agnostic, it would be nice to have a plugin for a non-gate architecture. The recently released D-Wave Leap looks like a natural candidate. This could be a nice Master's thesis...

Optimization interface

This issue reflects an interface change, away from a single Optimizer class which the user invokes with a keyword argument, towards something more akin to TensorFlow:

openqml.AdamOptimizer(args, kwargs, etc)

The Optimizer class would then wrap these individual methods/classes as a convenience to the user.

From @mariaschuld:

Nathan, Josh and I think that it would be best to define an optimizer as an object or method that defines how to compute one step of updating the parameters, just like TensorFlow does. We can then build the machine learning functionality around it in the next versions.

Issues:

  1. Is there any overhead created by autograd that we would like to be shared between steps? Can we somehow store it in the optimizer class?
  2. How can optimizers using past gradient information keep track of past steps?

Open questions:
Instead of relying on SciPy, shall we hand code the optimizers to reduce overheads? If yes, how do we deal with computational stability and such things?
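(For reference, this is essentially the interface PennyLane ended up with; a minimal example using today's API, with a toy one-qubit cost:)

import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def cost(theta):
    qml.RX(theta, wires=0)
    return qml.expval(qml.PauliZ(0))

opt = qml.GradientDescentOptimizer(stepsize=0.4)
theta = np.array(0.1, requires_grad=True)
for _ in range(50):
    theta = opt.step(cost, theta)   # one parameter update per call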

Write more examples

Here are some examples that might be interesting and promote some of our research:

Optimization

  • elementary VQE (qubits)
  • photon redirection (qmodes, done by Ville)

ML using simple 2-d data with a quadratic decision boundary

  • QNN a la Farhi and Neven (qubits)
  • CV QNN (qmodes)
  • IBM Kernel classifier (qubits)
  • Squeezing Kernel classifier from our paper (qmodes)

What do you guys think? I'd keep them very basic, but are they still too advanced?

Another strategy would be to use examples that showcase the mixed classical/quantum node structure. Maybe we could do hybrid classical-quantum QNNs for qubits and qumodes.

Circuit construction, and passing arguments to quantum nodes

I'm currently looking at porting @smite's CV gradient code to the new_user_interface branch, but I've run into a bit of a problem.

The CV gradient code requires the ParRef/Variable to Operation mapping to be known ahead of time, so I've been modifying the QNode to bring this in line with master.

Old approach:
In the previous approach, circuit construction (i.e. evaluating the quantum function and appending operations to the device queue) is done under the QNode.__call__() method. That way, the qfunc is directly run with the numeric values/lists/arrays passed by the user, and the QNode does not need to know any of the passing details.

Upside: allows for the user to define their qfunc however they want, using a combination of positional arguments, dictionaries, nested lists/NumPy arrays, and keyword arguments:

def circuit(x, y, my_list, z=6):
    qm.Operation(x, z, my_list[7], wires=1)

Downsides: I'm not sure if this is compatible with the CV gradient code, since the ParRef dependence is not known prior to the QNode being evaluated.

New approach:
In the new approach, circuit construction (filling the device queue) occurs within __init__(), before the user passes their arguments. Thus, I need to use inspect.signature to determine the arguments of the python function, create temporary Variable(idx) objects for each parameter, and build up the queue like so.

In this case, when QNode.__call__(*args) is called by the user, I simply update the Variable value with the corresponding *args, and then call self.device.execute().
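A bare-bones sketch of that construction step (Variable is the placeholder class mentioned above; everything else is illustrative):

import inspect

class Variable:
    """Placeholder for one positional qfunc argument, resolved at call time."""
    def __init__(self, idx):
        self.idx = idx
        self.value = None

def make_placeholders(qfunc):
    # one Variable per positional parameter; the qfunc is then evaluated once
    # with these placeholders to fill the device queue
    sig = inspect.signature(qfunc)
    return [Variable(i) for i, _ in enumerate(sig.parameters)]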

However, this only works if we restrict the qfunc arguments to be positional arguments. For example, the qfunc code above will now fail, since inspect.signature has no idea that my_list is a list, and what shape it is. This will produce the following error:

TypeError: 'Variable' object does not support indexing

I'm just wondering if anyone can see any solution, and a way to preserve the original argument behaviour?

Finalize CV gradients theory

I'd like to set a deadline on this. For better or for worse, we need to know soon whether it is viable to add the CV gradients to v0.1 of openqml. I'm going to set the date of Aug 12 for this.

If we do not have a very clear picture about how the code for this would look by then, we will go forward with only numerical differentiation for CV circuits.

Misleading error messages from quantum circuits

I've come across this situation several times: I am coding up a quantum circuit in openqml, something goes wrong, and I receive the error message:
openqml.device.DeviceError: A qfunc must always conclude with a classical expectation value.

Here is a MWE that causes the error:

import openqml as qm

dev = qm.device('default.qubit', wires=1)

@qm.qfunc(dev)
def circuit(x,y,z):
    qm.Rot(x,y,z, oops)
    return qm.expectation.PauliZ(0)
    
circuit(0.)

Obviously the error is actually caused by an error inside the qfunc (the undefined name oops), and not by the expectation, so the error message is misleading.

A second error occurs when the above code is entered into a terminal, the error is thrown, and then the user types circuit(0.) a second time. Then I receive the error: DeviceError: Only one device can be active at a time. Obviously the openqml code is not failing gracefully. It would be better if, when an error is thrown, the device remained usable.

Gaussian backend claims it can do operations that are not implemented

Specifically this concerns:

  • 'DisplacedSqueezed'
  • 'TwoModeSqueezing'

At the same time, operations provided in ops.py are commented out, namely:

  • 'XDisplacement'
  • 'ZDisplacement'

I would fix it myself, but due to the move, I don't have the time this week, so I am at least taking a note.

I discovered these while working on general unit tests for the installed plugins here.

Call device.reset() only when necessary

Maybe I am missing something, but to me it looks like self.device.reset() is called explicitly in qnode.py and then again during __enter__() in Device. This causes problems for me in the ProjectQ plugin.
Can we get rid of the explicit call to reset() in qnode.py, please?

Serving credentials to plugins

OpenQML needs an infrastructure to pass on credentials (usernames, passwords, API tokens, ...) to plugins that require these to run code on external services. As we discussed on Slack, it is probably best to require the user to put these in a configuration file.

Questions:

  • How shall a plugin react if it needs credentials but didn't get them?
  • Maybe plugins should have a way of signaling to OpenQML that they need credentials? This would allow them to be automatically excluded from unit tests, and OpenQML could display an error message that explains where to put the credentials. Here one needs to keep in mind that a plugin may have different backends, only some of which might require credentials.
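One possible shape for the config-file approach (the file name, section, and keys below are assumptions for illustration only):

import configparser
import os

def load_credentials(path="~/.openqml/config.ini"):
    # read API credentials from the user's configuration file;
    # returns an empty dict if the file or section is missing
    cfg = configparser.ConfigParser()
    cfg.read(os.path.expanduser(path))
    return dict(cfg["credentials"]) if "credentials" in cfg else {}

creds = load_credentials()
if not creds:
    raise RuntimeError("This plugin needs credentials; add them to the configuration file.")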
