pennylaneai / qml

Introductions to key concepts in quantum programming, as well as tutorials and implementations from cutting-edge quantum computing research.

Home Page: https://pennylane.ai/qml

License: Apache License 2.0

Topics: quantum-machine-learning, qml, demo, tutorials, key-concepts, quantum-computing, automatic-differentiation, pytorch, tensorflow, autograd

qml's Introduction

This repository contains materials on Quantum Machine Learning and other quantum computing topics, as well as Python code demos using PennyLane, a cross-platform Python library for differentiable programming of quantum computers.

The content here is presented in the form of tutorials, demos, and how-tos. Take a dive into quantum computing with fully-coded implementations of major works.

Explore these materials on our website: https://pennylane.ai. All tutorials are fully executable, and can be downloaded as Jupyter notebooks and Python scripts.

Contributing

You can contribute by submitting a demo via a pull request that implements a recent quantum computing paper or result.

Adding demos

  • Demos are written in the form of an executable Python script.

    • Packages listed in pyproject.toml will be available for import during execution. See section below on Dependency Management for more details.
    • Matplotlib plots will be automatically rendered and displayed on the QML website.

    Note: try to keep the execution time of your script within 10 minutes.

  • If you would like to write the demo using a Jupyter notebook, you can convert the notebook to the required executable Python format by following these steps.

  • All demos should have a file name beginning with tutorial_. The Python files are saved in the demonstrations directory.

  • New demos should avoid using autograd or TensorFlow; JAX and PyTorch are recommended instead. Also, if possible, the use of lightning.qubit is recommended.

  • reStructuredText sections may be placed anywhere within the script by beginning the comment with 79 hashes (#). These are useful for breaking up large code blocks (see the skeleton sketch after this list).

  • Avoid the use of LaTeX macros. Even if they work in the deployment, they will not be displayed once the demo is published.

  • You may add figures within ReST comments by using the following syntax:

    ##############################################################################
    #.. figure:: ../_static/demonstrations_assets/<demo name>/image.png
    #    :align: center
    #    :width: 90%

    where <demo name> is a sub-directory with the name of your demo.

  • Add an author photo to the _static/authors folder. The image should be named <author name>_<author surname>.<format>. If this is a new author and their image is not a headshot, store the original image as <author name>_<author surname>_original.<format> and create a cropped headshot with the aforementioned name.

  • In the same folder, create a <author name>.txt file containing the bio, following this structure:

    .. bio:: <author name> <author surname>
     :photo: ../_static/authors/<author name>_<author surname>.<format>
    
     <author's bio>

    Note that if you want to include a middle name, it must be included in both the first and second line and in the file name.

  • Your bio will be added at the end of the demo automatically. Don't forget to end your demo with the following lines:

    ##############################################################################
    # About the author
    # ----------------
    # 
  • Lastly, your demo will need an accompanying metadata file. This file should be named the same as your Python file, but with the .py extension replaced with .metadata.json. Check out the demonstrations_metadata.md file in this repo for details on how to format that file and what to include.

  • At this point, run your script through the Black Python formatter:

    pip install black
    black -l 100 demo_new.py
  • Finally, add the metadata. The metadata is a JSON file in which we store information about the demo; the points below describe the fields you need to fill in.

    • Make sure the file name is <name of your tutorial>.metadata.json.
    • The "id" of the author will be the same as the one you chose when creating the bio.
    • Fill in the dates of publication and modification; leave them empty if you don't know them.
    • Choose the categories your demo fits into: "Getting Started", "Optimization", "Quantum Machine Learning", "Quantum Chemistry", "Devices and Performance", "Quantum Computing", "Quantum Hardware" or "Algorithms". Feel free to add more than one.
    • In previewImages, simply modify the final part of each file name to match the name of your demo. These two images will be sent to you once the review process begins; once received, upload them to the address indicated in the metadata.
    • relatedContent refers to demos related to yours. Put the corresponding id of each related demo and set the weight to 1.0.
    • If you have any doubts about a field, don't hesitate to ask your demo's reviewer.

    Don't forget to validate your metadata file as well.

    pip install check-jsonschema 'jsonschema[format]'
    check-jsonschema \
      --schemafile metadata_schemas/demo.metadata.schema.<largest_number>.json \
      demonstrations/<your_demo_name>.metadata.json

    and you are ready to submit a pull request!
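
To make the expected layout concrete, here is a minimal, hedged skeleton of a demo script. The title, device choice, and circuit are placeholders rather than an actual tutorial from this repository; only the structural conventions (the 79-hash reStructuredText comment blocks and the closing "About the author" section) follow the guidelines above.

r"""
Title of your demo
==================

A 1-3 sentence summary of the goal and outcome of the demo, with links to any
papers or resources used.
"""

##############################################################################
# A comment line of 79 hashes starts a reStructuredText block, useful for
# breaking up large code blocks with explanatory text.

import pennylane as qml
from pennylane import numpy as np

dev = qml.device("lightning.qubit", wires=2)


@qml.qnode(dev)
def circuit(theta):
    qml.RX(theta, wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(1))


print(circuit(np.pi / 4))

##############################################################################
# About the author
# ----------------
#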

To see the demo on the deployment, you can access it through the URL: once deployed, change index.html to demos/<name of your tutorial>.html in the URL. If your demo uses the latest release of PennyLane, simply make your PR against the master branch. If you instead require the cutting-edge development versions of PennyLane or any relevant plugins, make your PR against the dev branch instead.

By submitting your demo, you consent to our Privacy Policy.

Tutorial guidelines

While you are free to be as creative as you like with your demo, there are a couple of guidelines to keep in mind.

  • All contributions must be made under the Apache 2.0 license.

  • The title should be clear and concise, and if based on a paper it should be similar to the paper that is being implemented.

  • All demos should include a summary below the title. The summary should be 1-3 sentences that make clear the goal and outcome of the demo, and should link to any papers/resources used.

  • Code should be clearly commented and explained, either as a ReST-formatted comment or a standard Python comment.

  • If your content contains random variables/outputs, a fixed seed should be set for reproducibility (see the minimal sketch after this list).

  • All content must be original or free to reuse subject to license compatibility. For example, if you are implementing someone else's research, reach out first to receive permission to reproduce exact figures. Otherwise, avoid direct screenshots from papers, and instead refer to figures in the paper within the text.

  • All submissions must pass code review before being merged into the repository.
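
As an illustration of the seeding guideline above, here is a minimal sketch (the frameworks shown are examples, not requirements):

import numpy as np

np.random.seed(42)           # fixes NumPy/Autograd randomness
# If the demo also uses PyTorch or TensorFlow, seed those libraries as well:
# torch.manual_seed(42)
# tf.random.set_seed(42)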

Dependency Management

Due to the large scope of requirements in this repository, the traditional requirements.txt file is being phased out in favour of pyproject.toml, with the goal of making it easier to add and update packages.

To install all the dependencies locally, poetry needs to be installed. Please follow the official installation documentation.

Installing dependencies

Once poetry has been installed, the dependencies can be installed as follows:

make environment

Note: This Makefile target calls poetry install under the hood; you can pass any poetry arguments to it via the POETRYOPTS variable.

make environment POETRYOPTS='--sync --dry-run --verbose'

The master branch of QML uses the latest stable release of PennyLane, whereas the dev branch uses the most up-to-date version from the GitHub repository. If your demo relies on the development version, install the dev dependencies instead, which upgrades PennyLane and its various plugins to the latest commit from GitHub.

# Run this instead of running the command above
make environment UPGRADE_PL=true

Installing only the dependencies to build the website without executing demos

It is possible to build the website without executing any of the demo code using make html-norun (More details below).

To install only the base dependencies without the executable dependencies, use:

make environment BASE_ONLY=true

(This is equivalent to the previous method of pip install -r requirements_norun.txt.)

Adding / Editing dependencies

All dependencies need to be added to pyproject.toml. Unless there is a reason not to, it is recommended that all dependencies be pinned to as tight a version as possible.

Add the new dependency in the [tool.poetry.group.executable-dependencies.dependencies] section of the toml file.

Once pyproject.toml has been updated, the poetry.lock file needs to be refreshed:

poetry lock --no-update

This command ensures that the new dependency does not conflict with any other package and that the lock file resolves correctly.

The --no-update ensures existing package versions are not bumped as part of the locking process.

If the dependency change is required in prod, open the PR against master; if it's only required in dev, open the PR against the dev branch, which will be synced to master on the next release of PennyLane.

Adding / Editing PennyLane (or plugin) versions

This process is slightly different from that for other packages, because master builds use the stable releases of PennyLane as stated in the pyproject.toml file, whereas dev builds use the latest commit from GitHub.

Adding a new PennyLane package (plugin)
  • Add the package to pyproject.toml file with the other pennylane packages and pin it to the latest stable release.
  • Add the GitHub installation link to the Makefile, so it is upgraded for dev builds with the other PennyLane packages.
    • This should be under the format $$PYTHON_VENV_PATH/bin/python -m pip install --upgrade git+https://github.com/PennyLaneAI/<repo>.git#egg=<repo>;\
  • Refresh the poetry lock file by running poetry lock.

Building

To build the website locally, simply run make html. The rendered HTML files will now be available in _build/html. Open _build/html/index.html to browse the built site locally.

Note that the above command may take some time, as all demos will be executed and built! Once built, only modified demos will be re-executed/re-built.

Alternatively, you may run make html-norun to build the website without executing demos, or build only a single demo using the following command:

sphinx-build -D sphinx_gallery_conf.filename_pattern=tutorial_QGAN\.py -b html . _build

where tutorial_QGAN should be replaced with the name of the demo to build.

Building and running locally on Mac (M1)

To install dependencies on an M1 Mac and build the QML website, the following instructions may be useful.

  • If python3 is not currently installed, we recommend you install it via Homebrew:

    brew install python
  • Follow the steps from Dependency Management to set up poetry.

  • Install the base packages by running

    make environment BASE_ONLY=true

    Alternatively, you can do this in a new virtual environment using

    python -m venv [venv_name]
    cd [venv_name] && source bin/activate
    make environment BASE_ONLY=true

Once this is complete, you should be able to build the website using make html-norun. If this succeeds, the build folder should be populated with files. Open index.html in your browser to view the built site.

If you are running into the error message

command not found: sphinx-build

you may need to make the following change:

  • In the Makefile change SPHINXBUILD = sphinx-build to SPHINXBUILD = python3 -m sphinx.cmd.build.

If you are running into the error message

ModuleNotFoundError: No module named 'the-module-name'

you may need to install the module manually:

pip3 install the-module-name

Support

If you are having issues, please let us know by posting the issue on our GitHub issue tracker.

We are committed to providing a friendly, safe, and welcoming environment for all. Please read and respect the Code of Conduct.

License

The materials and demos in this repository are free and open source, released under the Apache License, Version 2.0.

The file custom_directives.py is available under the BSD 3-Clause License with Copyright (c) 2017, Pytorch contributors.

qml's People

Contributors

actions-user, agran2018, albi3ro, alvaro-at-xanadu, andreamari, andrewgardhouse, antalszava, ashishks0522, bm7878, catalinaalbornoz, co9olguy, dependabot[bot], dwierichs, glassnotes, ikurecic, isaacdevlugt, ixfoduap, jaybsoni, josh146, ketpuntog, lillian542, lucaman99, lucassilbernagel, mariaschuld, qottmann, rashidnhm, rmoyard, soranjh, thisac, trbromley


qml's Issues

Multiclass margin classifier demo has a very long execution time

The Multiclass margin classifier demo currently takes 18 minutes to execute, which is significantly too long (in the README, we recommend no longer than 10 minutes).

If there is no performance optimization we can make to the code, I recommend we convert it into a non-executable tutorial, similar to the QNN function fitting demo.

This involves:

  • Removing tutorial_ from the filename

  • Hardcoding in the output as a Python comment, like so:

##############################################################################
# .. rst-class:: sphx-glr-script-out
#
#  Out:
#
#  .. code-block:: none
#
#     First X sample (original)  : tensor([5.1000, 3.5000, 1.4000, 0.2000], dtype=torch.float64)
#     First X sample (normalized): tensor([0.8038, 0.5516, 0.2206, 0.0315], dtype=torch.float64)
#     Num params:  111
#     Iter:     1 | Cost: 0.3920648 | Acc train: 0.6428571 | Acc test: 0.5789474
#     Iter:     2 | Cost: 0.4511958 | Acc train: 0.3392857 | Acc test: 0.3157895
#     Iter:     3 | Cost: 0.3106308 | Acc train: 0.3392857 | Acc test: 0.3157895
  • Saving the generated plot, and hardcoding it into the Python script (as sketched below).
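
A sketch of how the saved plot could then be embedded, reusing the figure-directive convention from the contributing guidelines above (the image file name is a placeholder):

##############################################################################
# .. figure:: ../_static/demonstrations_assets/<demo name>/results.png
#     :align: center
#     :width: 90%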

[DEMO] Iris Classification using Angle Embedding and qml.qnn.KerasLayer

General information

Name
Hemant Gahankari

Affiliation (optional)
None

Demo information

Title
Iris Classification using Angle Embedding and qml.qnn.KerasLayer

Abstract
This example is created to explain how to pass classical data into a quantum function and convert it to quantum data. It then shows how to create a qml.qnn.KerasLayer from a QNode, train it, and check the model's performance.

Relevant links
https://colab.research.google.com/drive/13PvS2D8mxBvlNw6_5EapUU2ePKdf_K53#scrollTo=1fJWDX5LxfvB

Force browsers to clear cache

Often new demos are not viewable unless the user manually clears the browser cache. This makes it annoying to advertise new tutorials, because some users will think they're "not there".

Search bar

The collection of PennyLane demos is amazing! Yet as it continues to grow, it's becoming difficult to find specific demos. We have a nice structure organizing them into categories, and useful filters for specific topics, both of which are very helpful. Yet it often still takes time to locate specific tutorials.

Adding a search bar to the page would make it much easier to navigate and to quickly identify the location of specific demos. For example, where do I find demos on barren plateaus? On kernel methods? On VQE in different spin sectors?


[BUG] Does Tensorflow save models that have Quantum Layers?

Issue description

With import os, a fully trained Keras model will save in the .pb TensorFlow format with the command
model.save('KerasModel')
putting a saved_model.pb file into the KerasModel folder, along with the variables and assets folders and their related binary files.

Example GitHub code based on qml/demonstration/tutorial_qnn_module_tf is linked here.

If we insert a quantum layer into the Keras model, the files are not saved. Possibly it is an issue with extra options, as the error messages suggest, but I think the qml code is causing a fault in the file structure.

Any suggestions? By the way, a .h5 file can be saved, but I think it is corrupted since I can't convert it into anything else.
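
For reference, a minimal reproduction sketch of the setup described above (this is an illustrative model, not the reporter's exact code); saving works for a purely classical Sequential model, but fails once the quantum layer is inserted:

import pennylane as qml
import tensorflow as tf

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (3, n_qubits)}
qlayer = qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=n_qubits)

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(n_qubits, activation="relu"),
    qlayer,
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Train briefly on dummy data so the model is built, then attempt to save.
X = tf.random.uniform((8, n_qubits))
y = tf.constant([0, 1, 0, 1, 0, 1, 0, 1])
model.fit(X, y, epochs=1, batch_size=4, verbose=0)

model.save("QuantumAndKerasModel")  # this is where the reported error appears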

Error messages



python3 tf-tutorial_qnn_module_tf.py 
2020-11-30 22:40:35.291489: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory
2020-11-30 22:40:35.291535: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2020-11-30 22:40:37.258662: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2020-11-30 22:40:37.258710: W tensorflow/stream_executor/cuda/cuda_driver.cc:312] failed call to cuInit: UNKNOWN ERROR (303)
2020-11-30 22:40:37.258740: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (ws-835a16b2-46f7-402c-9290-1a69f9673f9d): /proc/driver/nvidia/version does not exist
2020-11-30 22:40:37.259109: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-11-30 22:40:37.274032: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2200000000 Hz
2020-11-30 22:40:37.276475: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55db3f649bd0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-11-30 22:40:37.276516: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
Epoch 1/2
30/30 - 19s - loss: 0.3244 - accuracy: 0.7667 - val_loss: 0.2635 - val_accuracy: 0.7800
Epoch 2/2
30/30 - 19s - loss: 0.2067 - accuracy: 0.8467 - val_loss: 0.2365 - val_accuracy: 0.7800
Traceback (most recent call last):
  File "tf-tutorial_qnn_module_tf.py", line 166, in <module>
    model.save('QuantumAndKerasModel') 
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 1979, in save
    signatures, options)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/keras/saving/save.py", line 134, in save_model
    signatures, options)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save.py", line 80, in save
    save_lib.save(model, filepath, signatures, options)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/saved_model/save.py", line 976, in save
    obj, export_dir, signatures, options, meta_graph_def)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/saved_model/save.py", line 1047, in _build_meta_graph
    checkpoint_graph_view)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/saved_model/signature_serialization.py", line 75, in find_function_to_export
    functions = saveable_view.list_functions(saveable_view.root)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/saved_model/save.py", line 145, in list_functions
    self._serialization_cache)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 2590, in _list_functions_for_serialization
    Model, self)._list_functions_for_serialization(serialization_cache)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 3019, in _list_functions_for_serialization
    .list_functions_for_serialization(serialization_cache))
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/base_serialization.py", line 87, in list_functions_for_serialization
    fns = self.functions_to_serialize(serialization_cache)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 79, in functions_to_serialize
    serialization_cache).functions_to_serialize)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 95, in _get_serialized_attributes
    serialization_cache)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/model_serialization.py", line 51, in _get_serialized_attributes_internal
    default_signature = save_impl.default_save_signature(self.obj)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 205, in default_save_signature
    fn.get_concrete_function()
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 1167, in get_concrete_function
    concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 1073, in _get_concrete_function_garbage_collected
    self._initialize(args, kwargs, add_initializers_to=initializers)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 697, in _initialize
    *args, **kwds))
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2855, in _get_concrete_function_internal_garbage_collected
    graph_function, _, _ = self._maybe_define_function(args, kwargs)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 3213, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 3075, in _create_graph_function
    capture_by_value=self._capture_by_value),
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 986, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 600, in wrapped_fn
    return weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/keras/saving/saving_utils.py", line 134, in _wrapped_model
    outputs = model(inputs, training=False)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 985, in __call__
    outputs = call_fn(inputs, *args, **kwargs)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/keras/engine/sequential.py", line 372, in call
    return super(Sequential, self).call(inputs, training=training, mask=mask)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/keras/engine/functional.py", line 386, in call
    inputs, training=training, mask=mask)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/keras/engine/functional.py", line 508, in _run_internal_graph
    outputs = node.layer(*args, **kwargs)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 985, in __call__
    outputs = call_fn(inputs, *args, **kwargs)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/autograph/impl/api.py", line 302, in wrapper
    return func(*args, **kwargs)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/pennylane/qnn/keras.py", line 311, in call
    for x in inputs:  # iterate over batch
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 503, in __iter__
    self._disallow_iteration()
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 494, in _disallow_iteration
    self._disallow_when_autograph_disabled("iterating over `tf.Tensor`")
  File "/workspace/.pip-modules/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 469, in _disallow_when_autograph_disabled
    " Try decorating it directly with @tf.function.".format(task))
tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph is disabled in this function. Try decorating it directly with @tf.function.

[BUG] Quanvolution demo results don't match rendered version

First noted by a user here, and confirmed independently by me using a fresh environment with the most recent version of PennyLane and contents of the qml repo requirements file.

The plots produced at the end (left) don't match the results in the demo (right). The user obtained identical plots to my locally run version (given this, and the consistency of the results, the issue doesn't appear to be one that stems from random seeds or anything).


Provide tutorial requirement information

The demonstrations page should have a section at the bottom, providing the Python environment details used to build the tutorials.

For example, something like the following:

.. rubric:: Running the tutorials

    All tutorials are built using the following requirements.

We could either include the text of requirements.txt directly (which will probably look ugly), or just link to the file on GitHub.

Updating reference format in existing pages

Currently we have a mixture of links to web media (e.g., Wikipedia pages) and academic-style citations (which appear in the docs as, e.g., [R4]). The academic citation style does not fit well because:

  • on any given page, only a subset of references are made, so the ordering can be disconcerting
  • the actual style of the referencing does not fit well into the style of the rest of the text (e.g., when we have "see [R4]", it would not be clear what [R4] is, but something like "see [Schuld & Killoran 2019]" would be more obvious)
  • clicking on a citation just takes you to a page of references, not to the desired reference itself

It would be good to decide on a more natural and seamless way to cite academic works without breaking out of the overall style we are aiming for with these pages.

Quantum circuit learning to compute option prices and their sensitivities

General information

Name
Takayuki Sakuma

Affiliation (optional)
Soka University



Demo information

Title
Quantum circuit learning to compute option prices and their sensitivities

Abstract
Quantum circuit learning is applied to computing option prices and their sensitivities. The advantage of this method is that a suitable choice of quantum circuit architecture makes it possible to compute the sensitivities analytically by applying parameter-shift rules.

Relevant links
link to demo
https://github.com/ta641/option_QCL/blob/master/qclop_tutorial.ipynb

link to paper
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3922040
SSRN-id3922040.pdf

[DEMO] Trainable Quantum Convolution

General information

Names
Denny Mattern ([email protected])
Darya Martyniuk ([email protected])
Fabian Bergmann ([email protected])
Henri Willems ([email protected])

Affiliation
Data Analytics Center at Fraunhofer Institute for Open Communication Systems (Fraunhofer FOKUS) [1]. This demo results from our research as part of the PlanQK consortium [2].


Demo information

Title
Trainable Quantum Convolution

Abstract
We implement a trainable version of Quanvolutional Neural Networks [3] using parametrized RandomCircuits. Parameters are optimized using standard gradient descent. Our code is based on the "Quanvolutional Neural Networks" demo by Andrea Mari [4].

Code
https://github.com/PlanQK/TrainableQuantumConvolution

Relevant links
[1] https://www.fokus.fraunhofer.de and https://www.data-analytics-center.org.

[2] https://www.planqk.de.

[3] Maxwell Henderson, Samriddhi Shakya, Shashindra Pradhan, Tristan Cook, "Quanvolutional Neural Networks: Powering Image Recognition with Quantum Circuits", 2019, arxiv:1904.04767.

[4] https://pennylane.ai/qml/demos/tutorial_quanvolution.html.

[DEMO] A Quantum-Enhanced LSTM Layer

General information

Name
Riccardo Di Sipio

Affiliation (optional)
Ideal.com (O5 Systems)


Demo information

Title
[DEMO] A Quantum-Enhanced LSTM Layer

Abstract
In Natural Language Processing, documents are usually presented as sequences of words. One of the most successful techniques to manipulate this kind of data is the Recurrent Neural Network architecture, and in particular a variant called Long Short-Term Memory (LSTM). Using the PennyLane library and its PyTorch interface, one can easily define an LSTM network where Variational Quantum Circuits (VQCs) replace linear operations. An application to Part-of-Speech tagging is presented in this tutorial.

Relevant links
GitHub repository: https://github.com/rdisipio/qlstm
Post on Towards Data Science: https://towardsdatascience.com/a-quantum-enhanced-lstm-layer-38a8c135dbfa

Error in tutorial_quanvolution.py

Hi,
When I try "python tutorial_quanvolution.py ", it echos this problem. Could anyone can help me?
Thanks.
2020-04-17 15:35:02.466789: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /public/software/gnu/gcc/v7.2.0/lib64:/public/software/nvidia/cudnn/9.0/lib64:/usr/lib64/nvidia:/usr/local/cuda-9.0/lib64:/usr/local/cuda-9.0/nvvm/lib64:/opt/gridview//pbs/dispatcher/lib::/usr/local/lib64:/usr/local/lib
2020-04-17 15:35:02.491135: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /public/software/gnu/gcc/v7.2.0/lib64:/public/software/nvidia/cudnn/9.0/lib64:/usr/lib64/nvidia:/usr/local/cuda-9.0/lib64:/usr/local/cuda-9.0/nvvm/lib64:/opt/gridview//pbs/dispatcher/lib::/usr/local/lib64:/usr/local/lib
2020-04-17 15:35:02.491172: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.

Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 1s 0us/step
Quantum pre-processing of train images:
50/50
Quantum pre-processing of test images:
Traceback (most recent call last):
File "tutorial_quanvolution.py", line 224, in
np.save(SAVE_PATH + "q_train_images.npy", q_train_images)
File "/home/zhangxiaochu/zharj/.local/lib/python3.6/site-packages/autograd/tracer.py", line 48, in f_wrapped
return f_raw(*args, **kwargs)
File "/public/software/bio/anaconda3/lib/python3.6/site-packages/numpy/lib/npyio.py", line 524, in save
fid = open(file, "wb")
FileNotFoundError: [Errno 2] No such file or directory: 'quanvolution/q_train_images.npy'
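
The traceback indicates that the output directory referenced by SAVE_PATH ("quanvolution/") does not exist in the working directory, so np.save cannot create the file. A minimal workaround sketch is to create the folder before running the script:

import os

SAVE_PATH = "quanvolution/"            # folder the demo writes its .npy files to
os.makedirs(SAVE_PATH, exist_ok=True)  # create it so np.save does not fail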

Improve user navigation through demos

Following the site re-design, it is now straightforward for a user landing on the main page (or main demo page) to find the entry point to the basic demos. However, there is not much guidance on where to go next.

It would be nice to have a mechanism for navigating a user through various sequences of related tutorials. The 'related demos' in the side bar is a starting point for this, but we can also do something more concrete.

For example:

  • provide navigation links to "previous/next demo" at the bottom of demos; more specific than just having "related demos"
  • provide some curated lists on the main demo page. For example, clicking a button that says "interested in quantum chemistry?" will provide a list of recommended demos focused on qchem, and any prerequisite demos, to work through
  • a collapsible sidebar on the left that displays a demo's location relative to other demos in a list, as shown in the graphic below. This would be the most challenging to organize and implement, but might also provide the best experience because it allows a user to see more context about what the demo contents may be used for


[BUG] Can't save a Pennylane-Tensorflow hybrid model



Issue description

For a Keras-PennyLane hybrid model, model.save() doesn't work; it raises an error suggesting to use x.shape as opposed to len(x). Since that call is not in my code, I am assuming it's in the PennyLane TensorFlow plug-in.

  • Expected behavior: save my trained model

  • Actual behavior: didn't save and asked me to use x.shape instead of len(x)

  • System information: MacOS, Python 3.8.5


TensorFlow equivalent of tutorial_quantum_transfer_learning.py

Issue description

I'm trying to create a TensorFlow-based equivalent of tutorial_quantum_transfer_learning.py but I get an error related to the conversion of Pennylane objects to Tensors.

  • Expected behavior: Pass values correctly?

  • Actual behavior: Returns the following error:
    ValueError: Attempt to convert a value (<error computing repr()>) with an unsupported type (<class 'pennylane.interfaces.tf.TFQNode..qnode_str'>) to a Tensor.

  • Reproduces how often: (What percentage of the time does it reproduce?)
    I'm not really sure right now what is causing the problem. The stack trace does not point to any line of my code that can create the offence. See below.

  • System information: (include your operating system, Python version, Sphinx version, etc.)
    MacOsX 10.14
    TensorFlow 2.0
    PennyLane 0.7.0

Source code and tracebacks

Training...
Train on 11640 samples, validate on 2910 samples
Epoch 1/20
  128/11640 [..............................] - ETA: 1:17Traceback (most recent call last):
  File "./train.py", line 69, in <module>
    callbacks=[],
  File "/Users/Riccardo/development/qc/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 728, in fit
    use_multiprocessing=use_multiprocessing)
  File "/Users/Riccardo/development/qc/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 324, in fit
    total_epochs=epochs)
  File "/Users/Riccardo/development/qc/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 123, in run_one_epoch
    batch_outs = execution_function(iterator)
  File "/Users/Riccardo/development/qc/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py", line 86, in execution_function
    distributed_function(input_fn))
  File "/Users/Riccardo/development/qc/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py", line 73, in distributed_function
    per_replica_function, args=(model, x, y, sample_weights))
  File "/Users/Riccardo/development/qc/lib/python3.6/site-packages/tensorflow_core/python/distribute/distribute_lib.py", line 760, in experimental_run_v2
    return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
  File "/Users/Riccardo/development/qc/lib/python3.6/site-packages/tensorflow_core/python/distribute/distribute_lib.py", line 1787, in call_for_each_replica
    return self._call_for_each_replica(fn, args, kwargs)
  File "/Users/Riccardo/development/qc/lib/python3.6/site-packages/tensorflow_core/python/distribute/distribute_lib.py", line 2132, in _call_for_each_replica
    return fn(*args, **kwargs)
  File "/Users/Riccardo/development/qc/lib/python3.6/site-packages/tensorflow_core/python/autograph/impl/api.py", line 258, in wrapper
    return func(*args, **kwargs)
  File "/Users/Riccardo/development/qc/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py", line 264, in train_on_batch
    output_loss_metrics=model._output_loss_metrics)
  File "/Users/Riccardo/development/qc/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_eager.py", line 311, in train_on_batch
    output_loss_metrics=output_loss_metrics))
  File "/Users/Riccardo/development/qc/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_eager.py", line 252, in _process_single_batch
    training=training))
  File "/Users/Riccardo/development/qc/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_eager.py", line 127, in _model_loss
    outs = model(inputs, **kwargs)
  File "/Users/Riccardo/development/qc/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 891, in __call__
    outputs = self.call(cast_inputs, *args, **kwargs)
  File "/Users/Riccardo/development/qc/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py", line 708, in call
    convert_kwargs_to_constants=base_layer_utils.call_context().saving)
  File "/Users/Riccardo/development/qc/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/network.py", line 860, in _run_internal_graph
    output_tensors = layer(computed_tensors, **kwargs)
  File "/Users/Riccardo/development/qc/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 891, in __call__
    outputs = self.call(cast_inputs, *args, **kwargs)
  File "/Users/Riccardo/development/qnlp/models.py", line 126, in call
    q = tf.dtypes.cast(q, tf.float32)
  File "/Users/Riccardo/development/qc/lib/python3.6/site-packages/tensorflow_core/python/util/dispatch.py", line 180, in wrapper
    return target(*args, **kwargs)
  File "/Users/Riccardo/development/qc/lib/python3.6/site-packages/tensorflow_core/python/ops/math_ops.py", line 702, in cast
    x = ops.convert_to_tensor(x, name="x")
  File "/Users/Riccardo/development/qc/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1184, in convert_to_tensor
    return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
  File "/Users/Riccardo/development/qc/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1242, in convert_to_tensor_v2
    as_ref=False)
  File "/Users/Riccardo/development/qc/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 1296, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/Users/Riccardo/development/qc/lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py", line 286, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/Users/Riccardo/development/qc/lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py", line 227, in constant
    allow_broadcast=True)
  File "/Users/Riccardo/development/qc/lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py", line 235, in _constant_impl
    t = convert_to_eager_tensor(value, ctx, dtype)
  File "/Users/Riccardo/development/qc/lib/python3.6/site-packages/tensorflow_core/python/framework/constant_op.py", line 96, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)
ValueError: Attempt to convert a value (<error computing repr()>) with an unsupported type (<class 'pennylane.interfaces.tf.TFQNode.<locals>.qnode_str'>) to a Tensor.

Additional information

It would be nice if you could provide some more examples of the interplay between "classical" and quantum ML using TensorFlow instead of (or alongside) PyTorch.

QAOA Maxcut currently failing on the latest version of PennyLane and autograd

The QAOA maxcut tutorial is currently failing with the following traceback:

  File "implementations/tutorial_qaoa_maxcut.py", line 208, in circuit
    U_C(gammas[i])
IndexError: index 1 is out of bounds for axis 0 with size 1

The issue seems to be passing the parameter gamma. At line 246,

neg_obj -= 0.5 * (1 - circuit(gammas, betas, edge=edge, n_layers=n_layers))

it can be seen that gamma = Autograd ArrayBox with value [0.00617866 0.00415137]. However, within the QNode at line 202,

@qml.qnode(dev)
def circuit(gammas, betas, edge=None, n_layers=1):
    # apply Hadamards to get the n qubit |+> state
    [print(2, i.val) for i in gammas if isinstance(i, qml.variable.Variable)]

we have gamma = [0.006178660018330691] - the second element is 'missing'.

[DEMO] Subspace Search Variational Quantum Eigensolver

General information

Name
Shah Ishmam Mohtashim
Turbasu Chatterjee
Arnav Das

Twitter (optional)
@IshmamShah
@Turbasu_Chatt
@arnavdas88


Demo information

Title
Subspace Search Variational Quantum Eigensolver

Abstract
The variational quantum eigensolver (VQE) is an algorithm for searching for the ground state of a quantum system. The SSVQE uses a simple technique to find the excited energy states by transforming the |0⋯0⟩ state to the ground state, another orthogonal basis state |0⋯1⟩ to the first excited state, and so on. As a demonstration, the weighted SSVQE is used to find the excited states of a transverse Ising model with 4 spins and of the hydrogen molecule.

Relevant links
https://github.com/LegacYFTw/SSVQE
https://arxiv.org/abs/1810.09434

CNN for 3D

Hi,
The tutorial_quanvolution.py demo is very useful for quantum CNNs on 2D data, such as images. Does anyone know of materials or tutorials that could be used for a 3D quantum CNN?
Thanks very much.
All the best.
Rujing

Quantum transfer learning tutorial has incorrect optimizer step order

Running the quantum transfer learning tutorial with a version of PyTorch newer than 1.1.0 provides the following warning:

/home/circleci/project/venv/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:100:
    UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`.
    In PyTorch 1.1.0 and later, you should call them in the opposite order:
    `optimizer.step()` before `lr_scheduler.step()`.  Failure to do this will result in
    PyTorch skipping the first value of the learning rate schedule.
    See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate

This should be fixed, to avoid this breaking in newer PyTorch versions.
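
A minimal sketch of the corrected ordering, using a toy PyTorch model rather than the tutorial's actual training loop:

import torch

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.9)
criterion = torch.nn.CrossEntropyLoss()

x, y = torch.randn(8, 4), torch.randint(0, 2, (8,))

for epoch in range(3):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()      # update the parameters first ...
    scheduler.step()      # ... then advance the learning-rate schedule (PyTorch >= 1.1.0)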

[BUG] Noisy circuits demo doesn't work when run on Braket devices

Issue description

In the noisy circuits demo, I get an unexpected error in the last cell when replacing the device with
dev = qml.device('braket.local.qubit', backend="braket_dm", wires=2)

There are other unrelated errors in other cells because the Braket device doesn't implement dev.state, which is outside the scope of this issue.

  • Expected behavior: (What you expect to happen)
    No errors in the last cell.

  • Actual behavior: (What actually happens)
    Error parsing ndarray; see below.

  • System information: (include your operating system, Python version, Sphinx version, etc.)
    Run on Amazon Braket notebook instance, Python version 3.7.10, PennyLane version 0.16.0

Source code and tracebacks

TypeError                                 Traceback (most recent call last)
<ipython-input-11-d0a359c35d19> in <module>
      4 
      5 for i in range(steps):
----> 6     (x, ev), cost_val = opt.step_and_cost(cost, x, ev)
      7     if i % 5 == 0 or i == steps - 1:
      8         print(f"Step: {i}    Cost: {cost_val}")

~/anaconda3/envs/Braket/lib/python3.7/site-packages/pennylane/optimize/gradient_descent.py in step_and_cost(self, objective_fn, grad_fn, *args, **kwargs)
     68         """
     69 
---> 70         g, forward = self.compute_grad(objective_fn, args, kwargs, grad_fn=grad_fn)
     71         new_args = self.apply_grad(g, args)
     72 

~/anaconda3/envs/Braket/lib/python3.7/site-packages/pennylane/optimize/gradient_descent.py in compute_grad(objective_fn, args, kwargs, grad_fn)
    125         """
    126         g = get_gradient(objective_fn) if grad_fn is None else grad_fn
--> 127         grad = g(*args, **kwargs)
    128         forward = getattr(g, "forward", None)
    129 

~/anaconda3/envs/Braket/lib/python3.7/site-packages/pennylane/_grad.py in __call__(self, *args, **kwargs)
     99         """Evaluates the gradient function, and saves the function value
    100         calculated during the forward pass in :attr:`.forward`."""
--> 101         grad_value, ans = self._get_grad_fn(args)(*args, **kwargs)
    102         self._forward = ans
    103         return grad_value

~/anaconda3/envs/Braket/lib/python3.7/site-packages/autograd/wrap_util.py in nary_f(*args, **kwargs)
     18             else:
     19                 x = tuple(args[i] for i in argnum)
---> 20             return unary_operator(unary_f, x, *nary_op_args, **nary_op_kwargs)
     21         return nary_f
     22     return nary_operator

~/anaconda3/envs/Braket/lib/python3.7/site-packages/pennylane/_grad.py in _grad_with_forward(fun, x)
    116         difference being that it returns both the gradient *and* the forward pass
    117         value."""
--> 118         vjp, ans = _make_vjp(fun, x)
    119 
    120         if not vspace(ans).size == 1:

~/anaconda3/envs/Braket/lib/python3.7/site-packages/autograd/core.py in make_vjp(fun, x)
      8 def make_vjp(fun, x):
      9     start_node = VJPNode.new_root()
---> 10     end_value, end_node =  trace(start_node, fun, x)
     11     if end_node is None:
     12         def vjp(g): return vspace(x).zeros()

~/anaconda3/envs/Braket/lib/python3.7/site-packages/autograd/tracer.py in trace(start_node, fun, x)
      8     with trace_stack.new_trace() as t:
      9         start_box = new_box(x, t, start_node)
---> 10         end_box = fun(start_box)
     11         if isbox(end_box) and end_box._trace == start_box._trace:
     12             return end_box._value, end_box._node

~/anaconda3/envs/Braket/lib/python3.7/site-packages/autograd/wrap_util.py in unary_f(x)
     13                 else:
     14                     subargs = subvals(args, zip(argnum, x))
---> 15                 return fun(*subargs, **kwargs)
     16             if isinstance(argnum, int):
     17                 x = args[argnum]

<ipython-input-9-df7c2f6664e8> in cost(x, target)
      1 def cost(x, target):
----> 2     return (damping_circuit(x) - target[0])**2

~/anaconda3/envs/Braket/lib/python3.7/site-packages/pennylane/qnode.py in __call__(self, *args, **kwargs)
    596 
    597         # execute the tape
--> 598         res = self.qtape.execute(device=self.device)
    599 
    600         # if shots was changed

~/anaconda3/envs/Braket/lib/python3.7/site-packages/pennylane/tape/tape.py in execute(self, device, params)
   1321             params = self.get_parameters()
   1322 
-> 1323         return self._execute(params, device=device)
   1324 
   1325     def execute_device(self, params, device):

~/anaconda3/envs/Braket/lib/python3.7/site-packages/autograd/tracer.py in f_wrapped(*args, **kwargs)
     42             parents = tuple(box._node for _     , box in boxed_args)
     43             argnums = tuple(argnum    for argnum, _   in boxed_args)
---> 44             ans = f_wrapped(*argvals, **kwargs)
     45             node = node_constructor(ans, f_wrapped, argvals, kwargs, argnums, parents)
     46             return new_box(ans, trace, node)

~/anaconda3/envs/Braket/lib/python3.7/site-packages/autograd/tracer.py in f_wrapped(*args, **kwargs)
     46             return new_box(ans, trace, node)
     47         else:
---> 48             return f_raw(*args, **kwargs)
     49     f_wrapped.fun = f_raw
     50     f_wrapped._is_autograd_primitive = True

~/anaconda3/envs/Braket/lib/python3.7/site-packages/pennylane/interfaces/autograd.py in _execute(self, params, device)
    163         # evaluate the tape
    164         self.set_parameters(self._all_params_unwrapped, trainable_only=False)
--> 165         res = self.execute_device(params, device=device)
    166         self.set_parameters(self._all_parameter_values, trainable_only=False)
    167 

~/anaconda3/envs/Braket/lib/python3.7/site-packages/pennylane/tape/tape.py in execute_device(self, params, device)
   1352 
   1353         if isinstance(device, qml.QubitDevice):
-> 1354             res = device.execute(self)
   1355         else:
   1356             res = device.execute(self.operations, self.observables, {})

~/anaconda3/envs/Braket/lib/python3.7/site-packages/braket/pennylane_plugin/braket_device.py in execute(self, circuit, **run_kwargs)
    189     def execute(self, circuit: CircuitGraph, **run_kwargs) -> np.ndarray:
    190         self.check_validity(circuit.operations, circuit.observables)
--> 191         self._circuit = self._pl_to_braket_circuit(circuit, **run_kwargs)
    192         self._task = self._run_task(self._circuit)
    193         return self._braket_to_pl_result(self._task.result(), circuit)

~/anaconda3/envs/Braket/lib/python3.7/site-packages/braket/pennylane_plugin/braket_device.py in _pl_to_braket_circuit(self, circuit, **run_kwargs)
    139             circuit.operations,
    140             rotations=None,  # Diagonalizing gates are applied in Braket SDK
--> 141             **run_kwargs,
    142         )
    143         for observable in circuit.observables:

~/anaconda3/envs/Braket/lib/python3.7/site-packages/braket/pennylane_plugin/braket_device.py in apply(self, operations, rotations, **run_kwargs)
    203         for operation in operations + rotations:
    204             params = [p.numpy() if isinstance(p, np.tensor) else p for p in operation.parameters]
--> 205             gate = translate_operation(operation, params)
    206             dev_wires = self.map_wires(operation.wires).tolist()
    207             ins = Instruction(gate, dev_wires)

~/anaconda3/envs/Braket/lib/python3.7/site-packages/braket/pennylane_plugin/translation.py in translate_operation(operation, parameters)
     61         Gate: The Braket gate corresponding to the given operation
     62     """
---> 63     return _translate_operation(operation, parameters)
     64 
     65 

~/anaconda3/envs/Braket/lib/python3.7/functools.py in wrapper(*args, **kw)
    838                             '1 positional argument')
    839 
--> 840         return dispatch(args[0].__class__)(*args, **kw)
    841 
    842     funcname = getattr(func, '__name__', 'singledispatch function')

~/anaconda3/envs/Braket/lib/python3.7/site-packages/braket/pennylane_plugin/translation.py in _(amplitude_damping, parameters)
    169 def _(amplitude_damping: qml.AmplitudeDamping, parameters):
    170     gamma = parameters[0]
--> 171     return noises.AmplitudeDamping(gamma)
    172 
    173 

~/anaconda3/envs/Braket/lib/python3.7/site-packages/braket/circuits/noises.py in __init__(self, gamma)
    581             gamma=gamma,
    582             qubit_count=None,
--> 583             ascii_symbols=["AD({:.2g})".format(gamma)],
    584         )
    585 

TypeError: unsupported format string passed to numpy.ndarray.__format__

Reorder community demos

It was suggested to reorder the demos on the community page in reverse chronological order, so that the most recent demos are highlighted. 🕐

[DEMO] Iris Classification using Qnode, Keras Optimizer and Loss function.

General information

Name
Hemant Gahankari

Affiliation (optional)
None

Demo information

Title
Iris Classification using Qnode and Keras Optimizer and Loss function.

Abstract
This example is created to explain how to create a quantum function and train it using a Keras optimizer directly, i.e. without using a Keras layer.

The objective is to train a quantum function to predict the classes of the Iris dataset.

Relevant links
https://colab.research.google.com/drive/17Qri3jUBpjjkhmO6ZZZNXwm511svSVPw?usp=sharing

Provide a download link for sine.txt in the function fitting tutorial

The quantum function fitting tutorial is not executed by Sphinx-Gallery, as it takes too long to execute. Unfortunately, this allowed an issue with the loaded data to be overlooked: the referenced file data/sine.txt not only has no download link, but as far as I can tell, isn't even present in the repository.

A clear download link should be added to the tutorial.

Related issue: #60

[DEMO] Quantum-Enhanced Transformer

General information

Name
Riccardo Di Sipio

Affiliation (optional)
Ideal.com (O5 Systems)


Demo information

Title
A Quantum-Enhanced Transformer

Abstract
The Transformer neural network architecture revolutionized the analysis of text. Here we show an example of a Transformer with quantum-enhanced multi-headed attention. In the quantum-enhanced version, dense layers are replaced by simple Variational Quantum Circuits. An implementation based on PennyLane and TensorFlow 2.x illustrates the basic concept.

Relevant links
Code: https://github.com/rdisipio/qtransformer
Blog: https://towardsdatascience.com/toward-a-quantum-transformer-a51566ed42c2

Make as many non-executable demos executable as possible

Currently, several demos are not executed when the website is deployed, for a few reasons:

  • Execution time: qgrnn, qonn, quantum_neural_net

  • Requires access to hardware/external services or APIs: pytorch_noise, quantum_volume

  • Tricky to install dependency on CircleCI: qsim_beyond_classical

We should attempt to make as many executable as possible, as we will otherwise be unaware if they no longer work with the latest PL version. Of the above, it is likely that pytorch_noise and quantum_neural_net no longer work.

For the ones with currently a slow execution time, perhaps porting them to use backpropagation (rather than parameter-shift) will be beneficial, and allow them to be executable.

Note: for quantum_neural_net, it should be updated to use strawberryfields.tf and the CVQNN layers (rather than hand-coding the layers manually)

Todo:

  • Check if pytorch_noise still works
  • Check if quantum_neural_net still works
  • Install qsim on CircleCI
  • See if the slow demos can be made faster using backprop etc.

[DEMO] Quantum Hybrid Circuits for TRPO/PPO-based on-policy RL

General information

Name
Abhilash Majumder (abhilash1910-Github).

Affiliation (optional)
MSCI Inc.

Twitter (optional)
abhilash1396

Image (optional)
Suggested image to use when advertising your demo on Twitter; can be provided via hyperlink or by copy/pasting directly in GitHub.
https://user-images.githubusercontent.com/30946547/127306307-492184f3-a01b-46e4-a42a-6d10e320bb38.png

Demo information

Title
Quantum PPO/TRPO- Taking gradients through experiments: LSTMs and memory proximal policy optimization for black-box quantum control
Abstract
Reinforcement learning as quantum control leverages QHCs (quantum hybrid circuits) to create optimizations for on-policy networks in deep RL. Policy-gradient-based reinforcement learning (RL) algorithms are well suited for optimizing the variational parameters of QAOA in a noise-robust fashion, opening up the way for developing RL techniques for continuous quantum control. This is advantageous for helping to mitigate and monitor the potentially unknown sources of errors in modern quantum simulators. This demo aims to provide an implementation of the PPO on-policy algorithm with QHCs for continuous control.

Relevant links
https://colab.research.google.com/drive/1wkZpEpOuZHUdI-vRxQAiDlQD455diFSs?usp=sharing

[DEMO] Layerwise learning for quantum neural networks

General information

Name
Felipe Oyarce Andrade.


Demo information

Title
Layerwise learning for quantum neural networks.

Abstract
In this project we implemented a strategy presented by Skolik et al., 2020 for effectively training quantum neural networks. In layerwise learning, the number of parameters is increased gradually by adding a few layers at a time and training them while the parameters of previously trained layers are kept frozen.
An easy way to understand this technique is to think of it as dividing the problem into smaller circuits in order to avoid falling into barren plateaus. Here, we provide a proof-of-concept implementation of this technique in PennyLane's PyTorch interface for binary classification on the MNIST dataset.

Relevant links
https://arxiv.org/abs/2006.14904
https://github.com/felipeoyarce/layerwise-learning
https://nbviewer.jupyter.org/github/felipeoyarce/layerwise-learning/blob/master/layerwise_learning/Layerwise%20learning%20-%20Image%20Classification.ipynb
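
A minimal sketch of the freezing step in PennyLane's PyTorch interface could look like the following. This is illustrative only (templates, sizes, and the cost are assumptions); the contributor's notebook linked above contains the actual implementation:

# Hypothetical sketch of layerwise learning: train the newest layer's weights
# while earlier layers are frozen (requires_grad=False).
import pennylane as qml
import torch

n_wires = 4
dev = qml.device("default.qubit", wires=n_wires)

@qml.qnode(dev, interface="torch")
def circuit(inputs, frozen_weights, new_weights):
    qml.templates.AngleEmbedding(inputs, wires=range(n_wires))
    qml.templates.BasicEntanglerLayers(frozen_weights, wires=range(n_wires))  # already trained
    qml.templates.BasicEntanglerLayers(new_weights, wires=range(n_wires))     # layer being trained
    return qml.expval(qml.PauliZ(0))

frozen_weights = torch.rand(2, n_wires, requires_grad=False)  # kept fixed
new_weights = torch.rand(1, n_wires, requires_grad=True)      # optimized

opt = torch.optim.Adam([new_weights], lr=0.01)
x = torch.rand(n_wires)
loss = (1 - circuit(x, frozen_weights, new_weights)) ** 2
loss.backward()
opt.step()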

[DEMO] Variational Quantum Circuits for Deep Reinforcement Learning

General information

Name
Samuel Yen-Chi Chen

Affiliation
Brookhaven National Laboratory, Upton NY


Demo information

Title
Variational Quantum Circuits for Deep Reinforcement Learning

Abstract
This work explores variational quantum circuits for deep reinforcement learning. Specifically, we reshape classical deep reinforcement learning techniques like experience replay and target networks into a representation based on variational quantum circuits. Moreover, we use a quantum information encoding scheme to reduce the number of model parameters compared to classical neural networks. To the best of our knowledge, this work is the first proof-of-principle demonstration of variational quantum circuits approximating the deep Q-value function for decision-making and policy-selection reinforcement learning with experience replay and a target network. In addition, our variational quantum circuits can be deployed on many near-term NISQ machines.

Relevant links
We provide a GitHub repo for future studies. The paper has been accepted by IEEE Access (see the reference below).

@article{chen19,
  title={Variational quantum circuits for deep reinforcement learning},
  author={Chen, Samuel Yen-Chi and Yang, Chao-Han Huck and Qi, Jun and Chen, Pin-Yu and Ma, Xiaoli and Goan, Hsi-Sheng},
  journal={IEEE Access},
  year={2020},
  volume={8},
  pages={141007--141024},
  publisher={IEEE}
}
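
A minimal illustration of the central object, a variational circuit whose expectation values play the role of Q-values (one per action), might read as follows. This is an assumption-laden sketch, not the authors' code:

# Hypothetical sketch: Q-values for a 2-action environment read out as
# expectation values of a variational circuit.
import pennylane as qml
from pennylane import numpy as np

n_wires = 4
dev = qml.device("default.qubit", wires=n_wires)

@qml.qnode(dev)
def q_values(state, weights):
    # encode the (already preprocessed) state into rotation angles
    qml.templates.AngleEmbedding(state, wires=range(n_wires))
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_wires))
    # one expectation value per action
    return [qml.expval(qml.PauliZ(0)), qml.expval(qml.PauliZ(1))]

weights = np.random.random((2, n_wires, 3), requires_grad=True)
state = np.array([0.1, 0.5, -0.3, 0.7])
print(q_values(state, weights))  # e.g. pick the action with the larger value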

Provide a download link for parity.txt data in the variational classifier tutorial

See PennyLaneAI/pennylane#536 for context.

When a script requires a file to be loaded (e.g., for loading data), we should make it clear to the user where to acquire this file, where to put it locally, and how to modify the script to find it.

Currently, the variational classifier does not do a good job of this (causing execution to fail for downloaded scripts). On the other hand, the transfer learning and embeddings tutorials explicitly provide download links to necessary files.

[DEMO] Quantum Machine Learning Model Predictor for Continuous Variable

General information

Name
Roberth Saénz Pérez Alvarado.

Affiliation (optional)
UNIVERSIDAD NACIONAL DE INGENIERIA (UNI, LIMA-PERU).

Demo information

Title
Quantum Machine Learning Model Predictor for Continuous Variable

Abstract
According to the paper "Predicting toxicity by quantum machine learning" (Teppei Suzuki, Michio Katouda, 2020), it is possible to predict continuous variables with the continuous-variable quantum neural network model described in Killoran et al. (2018), using 2 qubits per feature and applying encodings, variational circuits, and some linear transformations on the expectation values in order to predict values close to the real target. I adapted the example from https://pennylane.ai/qml/demos/quantum_neural_net using PennyLane, with a small dataset consisting of a 1-dimensional feature and 1 output so that the processing does not take too much time; the algorithm showed reliable results.

Relevant links
https://github.com/roberth2018/Quantum-Machine-Learning/blob/main/Quantum_Machine_Learning_Model_Predictor_for_Continuous_Variable_.ipynb
https://pennylane.ai/qml/demos/quantum_neural_net.html
https://arxiv.org/abs/2008.07715
https://arxiv.org/abs/1806.06871

[DEMO] Hybrid quantum-classical auto encoder

General information

Name
Sophie Choe

Affiliation (optional)
Portland State University

Demo information

Title
Hybrid quantum-classical auto encoder

Abstract
A hybrid Keras auto-encoder model:
Encoder: 6 Keras dense layers
Decoder: 3 PennyLane quantum layers

The loss function in the paper requires state vectors and probabilities; however, PennyLane's measurement module does not support state-vector retrieval (as of 9/24/2021). Hence the Keras mean-squared-error loss is used instead on the
output vectors of the probability measurement. To get vectors of length 3, the cutoff dimension parameter is set to 3.

Relevant links
https://github.com/sophchoe/QML/blob/main/auto_encoder_Pennylane_Keras.ipynb
https://arxiv.org/pdf/1806.06871v1.pdf
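
For readers wondering what "vectors of length 3" refers to, a minimal sketch of a CV QNode with cutoff dimension 3 returning Fock probabilities might look like the following. It assumes the pennylane-sf plugin is installed and is not the contributor's notebook code:

# Hypothetical sketch: with cutoff_dim=3, qml.probs on one mode returns a
# length-3 vector that can be compared to a target with a Keras MSE loss.
import pennylane as qml

dev = qml.device("strawberryfields.fock", wires=1, cutoff_dim=3)

@qml.qnode(dev)
def decoder(x, theta):
    qml.Displacement(x, 0.0, wires=0)  # encode the classical input
    qml.Rotation(theta, wires=0)       # trainable decoder parameter
    return qml.probs(wires=0)          # Fock-basis probabilities, length 3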

Update key concepts content pages

Context: The Key Concepts section has been turned into a glossary, but the content is somewhat out of date.

For example:

  • Automatic differentiation page: Delete or improve the delineation from symbolic and numeric differentiation
  • Circuit ansatz: Differentiate clearly between "template", "ansatz" and "architecture"

[To be continued...]

Solution: We should update and extend the pages one-by-one.

[DEMO] Iris Classification using Amplitude Embedding and qml.qnn.KerasLayer

General information

Name
Hemant Gahankari

Affiliation (optional)
None

Demo information

Title
Iris Classification using Amplitude Embedding and qml.qnn.KerasLayer

Abstract
This example explains how to pass classical data into a quantum function and convert it to quantum data. It then shows how to create a qml.qnn.KerasLayer from a QNode, train it, and check the model's performance.

Relevant links
https://colab.research.google.com/drive/12ls_GkSD2t0hr3Mx9-qzVvSWxR3-N0WI#scrollTo=4PQTkXpv52vZ
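
A minimal sketch of the ingredients described in the abstract, amplitude embedding of the classical features followed by a qml.qnn.KerasLayer, could look like this. The sizes and templates are assumptions; see the Colab for the actual demo:

# Hypothetical sketch: 4 Iris features amplitude-embedded into 2 qubits,
# wrapped as a Keras layer inside a small classification model.
import pennylane as qml
import tensorflow as tf

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.templates.AmplitudeEmbedding(inputs, wires=range(n_qubits), normalize=True)
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (3, n_qubits, 3)}
model = tf.keras.Sequential([
    qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=n_qubits),
    tf.keras.layers.Dense(3, activation="softmax"),  # 3 Iris classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")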

[DEMO] Feature maps for kernel-based quantum classifiers

General information

Name
Semyon Sinchenko

Affiliation (optional)
Moscow State University


Demo information

Title
Feature maps for kernel-based quantum classifiers

Abstract
In this tutorial we implement a few examples of feature maps for kernel-based quantum machine learning. We'll see how quantum feature maps can make linearly inseparable data separable after applying a kernel and measuring an observable. We follow the article https://arxiv.org/abs/1906.10467 and implement all of its kernel functions with PennyLane.

Relevant links
https://github.com/SemyonSinchenko/PennylaneQuantumFeatureMaps
https://arxiv.org/abs/1906.10467
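
A minimal sketch of a quantum kernel built from a feature map, the quantity such classifiers are based on, might be the following. A generic angle-embedding feature map is assumed here, not necessarily one from the article:

# Hypothetical sketch: kernel value k(x1, x2) = |<phi(x2)|phi(x1)>|^2 obtained
# by applying the feature map for x1 and the adjoint feature map for x2.
import pennylane as qml
from pennylane import numpy as np

n_wires = 2
dev = qml.device("default.qubit", wires=n_wires)

def feature_map(x):
    qml.templates.AngleEmbedding(x, wires=range(n_wires))

@qml.qnode(dev)
def kernel(x1, x2):
    feature_map(x1)
    qml.adjoint(feature_map)(x2)
    # probability of returning to |0...0> equals the squared overlap
    return qml.probs(wires=range(n_wires))

x1, x2 = np.array([0.1, 0.2]), np.array([0.3, 0.4])
print(kernel(x1, x2)[0])  # kernel value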

Investigate dependency mismatch of antlr4-python3-runtime

A user in the forum has come across what appears to be a conflict in the dependencies for the demos. Specifically, some demos require antlr4-python3-runtime==4.8, whereas others (e.g., tutorial_vqe_parallel) require v4.7.2. The user has tried running in an environment built straight from the requirements.txt file and has still run into trouble. As per the error message below, this appears to be related to pyquil, but we'll need to investigate further.

*********************
Running Python file: x-pytorch_noise
*********************

2020-11-21 02:41:09.978901: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory
2020-11-21 02:41:09.978989: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Traceback (most recent call last):
  File "x-pytorch_noise.py", line 60, in <module>
    dev = qml.device("forest.qvm", device="2q", noisy=True)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/pennylane/__init__.py", line 187, in device
    plugin_device_class = plugin_devices[name].load()
  File "/workspace/.pip-modules/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2446, in load
    self.require(*args, **kwargs)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2469, in require
    items = working_set.resolve(reqs, env, installer, extras=self.extras)
  File "/workspace/.pip-modules/lib/python3.7/site-packages/pkg_resources/__init__.py", line 775, in resolve
    raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.ContextualVersionConflict: (antlr4-python3-runtime 4.8 (/workspace/.pip-modules/lib/python3.7/site-packages), Requirement.parse('antlr4-python3-runtime<4.8,>=4.7.2'), {'pyquil'})

[DEMO] EVA (Exponential Value Approximation) algorithm

General information

Name
Guillermo Alonso-Linaje.

Affiliation (optional)
Universidad de Valladolid.

Twitter (optional)
@KetpuntoG

Image (optional)
https://github.com/KetpuntoG/EVA_Tutorial/blob/main/eva.png


Demo information

Title
EVA (Exponential Value Approximation) algorithm.

Abstract
VQE is currently one of the most widely used algorithms for optimizing problems on quantum computers. A necessary step in this algorithm is calculating the expectation value given a state, which is done by decomposing the Hamiltonian into Pauli operators and obtaining this value for each of them. In this work, we have designed an algorithm capable of estimating this value using a single circuit. A time-cost study has been carried out, and it has been found that, for certain more complex Hamiltonians, it is possible to obtain better performance than with the current methods.

Relevant links
Demo: https://github.com/KetpuntoG/EVA_Tutorial/blob/main/EVA.ipynb
Paper: https://arxiv.org/abs/2106.08731

Update README to reflect updated QML page

The current instructions for adding a contributed demo include:

[screenshot of the current contribution instructions]

Following the re-design, demonstrations.rst is no longer the correct place to put this information, and instead it goes in demos_basic.rst (or research, or community). Instructions should be updated to reflect this, and provide guidance for selecting the appropriate category.

Add a banner on the home page, with 'Start here' links

With no organization on the 'Demos' page, it is difficult for new users to identify exactly where they should start. There are a couple of potential solutions.

  • Add a banner on the home page, with 'Start here' links. Something similar to the following?

    [mock-up of the proposed banner]

  • Start organizing the demos?

  • Keep the demos as is, but 'Feature' some demos at the top of the page (similar to the home page carousel).

State preparation tutorial is too slow with the latest pyQuil 2.17

With pyQuil 2.10 and below, the state preparation tutorial (using the pyQVM) takes ~ 1.5 minutes to execute. However, with pyQuil 2.11+, this same tutorial now takes hours to finish executing.

At the same time, the latest version of PennyLane-Forest requires pyQuil 2.16 and above for QPU compatibility. As a result, the tutorial should be modified to use the QVM, or another simulator.

Add bylines to demos

Add bylines to the demos with the following information:

  • original publication date
  • date of most recent update (if any)
  • author ("PennyLane team", or external contributor where applicable)
  • PennyLane version

For the PennyLane version, we must decide whether to show the version of the most recent build, the original version, or a range of versions for which the demo will work (like ">=0.11" or something).

[DEMO] Meta-Variational Quantum Eigensolver

General information

Name
Nahum Sá

Affiliation (optional)
Centro Brasileiro de Pesquisas Físicas


Demo information

Title
Meta-Variational Quantum Eigensolver

Abstract
In this tutorial I follow the Meta-VQE paper. Meta-VQE is a variational quantum algorithm suited for NISQ devices that encodes the parameters of a Hamiltonian into a variational ansatz, from which we can obtain good estimates of the ground state of the Hamiltonian by changing only those encoded parameters.

Relevant links
Demo - https://github.com/nahumsa/pennylane-notebooks/blob/main/Meta-VQE%20Pennylane.ipynb
Original Paper - https://arxiv.org/abs/2009.13545
Alba Cervera-Lierta QHACK21 repository - https://github.com/AlbaCL/qhack21/blob/main/Meta-VQE.ipynb
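
A minimal sketch of the encoding idea, based on my reading of the abstract (the linked notebooks contain the actual implementation), might look like this, where the Hamiltonian parameter lam enters both the Hamiltonian and the ansatz:

# Hypothetical sketch: the Hamiltonian parameter `lam` is encoded into the
# ansatz alongside trainable weights, so one training run can cover a whole
# family of Hamiltonians.
import pennylane as qml
from pennylane import numpy as np

n_wires = 2
dev = qml.device("default.qubit", wires=n_wires)

def hamiltonian(lam):
    # transverse-field Ising-type Hamiltonian parametrized by lam
    return qml.Hamiltonian(
        [1.0, lam, lam],
        [qml.PauliZ(0) @ qml.PauliZ(1), qml.PauliX(0), qml.PauliX(1)],
    )

@qml.qnode(dev)
def energy(weights, lam):
    for w in range(n_wires):
        qml.RY(weights[0, w] * lam + weights[1, w], wires=w)  # encoding layer
        qml.RY(weights[2, w], wires=w)                        # processing layer
    qml.CNOT(wires=[0, 1])
    return qml.expval(hamiltonian(lam))

weights = np.random.random((3, n_wires), requires_grad=True)
print(energy(weights, lam=0.5))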

[DEMO] QCNN for Speech Recognition and Neural Saliency Analysis

General information

Name
C.-H. Huck Yang

Affiliation
Georgia Institute of Technology, Atlanta, GA


Demo information

Title
QCNN for Speech Commands Recognition.

Abstract
We provide a hybrid model trained on larger-scale acoustic features (3,000 to 10,000 features) with quantum convolutional networks using a random layer, which still provides insightful convolutional features without the encoding time cost of CPU simulation or the request-queueing time on a QPU. We further provide class activation mapping, a neural saliency analysis, on the well-trained neural models (QCNN Self-Attention vs. CNN Self-Attention) to verify that the QCNN self-attention model did learn meaningful representations. An additional Connectionist Temporal Classification (CTC) loss on character recognition is also provided for continuous speech recognition.

Relevant links

We provide a GitHub repo and Colab for future studies. A related preprint has been released and will appear in IEEE International Conference on Acoustics, Speech, & Signal Processing (ICASSP) 2021.

@article{yang2020decentralizing,
  title={Decentralizing Feature Extraction with Quantum Convolutional Neural Network for Automatic Speech Recognition},
  author={Yang, Chao-Han Huck and Qi, Jun and Chen, Samuel Yen-Chi and Chen, Pin-Yu and Siniscalchi, Sabato Marco and Ma, Xiaoli and Lee, Chin-Hui},
  journal={arXiv preprint arXiv:2010.13309},
  year={2020}
}
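
A minimal sketch of the "quantum convolution with a random layer" ingredient might look like the following. The 2x2 patch size, 4 qubits, and RandomLayers template are my assumptions, not the authors' code:

# Hypothetical sketch: a quanvolution step that maps a 2x2 patch of an
# acoustic feature map to 4 output channels via a fixed random circuit.
import pennylane as qml
from pennylane import numpy as np

n_wires = 4
dev = qml.device("default.qubit", wires=n_wires)
rand_params = np.random.uniform(0, 2 * np.pi, size=(1, n_wires))  # fixed, untrained

@qml.qnode(dev)
def quanv_patch(patch):
    # encode the 4 feature values of the patch as rotation angles
    for w in range(n_wires):
        qml.RY(np.pi * patch[w], wires=w)
    qml.templates.RandomLayers(rand_params, wires=range(n_wires))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_wires)]

print(quanv_patch(np.array([0.0, 0.2, 0.5, 1.0])))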

[FEATURE] Update VQE with Parallel QPUs demo to use Dask explicitly and remove qml.ExpvalCost

As part of a demo update (#330), we came across a demo which requires a little more work in order to update.

In this demo, we would like to replace the instances of qml.ExpvalCost with qml.expval(H), while preserving the ability to compute each term in the Hamiltonian in parallel using Dask.

Updating this demo would involve making the necessary changes to the code to remove the instance of qml.ExpvalCost, preserve the multiprocessing via Dask, and include a paragraph explaining to users how they can make use of Dask parallelization to optimize their projects.
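
A minimal sketch of what the updated pattern could look like is given below. The device name, ansatz, and scheduler are placeholders; the real demo will differ:

# Hypothetical sketch: evaluate each Hamiltonian term in its own QNode and
# dispatch the evaluations through Dask instead of using qml.ExpvalCost.
import pennylane as qml
from pennylane import numpy as np
import dask

coeffs = [0.2, -0.5]
obs = [qml.PauliZ(0) @ qml.PauliZ(1), qml.PauliX(0)]
H = qml.Hamiltonian(coeffs, obs)

def term_qnode(observable):
    dev = qml.device("lightning.qubit", wires=2)  # one device per term

    @qml.qnode(dev)
    def circuit(params):
        qml.RY(params[0], wires=0)
        qml.CNOT(wires=[0, 1])
        return qml.expval(observable)

    return circuit

def cost(params):
    # each term is built lazily and computed in parallel by Dask
    delayed = [dask.delayed(term_qnode(o))(params) for o in H.ops]
    results = dask.compute(*delayed, scheduler="threads")
    return sum(c * r for c, r in zip(H.coeffs, results))

print(cost(np.array([0.1])))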

Replace usage of `qml.init` with `Template.shape`

The init module has been deprecated, and the recommended approach for generating initial weights is to use the Template.shape method:

>>> import pennylane as qml
>>> from pennylane import numpy as np
>>> from pennylane.templates import StronglyEntanglingLayers
>>> qml.init.strong_ent_layers_normal(n_layers=3, n_wires=2)  # deprecated
>>> np.random.random(size=StronglyEntanglingLayers.shape(n_layers=3, n_wires=2))  # new approach

We should update demos using the init module to instead use the shape method.

[DEMO] Linear Regression using Angle Embedding and Single Qubit

General information

Name
Hemant Gahankari

Affiliation (optional)
None

Demo information

Title
Linear Regression using Angle Embedding and Single Qubit with qml.qnn.KerasLayer

Abstract
This example explains how to create a hybrid neural network (a mix of classical and quantum layers), train it, and get predictions from it.

The data set consists of temperature readings in degrees Centigrade and the corresponding Fahrenheit values.

The objective is to train a neural network to predict the Fahrenheit value given a Centigrade value.

Relevant links
https://colab.research.google.com/drive/1ABVtBjwcGNNIfmiwEXRdFdZ47K1vZ978?usp=sharing
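
A minimal sketch of the described setup, a single qubit with angle embedding wrapped in a qml.qnn.KerasLayer and combined with a classical layer, might be the following. This is an illustrative assumption, not the Colab code:

# Hypothetical sketch: one-feature regression (Centigrade -> Fahrenheit) with
# a single-qubit quantum layer inside a Keras model.
import pennylane as qml
import tensorflow as tf

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def qnode(inputs, weights):
    qml.templates.AngleEmbedding(inputs, wires=[0])
    qml.RY(weights[0], wires=0)
    return [qml.expval(qml.PauliZ(0))]

weight_shapes = {"weights": (1,)}
model = tf.keras.Sequential([
    qml.qnn.KerasLayer(qnode, weight_shapes, output_dim=1),
    tf.keras.layers.Dense(1),  # rescale the expectation value to Fahrenheit
])
model.compile(optimizer="adam", loss="mse")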

[BUG] Pennylane Tutorial Search Bar


Issue description

Description of the issue - Cannot see the full search bar on Safari.

[screenshot of the truncated search bar]

  • Expected behavior: The full search bar is shown.

  • Actual behavior: Only about 75% of the search bar is visible.

  • System information: None provided.

Additional information

Do not know whether this is only an issue on macOS Big Sur (11.4) or not.
