qhack2021's Introduction

Note: This repository has been frozen while we consider the submissions for the QHack Open Hackathon. We hope you enjoyed the event!

Welcome to QHack, the quantum machine learning hackathon! We're thrilled to have the opportunity to meet and work with such a large and diverse group of participants, and we look forward to interacting with you all during the event.

This year's event consists of three main components:

  • Live-streamed talks
  • The QML Challenges
  • The Open Hackathon

The up-to-date event schedule can be found here.

Power Ups and Prizes

QHack has some amazing goodies and prizes available to be won, courtesy of our sponsors.

Credits for AWS

  • Earn $250 in AWS credits: At the conclusion of our Feb 19 live stream, the top 80 teams on the scoreboard will receive $250 credits to help them build their Open Hackathon solutions on AWS. Teams can apply credits to any AWS service, including Amazon Braket where they can showcase their ideas on Rigetti, IonQ, and D-Wave hardware or with high-performance simulators in the cloud.

  • Earn $4000 in AWS credits: Teams who open an issue by Feb 24 on this GitHub repository with a description of their (in progress) Open Hackathon project are eligible for $4000 in additional AWS credits to use towards their hackathon project.

Access Sandbox's Floq Simulator

  • Alpha access to TPU-based quantum simulators: The top 50 teams in the challenge will each receive an API key for the alpha of Sandbox@Alphabet's Floq API. Discover more details about Floq@QHack here.

  • Floq Cash Prize: The team with the best usage of Floq by the end of the Open Hackathon will be eligible to receive a $2500 cash prize. See here for more details.

Grand Prize

  • Win a summer internship at CERN: The top overall team (judged by QML Challenge scoreboard ranking and Open Hackathon project) will receive up to 3 summer internship positions at CERN.

Please read our terms and conditions for official eligibility and evaluation criteria. Entry void in Quebec.

Participants in the event agree to abide by the QHack Code of Conduct.

qhack2021's People

Contributors

co9olguy, doctorperceptron, josh146

qhack2021's Issues

[Power Up] Quantum Portfolio Analysis

Team Name:

QLords

Project Description:

As a team of undergraduates relatively new to quantum machine learning, we are going to explore different methods of applying QML to the mean-variance portfolio optimisation problem for N assets. The main focus will be to implement a qGAN. There are papers on how classical GANs can be applied to portfolio optimisation (https://arxiv.org/pdf/1909.10578.pdf) and papers on the implementation of qGANs; the main paper we will use to implement this is https://www.nature.com/articles/s41534-019-0223-2.pdf.

The method is as follows. First we extract the required data from the internet and convert it into computable sequences. A quantum generator takes a time series as input and outputs the next period of the series. We train it against a classical CNN discriminator, which takes in a sequence and returns whether it is generated or real. The quantum generator and classical CNN then train each other adversarially. Once trained, the generator can produce the next period, which can be analysed to maximise gain and minimise risk.
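
A minimal PennyLane sketch of the quantum-generator half of this loop; the 4-qubit register, angle encoding, and ansatz are illustrative assumptions, not the team's actual circuit:

```python
# Sketch of a quantum generator for time-series qGAN training.
# The angle encoding, ansatz depth, and 4-qubit register are
# illustrative assumptions, not the team's actual circuit.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def generator(window, weights):
    # Encode the most recent points of the input window as rotation angles.
    for i in range(n_qubits):
        qml.RY(np.pi * window[-(i + 1)], wires=i)
    # Trainable variational layers play the role of the generator network.
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # A single expectation value serves as the generated "next period".
    return qml.expval(qml.PauliZ(0))

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
weights = np.random.uniform(0, 2 * np.pi, size=shape, requires_grad=True)
window = np.linspace(0.1, 0.9, 32)  # one normalized 32-step input sequence
print(generator(window, weights))
```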

Due to the limits of current quantum computation, we will be exploring the method as we would use it given larger computers. This may mean the results are not significant compared to a classical GAN, but we think this is a more informative project for how real data can be used. We will pass large datasets through, as one would with a large classical NN for time-series analysis. Although we may not have time to train the model enough (or with enough datasets) for the results to be competitive, we can see how the results begin to improve and extrapolate to more time and computational power.

Time permitting, we will also look at two other methods of portfolio analysis besides qGANs. The first is QAOA: constructing a Hamiltonian for the problem and searching for the optimal solution.

The second is quantum-inspired tensor networks: map the optimisation problem over the historical data to a QUBO, transform it into an MPS, and then run DMRG on it. That is, build an MPS and a Hamiltonian for the problem, then find the minimal value of the Hamiltonian with DMRG, solving the MPS from both ends while keeping one state constant, one iteration at a time. This method can be fed into the qGAN and trained alongside it.

Source code:

Initial planning and some rough implementations:
https://github.com/calumholker/quantum-portfolio-optimisation

Resource Estimate:

We will require the maximum number of qubits in order to pass in datasets as large as possible. To obtain the best results, the maximum amount of data must be processed in the minimum amount of time. We are therefore using the Floq simulator, which computes our generator circuit in a fraction of the time. Once testing is complete, we will train our qGAN on an Amazon Braket machine. Since the longer we can spend computing, the better the results, we will use all of our available budget training the model on the large datasets.

In terms of the datasets, we have extracted 22,350 sequences of length 40 from historical data for an array of different stocks. Each will be split into a length-32 part used for generation and a length-8 part used to train the discriminator. Each of these computations takes on the order of half an hour on a local simulator, compared to 45 seconds on Floq. We will clearly not be able to run all 22,350 on the machine, as several training loops are required for each. Instead we will take on the order of 100 sequences for one stock and show how the model can be used on that stock. If we complete this, we will pass in more data to see if it improves.

[Power Up] Quantum enhanced convolutional filter

Team Name:

CCH

Project Description:

The emerging field of hybrid quantum-classical algorithms joins CPUs and QPUs to speed up or improve specific calculations within a classical algorithm. This allows for shorter quantum executions that are less susceptible to the cumulative effects of noise and run well on today's devices. This is why we intend to explore the performance of a hybrid convolutional neural network model that incorporates a trainable quantum layer, effectively replacing a convolutional filter, on both quantum simulators and QPUs.

Our team proposes to design a trainable quantum convolutional filter in a quantum-classical hybrid neural network, appealing for the NISQ era, inspired by the papers Hybrid Quantum-Classical Convolutional Neural Networks [1] and Quanvolutional Neural Networks [2], but generalizing these previous works to use cloud-based QPUs.

Here is a list of the expected outcomes and questions this project will address (a code sketch of such a filter follows the list):

  • Complete benchmarking of a quantum convolutional filter (data encoding + variational ansatz) embedded in a classical neural network, in the context of an image classification task with the MNIST dataset.

  • An example of a complete workflow for training a quantum-classical CNN, interfacing PennyLane with TensorFlow/PyTorch for automatic differentiation of the quantum and classical layers, and Amazon Braket for running the workflow on a QPU.

  • With the current noise level of cloud-based QPUs, what size/depth of parametrized quantum circuit is expressive enough without its performance being buried under noise? Can we achieve a significant advantage (in terms of evaluation metrics for a fixed number of quantum vs classical parameters/weights) with today's QPUs?

  • Visual exploration of convolved features (the outputs of filters) for both quantum and classical convolutional filters.
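
A minimal sketch of such a trainable quantum filter (encoding + variational ansatz); the 2x2 window, 4-qubit register, and gate choices are illustrative assumptions kept small for readability, not the team's actual design:

```python
# Sketch of a trainable quantum filter over an image patch. A 2x2 window
# (4 qubits) keeps the example cheap; the project targets 3x3 and 5x5.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4  # one qubit per pixel of a 2x2 kernel window
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_filter(patch, weights):
    # Data encoding: pixel intensities (in [0, 1]) as RY rotation angles.
    for i, pixel in enumerate(patch):
        qml.RY(np.pi * pixel, wires=i)
    # Trainable variational ansatz standing in for the filter's weights.
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    # One output channel per qubit, like a multi-channel convolution.
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

weights = np.random.uniform(0, 2 * np.pi, size=(2, n_qubits), requires_grad=True)
patch = np.array([0.0, 0.3, 0.7, 1.0])
print(quantum_filter(patch, weights))
```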

Source code:

https://github.com/KetpuntoG/QFilters/blob/main/Qfilter4_enhanced%20(1).ipynb

Resource Estimate:

There are a few bottlenecks to explore in quantum-classical hybrid models (the number of learnable parameters in the ansatz is related to the depth of the quantum circuits, and the number of convolutions increases with image size). The quantum filters will need qubit registers of roughly 9 to 30 qubits (equivalent to an NxN kernel window; 3x3 and 5x5 are typical sizes in CNNs). Mainly, shallow quantum circuits will be executed on both simulator and hardware backends (LocalSimulator and a Rigetti QPU should be good enough) with a reasonable number of shots; many quantum computations will be performed during training if the number of epochs and the dataset size are large. For the multiple translations of the kernel around the image, we expect to parallelize this workload on Amazon Braket during the training phase to speed it up. Another aspect is to keep the classical layers shallow enough to allow efficient classical training. We also aim to run multiple benchmarks, such as the trade-off between the number of epochs and accuracy, the complexity/expressive power of the ansatz versus accuracy, the number of quantum vs classical parameters, and the time complexity of the hybrid training loop.

References

[1] https://arxiv.org/abs/1911.02998
[2] https://arxiv.org/abs/1904.04767

[ENTRY] Making Quantum Machine Learning easier by building a PennyLane wrapper.

Team Name:

Cabriella

Project Description:

We are building a custom Python library that wraps PennyLane.

The aim of this library is to make quantum machine learning easier by removing the need to hand-code hardware details such as circuits, devices, and QNodes. Our wrapper automatically customizes the circuit according to the input, removing the need to understand the physics of gates, qubit interactions, and so on. Our motivation: classical machine learning practitioners rarely have to think about hardware, so why should quantum machine learners?

The aim of this wrapper is to make QML more accessible to classical ML users, paving the way for people to easily use QML in their projects.
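
A hypothetical sketch of the wrapper idea described above; the class name and predict-style API are illustrative inventions, not the team's actual library:

```python
# Hypothetical sketch: the user supplies only data, and the wrapper builds
# the device, circuit, and QNode itself, inferring sizes from the input.
import pennylane as qml
from pennylane import numpy as np

class QuantumModel:
    def __init__(self, n_layers=2):
        self.n_layers = n_layers
        self.weights = None
        self._qnode = None

    def _build(self, n_features):
        # Infer the qubit count from the input, hiding hardware details.
        dev = qml.device("default.qubit", wires=n_features)

        @qml.qnode(dev)
        def circuit(x, weights):
            qml.AngleEmbedding(x, wires=range(n_features))
            qml.StronglyEntanglingLayers(weights, wires=range(n_features))
            return qml.expval(qml.PauliZ(0))

        shape = qml.StronglyEntanglingLayers.shape(self.n_layers, n_features)
        self.weights = np.random.uniform(0, np.pi, size=shape, requires_grad=True)
        self._qnode = circuit

    def predict(self, x):
        if self._qnode is None:
            self._build(len(x))
        return self._qnode(x, self.weights)

model = QuantumModel()
print(model.predict(np.array([0.1, 0.5, 0.9])))
```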

Presentation:

Source code:

https://github.com/SaadNaeem96/QHack-2021-by-XanaduAI/tree/main/Hackathon

[Power Up] Quantum Recommendation Systems

Team Name:

QCal

Project Description:

Goal: Implement Quantum Recommendation Systems

Quantum Recommendation Systems (QRS) was the first algorithm for recommendation systems with polylogarithmic runtime in the preference-matrix dimensions. Although it inspired the birth of a classical algorithm of similar complexity, the quantum algorithm still serves as an example of the use of quantum machine learning algorithms on real-world problems.

Given an m x n preference matrix assumed to have a good rank-k approximation, QRS uses the quantum phase estimation routine to sample from the subspace spanned by the singular vectors corresponding to the dominant singular values. With high probability, this strategy recommends relevant items to a user given their partial preferences.
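
For intuition, here is a purely classical sketch of the rank-k projection that QRS samples from (QRS achieves the sampling with phase estimation rather than an explicit SVD); the toy matrix is illustrative:

```python
# Classical sketch of the rank-k structure underlying QRS: project a user's
# partial preference row onto the top-k singular subspace, then recommend
# high-scoring unseen items.
import numpy as np

P = np.array([[5, 4, 0, 1],     # toy 3 x 4 preference matrix
              [4, 5, 1, 0],
              [0, 1, 5, 4]], dtype=float)

k = 2
U, s, Vt = np.linalg.svd(P, full_matrices=False)
P_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # best rank-k approximation

user = 0
scores = P_k[user]
unseen = P[user] == 0
print("recommend item", np.argmax(np.where(unseen, scores, -np.inf)))
```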

Source code:

https://github.com/MyEntangled/Quantum-Recommendation-System

Resource Estimate:

We would like to experiment with the algorithm under different configurations, for example with the physical backends provided by AWS Braket (superconducting qubits, ion traps, and quantum annealing). We also aim to expand the project to different encoding schemes and optimization techniques, for which the PennyLane-Braket plugin is of great help.

[Power Up] Analysis, Prediction and Evaluation of Covid-19 Datasets using Quanvolutional Neural Network

Team Name:

QTechnocrats

Project Description:

Analysis, Prediction and Evaluation of Covid-19 Datasets using Quanvolutional Neural Network

Dataset used -

We have used this dataset from Kaggle, which contains 250 training and 65 testing images, for our model.

Our Approach to the classifier Model-

The images in the dataset are real-life chest X-rays that have not been modified, so they all have different dimensions. We therefore reduced every image to a fixed size. It would be more convenient to fix the image size at 256x256, but due to limited computational resources we reduced it to 28x28.

Applying Quanvolutional Layer-

We have extended the single-layer approach of the Quanvolutional Neural Network from here to multiple layers, four in our model.
Initially each image has dimension (28x28x1); it is fed to the first quanvolutional layer and converted to (14x14x4). The 2nd layer converts it to (7x7x16), the 3rd to (3x3x64), and the 4th and last layer converts each image to a (1x1x256) data matrix.
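
This dimension bookkeeping follows from a 2x2 window with stride 2 where each 4-qubit circuit contributes four output channels per input channel (as in the PennyLane demo the team extends); a quick check:

```python
# Shape bookkeeping for the four quanvolutional layers described above,
# assuming a 2x2 window, stride 2, and 4 measured values per input channel.
def quanv_shape(h, w, c):
    return h // 2, w // 2, c * 4

shape = (28, 28, 1)
for layer in range(1, 5):
    shape = quanv_shape(*shape)
    print(f"after layer {layer}: {shape}")
# after layer 1: (14, 14, 4) ... after layer 4: (1, 1, 256)
```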

Classifier Model-

After the quanvolutional layers, we have the classifier model. It consists of two sub-classifiers, each a binary classifier, denoted 'Model-1' and 'Model-2'.

Model-1 classifies between two classes: 'Normal' and 'COVID-19/Viral Pneumonia'.
Model-2 classifies between two classes: 'COVID-19' and 'Viral Pneumonia'.


Prediction -

When predicting, we first give the input to Model-1. If it predicts 'Normal', that is the final prediction. If not, we give the same input to Model-2, which predicts whether the chest X-ray shows a COVID-19 patient or a viral pneumonia patient.

Reference -

  1. Saad Albawi, Tareq Abed Mohammed, and Saad Al-Zawi, "Understanding of a convolutional neural network", In 2017 International Conference on Engineering and Technology (ICET), pages 1-6. IEEE, 2017.
  2. Maxwell Henderson, Samriddhi Shakya, Shashindra Pradhan, and Tristan Cook, "Quanvolutional neural networks: powering image recognition with quantum circuits", Quantum Machine Intelligence, 2(1):19, 2020.
  3. Junhua Liu, Kwan Hui Lim, Kristin L Wood, Wei Huang, Chu Guo, and He-Liang Huang, "Hybrid quantum-classical convolutional neural networks", arXiv preprint arXiv:1911.02998, 2019.
  4. Quanvolutional Neural Networks, https://pennylane.ai/qml/demos/tutorial_quanvolution.html.

Source code:

Here is the presentation of the overall project.
This README file contains the full description of the project.
Here you can access the source code.

Resource Estimate:

For the training of each model, the number of circuit calls is

(shots assigned to the device) x (number of training images) x (number of iterations)

With shots = 1024, 250 training images, and 50 iterations, a quantum circuit is called 1024 x 250 x 50 = 12,800,000 times per model. Since we have two models to train, the circuit is called 25,600,000 times in total.

Better accuracy would be achieved with a larger dataset, which demands more computational resources. Note that the parameters of the quanvolutional layers are not trained here; if they were to be trained, that would add further circuit calls.

[Power Up] Maze Runners

Team Name:

Giglampshare

Project Description:

There are various proposals to apply quantum computing to reinforcement learning. This experiment serves as a proof-of-concept demonstration, testing the power of quantum circuits in decision-making by interacting with the environment. Two aspects will be studied:
(1) To compare the performance between classical and quantum machine learning in reinforcement learning tasks, a Quantum Variational Circuit (QVC) is tested against classical machine learning methods (SARSA, Q-learning, etc.).
(2) To investigate the potential improvement that could be gained from an ensemble of quantum learners.

The experiment will start with a simple (8x8) maze-solving task, aiming to show that an ensemble of weak quantum maze runners increases the chance of surviving the maze. Possible extensions to the project would be implementing quantum energy-based models, or comparing quantum and classical models in various reinforcement learning environments (such as OpenAI Gym).
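
A minimal sketch of a QVC acting as the action-value model for an 8x8 maze; the basis-state encoding and ansatz are illustrative assumptions, not the team's actual setup:

```python
# Sketch: cell index basis-encoded on 6 qubits (2^6 = 64 cells), one
# expectation value per action read out as a Q-value estimate.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 6                       # 6 bits index the 64 cells of an 8x8 maze
n_actions = 4                      # up / down / left / right
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def q_values(bits, weights):
    qml.BasisEmbedding(bits, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(i)) for i in range(n_actions)]

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
weights = np.random.uniform(0, 2 * np.pi, size=shape, requires_grad=True)

state = 42                         # index of the agent's current cell
bits = [int(b) for b in format(state, "06b")]
action = int(np.argmax(np.array(q_values(bits, weights))))
print("greedy action:", action)
```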

Source code:

https://github.com/wongwsvincent/Pennylane_quantum-variational-circuit_RL

Resource Estimate:

We intend to use the power-up prize to further investigate the algorithms and test the scalability of the algorithms.

The initial testing will be carried out with local simulators on Amazon Braket. Therefore, we only need to spend on the final test that investigates the impact of noise error from quantum devices. IonQ Q11 is chosen here for the final testing.

IonQ Q11 costs $0.30 per task + $0.01 per shot. We will request O(100) shots per task, and ~2 tasks on average are needed to perform one epoch of learning, which gives us ~$2 per epoch. Each run takes a maximum of 50 epochs, so each run costs ~$100. A total of 20 runs will be performed for the study, which gives us ~$2000.

[ENTRY] GMODuck: Using a genetic algorithm to build and optimize QML circuits

Team Name:

Gattaca

Project Description:

A genetically modified yellow quantum 🦆 competing in the QHack 2021 Open Hackathon. This 🦆 is capable of building QML circuits using genetic programming without having any idea why his model works, but he assures us it's the best.

GMO🦆 creates the best QML circuit to classify a simple dataset of blue and orange points in a 2D grid (a minimal sketch of this loop follows the steps below):

  1. Create a genome that translates into a quantum circuit - different genes encode different gates/templates
  2. Randomly generate agents (agents are quantum circuits)
  3. Train all agents and use area under ROC as fitness
  4. Do "natural selection", cross between agents to create offspring and randomly mutate agents
  5. Repeat for some generations until 🦆 has a satisfying classifier
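
A minimal sketch of steps 1-5, assuming toy genes (single-qubit rotations plus CNOT) and a sign-based toy fitness in place of the project's ROC-AUC fitness:

```python
# Toy genetic loop: genomes are lists of gate genes; fitness is the fraction
# of points the decoded circuit classifies correctly by sign.
import random
import pennylane as qml
from pennylane import numpy as np

GENES = ["RX", "RY", "RZ", "CNOT"]          # step 1: gene -> gate mapping
dev = qml.device("default.qubit", wires=2)

def decode(genome, x):
    for gene, theta in genome:
        if gene == "CNOT":
            qml.CNOT(wires=[0, 1])
        else:
            getattr(qml, gene)(theta + x, wires=0)

def fitness(genome, data):
    @qml.qnode(dev)
    def circuit(x):
        decode(genome, x)
        return qml.expval(qml.PauliZ(0))
    return np.mean([np.sign(circuit(x)) == y for x, y in data])

def random_genome():                         # step 2: random agents
    return [(random.choice(GENES), random.uniform(0, np.pi)) for _ in range(4)]

def mutate(genome):                          # step 4: random mutation
    g = list(genome)
    g[random.randrange(len(g))] = (random.choice(GENES), random.uniform(0, np.pi))
    return g

data = [(0.1, 1), (0.4, 1), (2.8, -1), (3.0, -1)]
population = [random_genome() for _ in range(8)]
for generation in range(5):                  # step 5: repeat
    population.sort(key=lambda g: -fitness(g, data))   # step 3: rank agents
    population = population[:4] + [mutate(g) for g in population[:4]]
print("best fitness:", fitness(population[0], data))
```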

Presentation:

Jupyter qhackbook

Source code:

https://github.com/diogodebastos/GMODuck

[ENTRY] DeNovo genome assembly using adiabatic quantum computing

Team Name:

RQC_team

Project Description:

De novo genome assembly is one of the most computationally expensive tasks in bioinformatics.
The problem requires reconstructing overlapping nucleotide sequences, called reads, into a single longest chain.
It can be reformulated in QUBO form, which aligns well with the computational capabilities of adiabatic quantum computing, such as the D-Wave machines available through AWS Braket.
We show the potential of reconstructing short synthetic sequences in pure quantum mode, as well as an example of reconstructing the real genome of the Phi-X174 bacteriophage in hybrid mode, which demonstrates the feasibility of the quantum approach to this problem.
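
A toy sketch of one common QUBO construction for this problem, assuming a Hamiltonian-path encoding with binary variables x[i,p] ("read i sits at position p"); the penalty weight and tiny read set are illustrative, not the team's actual model:

```python
# Toy QUBO for read ordering: overlap scores reward adjacent placement;
# penalty terms enforce one read per position and one position per read.
import itertools
import numpy as np

reads = ["ATGGC", "GGCTA", "CTAAC"]

def overlap(a, b):
    # Longest suffix of a matching a prefix of b.
    return max((k for k in range(min(len(a), len(b)), 0, -1)
                if a[-k:] == b[:k]), default=0)

n = len(reads)
var = lambda i, p: i * n + p          # flatten (read, position) to one index
Q = np.zeros((n * n, n * n))
penalty = 10.0

# Reward: overlap between the read at position p and the read at p + 1.
for i, j in itertools.permutations(range(n), 2):
    for p in range(n - 1):
        Q[var(i, p), var(j, p + 1)] -= overlap(reads[i], reads[j])

# Quadratic parts of the (sum - 1)^2 constraint penalties.
for p in range(n):
    for i, j in itertools.combinations(range(n), 2):
        Q[var(i, p), var(j, p)] += 2 * penalty   # one read per position
for i in range(n):
    for p, q in itertools.combinations(range(n), 2):
        Q[var(i, p), var(i, q)] += 2 * penalty   # one position per read
for idx in range(n * n):
    Q[idx, idx] -= 2 * penalty                   # linear parts (x^2 = x)

print(Q.shape)  # this matrix can be handed to a D-Wave sampler via Braket
```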

Presentation:

https://github.com/USM-F/quantum_genome/blob/main/Presentation.pdf

Source code:

https://github.com/USM-F/quantum_genome

[ENTRY] QAOA Optimized Maxcut Problem Used in Device Tracking — Examples in Vehicle Connection and 5G Area

Team Name:

Taiwanese High School Student

Project Description:

Our task is composed of four subtasks: sensor allocation, vehicle positioning, data synchronization, and vehicle movement simulation.

  1. Sensor allocation: use MAXCUT to allocate sensors, obtaining an approximate solution to the MAXCUT problem with the quantum approximate optimization algorithm (QAOA); a minimal QAOA MaxCut sketch follows this list.

  2. Vehicle positioning: use the allocated sensors (subtask 1) to locate the vehicle. This is equivalent to minimizing a quadratic function.

  3. Data synchronization: define the route from the server to many locations and back to the server. We use the QAOA algorithm to obtain the eigenstates of its Ising Hamiltonian.

  4. Vehicle movement simulation: add a movement instruction to the function of the yellow dot (the car).
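
A minimal PennyLane sketch of subtask 1, following the standard PennyLane QAOA workflow on a toy sensor-proximity graph (the graph is illustrative, not the team's sensor layout):

```python
# QAOA MaxCut on a toy 4-node graph: the optimized bitstring splits the
# sensors into two groups.
import networkx as nx
import pennylane as qml
from pennylane import numpy as np

graph = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])
cost_h, mixer_h = qml.qaoa.maxcut(graph)
depth = 2
dev = qml.device("default.qubit", wires=4)

def qaoa_layer(gamma, alpha):
    qml.qaoa.cost_layer(gamma, cost_h)
    qml.qaoa.mixer_layer(alpha, mixer_h)

@qml.qnode(dev)
def cost(params):
    for w in range(4):
        qml.Hadamard(wires=w)          # uniform superposition start
    qml.layer(qaoa_layer, depth, params[0], params[1])
    return qml.expval(cost_h)

opt = qml.GradientDescentOptimizer(stepsize=0.4)
params = np.array([[0.5, 0.5], [0.5, 0.5]], requires_grad=True)
for _ in range(30):
    params = opt.step(cost, params)
print("optimized cost:", cost(params))
```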

Presentation:

https://1drv.ms/p/s!AlWkfUhLa3QogiVw0Kk-PjlrlUWv

Source code:

https://github.com/leo07010/Qhack-Simulate-5G-Internet-of-Vehicles-with-QAOA

Resource Estimate:
Since we do not have free Floq simulator access or high-level Qiskit access in our country, we need to spend resources to test the final results on real AWS quantum hardware (Rigetti Aspen-8). We will run eight-qubit circuits in parallel on directly connected qubits to achieve the maximum simulation value.

250 USD will be used for the existing optimized QAOA algorithm, and to change the behavior of task 4 to a quantum random walk.

500 USD will be used to build new car models with different parameters, such as distinguishing ambulances and trains.

1000 USD will be used to expand the 2D space to 3D; because of the added Z axis, the simulation time is at least 3 times longer.

2000 USD will be used for extracting images: extracting one film for 10 seconds yields 760 x 10 = 7,600 images, and 5 films total 38,000 photos.

The remaining 250 USD will be kept as an emergency reserve.

Video link:
https://drive.google.com/drive/u/0/folders/1NbkzeKB5iUIsAeYxUSPQJf8GZsLx1eOJ

[Power Up] Towards Quantum Self-Improvement

Team Name:

Finq

Project Description:

Self-improvement describes an agent that can improve its own performance recursively. Inspired by this concept, we propose to examine an iterative setup, quantum self-improvement (QSI), which consists of (1) optimizing the gate fidelity of a noisy quantum computer (NQC) with the variational quantum gate optimization algorithm (VQGO, https://arxiv.org/abs/1810.12745) running on the NQC itself, and (2) replacing the previous NQC gate used in VQGO with the optimized one. By repeating steps (1) and (2), we may enhance both the hardware and the algorithm. Here, we demonstrate the feasibility of QSI by optimizing the CNOT gate. Our goal is to validate QSI on a physical NQC.

Meanwhile, we noticed that the originally proposed VQGO is not compatible with an NQC due to erroneous random-state sampling. We therefore improved VQGO's noise robustness by updating the reference state with a few-shot estimation of the prepared one; see draft details here.
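
A sketch of a VQGO-style cost for step (1), assuming an illustrative two-qubit ansatz and noiseless basis-state inputs; the actual algorithm averages fidelity over random input states prepared on the NQC (basis states alone also ignore relative phases):

```python
# Train a parameterized two-qubit circuit toward CNOT by minimizing
# infidelity averaged over the four computational basis states.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def ansatz(params):
    # Trainable stand-in for the noisy native gate being optimized.
    qml.Rot(*params[0], wires=0)
    qml.Rot(*params[1], wires=1)
    qml.CZ(wires=[0, 1])
    qml.Rot(*params[2], wires=0)
    qml.Rot(*params[3], wires=1)

@qml.qnode(dev)
def output_state(params, bits):
    qml.BasisState(bits, wires=[0, 1])
    ansatz(params)
    return qml.state()

def cost(params):
    loss = 0.0
    for idx in range(4):
        target = CNOT[:, idx]              # CNOT acting on basis state idx
        bits = np.array([idx // 2, idx % 2], requires_grad=False)
        out = output_state(params, bits)
        amp = np.sum(np.conj(target) * out)
        loss = loss + 1 - np.abs(amp) ** 2
    return loss / 4

params = np.random.uniform(0, 2 * np.pi, size=(4, 3), requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.2)
for _ in range(100):
    params = opt.step(cost, params)
print("average infidelity:", cost(params))
```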

Source code:

https://github.com/Shangjie-Guo/Quantum-Self-Improvement

Resource Estimate:

As Floq provides free simulator access, we only need to spend resources testing the final result on the real quantum device (Rigetti Aspen-9). We will run our two-qubit circuits in parallel on directly connected qubits to do 15 experiments per run. For our current setting, we estimate:

Optimization steps / Run = 10-100
Cost function evaluations / Optimization step = 5
State estimations / Cost function evaluation = 10-30
Shots / State estimation = 6 * (10-100)

That gives us 10^4 - 10^7 shots per run; we need more tests on the required accuracy to refine this estimate. The reward ($4,000) would allow ~10^7 shots on Aspen-9, which can cover our highest estimate. Meanwhile, we are investigating direct fidelity estimation (https://arxiv.org/abs/1104.4695) and other possible settings, which may reduce the number of shots per state estimation.

Milestones:

✅ Implement VQGO with pennylane for CNOT gate
✅ Improve VQGO compatibility for noisy random state sampling
✅ Validate QSI
✅ Integrate QSI into VQGO cycles
🔜 Upgrade to direct fidelity measurement
🔜 Broadcast two-qubit circuits for large-scale device use, including Floq and Aspen-9
🔜 Sanity check with some different noise models on Floq
🔜 Test VQGO & QSI on Aspen-9! 💸

[Power Up] Exploration of the quantum advantage of hybrid quantum-classical neural networks

Team Name:

Qming

Project Description:

The goal of this project is to explore and demonstrate the advantage of hybrid quantum-classical neural networks (QCNNs) over classical models (e.g. convolutional neural networks) by conducting three experiments.

The first experiment can be considered a warm-up for building hybrid QCNNs. We create a simple hybrid model by integrating a quantum binary classifier into a LeNet-like CNN model. We train this model on the MNIST dataset of handwritten digits. Since the task is binary classification, we select only two digit classes (0 and 1) from MNIST.

In the second experiment, we design a quantum activation function using parametrized quantum circuits and implement it in a hybrid QCNN model. The architecture is very similar to the one in the first experiment, except that the quantum activation function is integrated into the network and the quantum classifier is replaced with a classical softmax classifier (so we perform multi-class classification in this case). In addition, we build two more classical CNN models by replacing the quantum activation function with the sigmoid and tanh functions respectively. We then analyze the three activation functions by training their corresponding models on MNIST and comparing loss and accuracy curves. We observe that parameterized quantum circuits help avoid the vanishing gradient problem and can be considered a good option for activation functions in deep learning models. We also support this advantage with an analytic calculation in the context of quantum mechanics, which will be shown in the draft slideshow.
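
A sketch of the quantum-activation idea: a one-qubit parameterized circuit applied elementwise gives a trainable, bounded nonlinearity. The specific gates are an illustrative assumption, not the team's exact design:

```python
# One-qubit "quantum activation": pre-activation in as an angle, bounded
# expectation value out, with trainable shape parameters.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def qactivation(z, theta):
    qml.RY(z, wires=0)                # feed the pre-activation in as an angle
    qml.RZ(theta[0], wires=0)         # trainable shape parameters
    qml.RY(theta[1], wires=0)
    return qml.expval(qml.PauliZ(0))  # bounded output in [-1, 1]

theta = np.array([0.3, 0.8], requires_grad=True)
zs = np.linspace(-3, 3, 7)
print([float(qactivation(z, theta)) for z in zs])
```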

The last experiment is inspired by PennyLane's quantum transfer learning demo, which implements the idea of the literature. Following the idea of dressed quantum circuits in that paper, we build a hybrid model based on the pre-trained ResNet18 network and compare its performance against the classical ResNet18 model on a set of public imaging datasets. In addition to the MNIST and Hymenoptera datasets used in the original paper, we use more public datasets, including medical imaging datasets. We demonstrate the quantum advantage of hybrid models and investigate where this advantage comes from. We observe that, in light of quantum entanglement, hybrid quantum models can obtain up to a 6% increase in recognition accuracy over classical CNN models.

Source code:

draft_source_code

Note: This is a draft code for the initial entry for the AWS Power-up and will be modified and submitted by the final deadline.

Resource Estimate:

If we win the power-up prize, we will further investigate the performance of hybrid quantum-classical neural networks. We could leverage Amazon Braket's fully managed simulators for faster training, fine-tuning, and testing of hybrid models. We could also perform experiments on various types of quantum computers from different hardware providers. We have completed the first and second experiments mentioned in the project description. For the last experiment, we have finished the model performance comparisons on the MNIST, Hymenoptera, and Brain Tumor datasets and observed the quantum advantage of hybrid models in light of quantum entanglement. We would like to extend this model evaluation to datasets in more industrial scenarios (e.g., manufacturing, retail, finance). If we can still obtain promising results for hybrid models on those additional datasets by leveraging the Amazon Braket service, it will be more convincing to conclude that hybrid quantum-classical algorithms generally improve the performance of classical machine learning models on image classification tasks. We would then summarize our work and write a research paper. In particular, we would emphasize that our work is built on PennyLane and Amazon Braket and encourage more people to use them for exploring and building quantum algorithms.

Resource estimation is given as below:

SV1 simulator:

Task1:
simulation charge: $1.125 = $0.075 / minute x 15 minutes

Task2:
simulation charge: $1.5 = $0.075 / minute x 20 minutes

Aspen-8:

Task3:
Three-qubit quantum circuit with two trainable parameters
Number of circuit evaluations per iteration (one evaluation for the forward pass and 2p evaluations for gradients, where p is the number of parameters): 2 x 2 + 1 = 5
Shots charges per iteration: 5 x 1,000 shots x $0.00035 / shot = $1.75
Shots charges per epoch: $1.75 x 112 = $196
Total shot charge: $196 / epoch x 20 epochs = $3920
Task charges: 1 task x $0.30 / task = $0.30

Total charges: $1.125 + $1.5 + $3920 + $0.30 = $3922.925

[Power Up] Performance Analysis of Tensor Network Ansaetze on Tensor Network Simulators, State Vector Simulators and Quantum Hardware.

Team Name:

The Racing Scarecrow

Project Description:

This project aims to explore and benchmark the performance of tensor network simulators in training tensor-network-based ansaetze for discriminative and generative tasks. In particular, quantum circuit ansaetze based on Tree Tensor Networks (TTN) and Matrix Product States (MPS) are to be explored, as described in this paper. Two versions of each ansatz are to be considered, viz. the original model and a qubit-frugal model based on qubit recycling.

To evaluate the performance of the above-mentioned ansaetze, this dataset is expected to be used for predicting rainfall in Australia. For generative models, the bars-and-stripes dataset is to be used. The same training algorithm is to be run on a tensor network simulator, a state vector simulator, and quantum hardware, and the run times and accuracies compared.
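
A minimal sketch of an MPS-style circuit ansatz of the kind described: two-qubit blocks applied in a staircase, with the label read off the last qubit of the chain. Block contents and depth are illustrative assumptions:

```python
# Staircase of two-qubit blocks over wires (0,1), (1,2), (2,3): the circuit
# analogue of a matrix product state.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

def block(weights, wires):
    qml.RY(weights[0], wires=wires[0])
    qml.RY(weights[1], wires=wires[1])
    qml.CNOT(wires=wires)

@qml.qnode(dev)
def mps_classifier(x, weights):
    qml.AngleEmbedding(x, wires=range(n_qubits))
    for i in range(n_qubits - 1):
        block(weights[i], wires=[i, i + 1])
    # Read the class label off the final qubit of the chain.
    return qml.expval(qml.PauliZ(n_qubits - 1))

weights = np.random.uniform(0, np.pi, size=(n_qubits - 1, 2), requires_grad=True)
x = np.array([0.1, 0.5, 0.9, 0.2])
print(mps_classifier(x, weights))
```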

Source code:

Github Repo Link - Work in Progress

Resource Estimate:

The following ansaetze are to be explored:

  • TTN based Ansatz without qubit recycle.
  • TTN based Ansatz with qubit recycle.
  • MPS based Ansatz without qubit recycle.
  • MPS based Ansatz with qubit recycle.

The following simulations are to be performed for each of the above ansaetze:

  • At least one Tensor Network Simulator based training (Floq TPU / Braket TN1 Simulator, preferably both).
  • Training on Braket SV1.
  • Training on Braket Hardware (Aspen-9 / IonQ).

Projected future work after the hackathon:

  • Explore Tensor Network based generative models.
  • Explore more Tensor Network based ansaetze such as MERA and PEPS.
  • Explore applications to dynamic simulations of 2D spin lattices.

[Power Up] Geflo

Team Name:

geflo

Project Description:

geflo is a program & CLI to aid those doing decentralized cognitive research. It consists of two main components:

  1. Model: a multi-class variational classifier to categorize fNIRS brain-headset data from 3 different session variants
  2. Market: saves the weights so they can be transferred for model tuning on additional private datasets, a type of transfer learning

The CLI gives a user the ability to interact with models by purchasing pre-trained ones from a market stored on the blockchain, possibly mixing in pricing in the future.

With these two pieces, computation & parameter sharing, the hope is that this pipeline can be applied to various use cases that require a pre-trained model of cognitive data (e.g. chemistry, etc.).

Source code:

https://github.com/deep6org/geflo

Resource Estimate:

The hypothesis is that if you're analyzing a wave with multiple underlying frequencies, you want to be able to do so in many ways. Currently the project uses 3 qubits to learn on 3 energy bins and takes over 20 minutes per iteration to train on a local simulator. The team would like to run such a circuit on a remote device to decrease training & development time.

[Power Up] I. Protein Folding with Coined Szegedy Quantum Walk and II. VQSE

Team Name:

MQS

Project Description:

Project I: Folding of dipeptides with a coined Szegedy quantum walk

Prediction of the torsion angles between amino acids in a peptide/protein with quantum walks as a proof of concept for polynomial quantum advantage.

Project II: Virtual Quantum Subspace Expansion (VQSE)

Implementing a VQSE algorithm to improve the accuracy of Variational Quantum Eigensolver (VQE) calculations for molecules.

Full description of both projects: https://github.com/MQSdk/QHack_Open_Hackathon

Source code:

Project I: https://github.com/MQSdk/QHack_Open_Hackathon/blob/main/Project1_ProteinFolding/szegedy_circuit.py

Project II:

https://github.com/MQSdk/QHack_Open_Hackathon/tree/main/project_2_vqse

Jupyter Notebook: https://github.com/MQSdk/QHack_Open_Hackathon/blob/main/project_2_vqse/potential_energy_curve.ipynb

Resource Estimate:

Project I:

The authors of QFold reserved 3 hours of usage on an IBM Q Casablanca processor with a quantum volume of 32. They were able to run 25 so-called 'jobs' with β(t) = (0,0) and 20 jobs for 8 arbitrarily chosen dipeptides. Each run consisted of 8192 repetitions of the circuit and an equal number of measurements, which means that for each dipeptide they ran a total of 163,840 circuits, and 204,800 for the β(t) = (0,0) baseline.

We want to test our implementation with as many structures as we can generate and see where we hit the boundary of the available quantum hardware.

Project II:

We want to run the VQSE algorithm with bigger molecules such as LiH, Li2, and N2.
It is hard for us to estimate the needed resources; based on publications, N2 hits the limit of the available hardware. We would like to see how the computational demand scales with LiH and Li2. So far we have implemented the algorithm for H2; see the Jupyter Notebook above (https://github.com/MQSdk/QHack_Open_Hackathon/blob/main/project_2_vqse/potential_energy_curve.ipynb).

[Power Up] Trainable Quantum Kernels with PennyLane

Team Name:

Notorious FUB

Project Description:

A central bottleneck of kernel-based machine learning is the choice of the kernel itself. This problem, called model selection, saw some novel approaches in the 2000s, when several quantities were proposed as kernel quality estimators. It was proved that a kernel with high polarization or alignment would have good classification and generalisation behavior. One would use this instead of an exhaustive parameter search, e.g. when choosing the variance of the ever-present Gaussian kernels.

Recent research has studied the use and performance of quantum kernels in learning models. We propose to leverage the theoretical results from those early papers to study the viability of trainable quantum kernels. We attempt this through a full-stack lens, providing a general-purpose new module for PennyLane for further implementation of kernel methods. This includes out-of-the-box methods, e.g., for dealing with noise in the kernel matrix estimation and for maximizing kernel alignment.
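
For reference, the kernel-target alignment mentioned above is A(K, y) = <K, yy^T>_F / (||K||_F ||yy^T||_F); maximizing it over the kernel's variational parameters is the training signal. A small numpy check:

```python
# Kernel-target alignment: how well a kernel matrix K matches the ideal
# label kernel yy^T for labels y in {-1, +1}.
import numpy as np

def target_alignment(K, y):
    T = np.outer(y, y)
    return np.sum(K * T) / (np.linalg.norm(K) * np.linalg.norm(T))

y = np.array([1, 1, -1, -1])
K_good = np.outer(y, y).astype(float)   # kernel matching the label structure
K_flat = np.ones((4, 4))                # uninformative kernel
print(target_alignment(K_good, y))      # 1.0
print(target_alignment(K_flat, y))      # 0.0
```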

Source code:

https://github.com/thubregtsen/qhack

https://github.com/thubregtsen/qhack/tree/master/wp_TK

PennyLaneAI/pennylane#1102

Resource Estimate:

The pipeline we have defined includes three steps: optimizing a few variational parameters, training an SVM, and using the support vectors to predict the values of unseen data. We then have to repeat this process for several datasets.

After testing datasets of different sizes and predicting the effect of size on the number of circuit evaluations required for the three training steps, we have concluded we need 9 datasets and 3000 circuit evaluations of 400 shots each. This amounts to roughly $420 per dataset on Rigetti's devices, so $3780 in total. Including test runs to ensure the viability of the entire pipeline, this should be raised to $4000. We would therefore greatly benefit from the Power Up, as it would enable us to implement our pipeline in a realistic setting.

[ENTRY] Hybrid quantum-classical neural networks for self-driving cars

Team Name:

DK02

Project Description:

This project aims to test and evaluate the current capabilities of variational quantum circuits using a hybrid ML approach and a simplified, simulated self-driving use case.

The idea is to train a classical ML model containing multiple CNN and dense layers to predict the car's steering angle from images. After a good model has been trained, some of the weights are transferred (transfer learning) to a new architecture in which a dense layer is exchanged for a variational quantum circuit. Only the weights and parameters that have not been transferred are trained. Furthermore, various quantum circuits are trained and evaluated.
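
A minimal sketch of that layer swap using PennyLane's TorchLayer; the sizes and ansatz are illustrative assumptions, not the team's trained network:

```python
# Replace one dense layer of the steering-angle head with a QNode wrapped
# as a torch layer.
import torch
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

qlayer = qml.qnn.TorchLayer(circuit, {"weights": (2, n_qubits)})

head = torch.nn.Sequential(
    torch.nn.Linear(64, n_qubits),   # transferred classical features in
    qlayer,                          # variational circuit replaces a dense layer
    torch.nn.Linear(n_qubits, 1),    # steering angle out
)
print(head(torch.rand(1, 64)))
```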

Presentation:

https://github.com/DenisKatic/SelfDrivingQuantumHybrid/blob/main/documents/QHack_2021_DK02.pdf

Source code:

https://github.com/DenisKatic/SelfDrivingQuantumHybrid

Investigating the effects of quantum layers in machine learning by building a custom PennyLane wrapper.

Team Name:

Cabriella

Project Description:

Here we investigate how including quantum layers in a machine learning model affects the results. To do this, we employ a custom Python library that wraps PennyLane.

The aim of this library is to make quantum machine learning easier by removing the need to hand-code hardware details such as circuits, devices, and QNodes; our library automatically customizes them according to the input. Our motivation: classical machine learning practitioners rarely have to think about hardware, so why should quantum machine learners?

Equipped with this library, we will be able to efficiently test different types of quantum models to understand how the results are affected.

Source code:

https://github.com/SaadNaeem96/QHack-2021-by-XanaduAI/tree/main/Hackathon

Resource Estimate:

The suggested method includes 1) building the Python library and 2) investigating how quantum layers change classical machine learning results.

Our usage of resources will include:

  1. For 1), we will study 500 existing kinds of circuits for each type of machine learning model (CNN, ANN, decision tree, LSTM, etc.).
  2. For 2), we will test the results of 2000 different types of classical machine learning models by adding a variable number of quantum machine learning layers/nodes.

We intend to use the power-up prize to further investigate the algorithms and try different approaches to increase the accuracy of our model using simulators and quantum hardware provided by AWS.

  1. Tensor network simulator based training (Floq TPU / Braket TN1 simulator).
  2. Training on Braket SV1.

[Power Up] DQN with Quantum Variational Circuits

Team Name:

DAC

Project Description:

The algorithm implements the pseudocode described in Reinforcement Learning With Quantum Variational Circuits: a reinforcement learning algorithm based on a quantum DNN, more specifically DQN. The network architecture is the same as in the paper, and the algorithm follows the classical DQN structure, using PennyLane to build and measure our qubits.
The training is done in the Blackjack environment for simplicity, but we would like to extend it to more challenging environments.

Source code:

https://github.com/carlosamds/qhack-dac

Resource Estimate:

If awarded the additional AWS credits, we intend to run many more tests of the circuit architecture and try harder Gym environments. Using Amazon Braket services, more of those tests would be possible and they could be finished much quicker.

[Power Up] Variational Structure Quantum Generative Adversarial Networks

Team Name:

Penn Ave Fish Company

Project Description:

A prerequisite for quantum algorithms to outperform their classical counterparts lies in the ability to efficiently load the classical input of the algorithms into quantum states. However, preparing a generic quantum state exactly requires O(2^n) gates [1], which can impede the power of quantum algorithms before they come into play. For practical purposes, Quantum Machine Learning (QML) can be adopted to approximate the desired loading channel via training. The Quantum Generative Adversarial Network (qGAN) in particular has shown great promise in accomplishing the task with O(poly(n)) gates [2]. Similar to its classical counterpart, a qGAN consists of both a generator for synthesizing data to match the real data and a discriminator for discerning real data from the product of the generator. The difference between the two is that the qGAN has a quantum generator for approximating the quantum state, while the discriminator can be either classical or quantum depending on whether the input data is classical or quantum [3]. Generally, the qGAN trains its generator and discriminator alternately in the form of a zero-sum game, and ends the training when the relative entropy (i.e. the difference between the real data and the synthesized data, one measure of training performance) converges to ~0 [2].

For our project, we aim to use the quantum advantage of qGANs to demonstrate efficient loading of a multi-dimensional classical distribution with a classical discriminator. We will use PennyLane and its Cirq plugin to construct the quantum circuit for the task. Simulation and training will be carried out using the Floq simulator and TensorFlow respectively. In the first stage of the project, we will complete working code for this goal. Then we will focus on optimizing the rate of convergence. Ideas under development include: 1) optimizing the starting distribution for learning; 2) looking into structural variation between the generator and the discriminator. More specifically, to learn a set of randomly IID-generated distributions from another distribution consecutively, one may take the windowed average of the convergence values as input for the next sample. On the other hand, one may employ stochastic gradient descent (SGD) on the generator and discriminator alternately to optimize the structure. The optimization ideas are not finalized and will be completed as time permits. Finally, we will benchmark the convergence performance and the fidelity of our qGAN on the IonQ Q11 device and the TN1 simulator, showing the improvements from optimization.
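
A small sketch of the convergence criterion mentioned above: the relative entropy between the target distribution and the generator's measured output, which training should drive toward ~0. The distributions here are toy values:

```python
# Relative entropy (KL divergence) between target and generated distributions.
import numpy as np

def relative_entropy(p, q, eps=1e-12):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

target = np.array([0.1, 0.2, 0.4, 0.3])         # classical distribution to load
generated = np.array([0.15, 0.25, 0.35, 0.25])  # measured generator output
print(relative_entropy(target, generated))      # -> 0 as training converges
```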

Source code:

https://github.com/Allenator/variational-structure-qgan-draft/

Resource Estimate:

The initial testing will be carried out on Floq, which offers free access. Therefore, we only need to spend on simulations for testing the final results. Due to potential issues with the lackluster connectivity of Rigetti Aspen-9, we choose IonQ Q11 for final testing. IonQ Q11 costs:

  • $0.30 per task + $0.01 per shot

We will need ~10^3 shots per task to study the distribution spread across O(2^n) states, which gives us:

  • ~$10.30 per task

~400 tasks are required for the study of variational structures, which gives us in total:

  • ~$4000 cost

We also plan to use TN1 for cross-checks. Given that TN1 costs $0.275/min and we require <= 50 qubits and <= 100 gate depth, with five 10-qubit circuits running in parallel for ~18 hours we will only arrive at ~$250. This is modest compared to the IonQ cost and can be covered by our previous AWS credits from the challenge.

Reference:

[1] Grover, L. K. Synthesis of quantum superpositions by quantum computation. Phys. Rev. Lett. 85, 1334–1337 (2000).
[2] Zoufal, C., Lucchi, A. & Woerner, S. Quantum Generative Adversarial Networks for learning and loading random distributions. npj Quantum Inf 5, 103 (2019).
[3] PennyLane dev team, Quantum Generative Adversarial Networks with Cirq + TensorFlow (2021).

[Power Up] Determining Efficient Multiplication Circuits via Brute Force

Team Name:

Sloppy Joe Pirates

Project Description:

Introducing mulbrute: a tool that brute-forces every possible combination of PauliX/CNOT/Toffoli gates in an attempt to find more efficient quantum multiplication circuits.

It includes a terminal UI (blessed) to track the progress of the brute force. Under the hood it uses PennyLane's qubit simulator to validate the multiplication truth tables.

So far it has been able to determine two different 1-bit multiplication circuits.

More details

It turns out that doing integer multiplication in quantum circuits is not as simple as one might guess [1][2][3]. Multiplication requires a number of ancilla qubits that makes it infeasible on today's real quantum computers. But being able to multiply numbers is hugely important for number theory problems (Shor's) and math problems (hash cracking with Grover's).

The goal of this project is to borrow a concept from computer security called "brute forcing" to see if we can construct a better multiplication circuit by random guessing. I'll start with 2-bit multiplication and randomly construct circuits from a bunch of PauliX/CNOT/Toffoli gates until they line up with the correct multiplication table.
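
A toy sketch of that brute-force loop for the 1-bit case, assuming a |a>|b>|out> wire layout (mulbrute's actual internals may differ):

```python
# Sample random circuits of PauliX/CNOT/Toffoli and keep any whose truth
# table matches 1-bit multiplication (AND) on the output wire.
import random
import pennylane as qml

n_wires = 3
dev = qml.device("default.qubit", wires=n_wires, shots=1)
GATES = {"X": qml.PauliX, "CNOT": qml.CNOT, "TOFFOLI": qml.Toffoli}
ARITY = {"X": 1, "CNOT": 2, "TOFFOLI": 3}

def random_circuit():
    ops = []
    for _ in range(random.randint(1, 3)):
        kind = random.choice(list(GATES))
        ops.append((kind, random.sample(range(n_wires), ARITY[kind])))
    return ops

def truth_table(ops):
    @qml.qnode(dev)
    def run(a, b):
        if a:
            qml.PauliX(wires=0)
        if b:
            qml.PauliX(wires=1)
        for kind, ws in ops:
            GATES[kind](wires=ws)
        return qml.sample(qml.PauliZ(2))
    # Map the sampled eigenvalue (+1 -> bit 0, -1 -> bit 1) on the out wire.
    return tuple(int(run(a, b) == -1) for a in (0, 1) for b in (0, 1))

target = (0, 0, 0, 1)                     # a * b for 1-bit inputs
for attempt in range(5000):
    ops = random_circuit()
    if truth_table(ops) == target:
        print(f"found after {attempt + 1} tries:", ops)
        break
```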

If successful, I'll try to implement a simple hash function that uses multiplication (CRC) and then attempt to use Grover's to crack it!

[1] https://www.quantamagazine.org/a-new-approach-to-multiplication-opens-the-door-to-better-quantum-computers-20190424/
[2] https://algassert.com/computer-science/2015/07/05/Things-I-Cant-Solve-Multiplication.html
[3] https://medium.com/@sashwat.anagolum/arithmetic-on-quantum-computers-multiplication-4482cdc2d83b

Source code:

https://github.com/c0nrad/mulbrute

Resource Estimate:

I'd like to try to brute force a 3-qubit x 3-qubit multiplication circuit. I'm not sure if anything will pop out, but I'd like to leave an m5.8xlarge running for 48 hours and let mulbrute work.

48 hr x $1.536/hr = $73.73

For my project I am requesting $100.

[ENTRY] probable-gravitational-adventure

Team Name:

Team: upisnotjump

Project Description:

Detecting gravitational waves is not an easy task: it took LIGO (the Laser Interferometer Gravitational-Wave Observatory), a team of 900 scientists from around 40 institutions, roughly 14 years from the start of operations to the announcement, in 2016, of the first binary black hole (BBH) merger detection via gravitational waves.
In recent years, the use of deep learning and computer vision techniques has greatly increased the power of detection and parameter estimation for such events.
My idea is to use Image-GPT [https://cdn.openai.com/papers/Generative_Pretraining_from_Pixels_V2.pdf] together with variational quantum circuits (VQC) and quantum transfer learning [https://arxiv.org/abs/1912.08278] to boost the searches for BBH mergers and neutron star collisions.

Presentation:

https://github.com/FFFreitas/probable-gravitational-adventure/blob/main/presentation/presentation.slides.html

Source code:

https://github.com/FFFreitas/probable-gravitational-adventure

Resource Estimate:

I would like to experiment with the algorithm on larger image datasets (28x28 is too brutal a reduction for the algorithm to catch anything), and also try a modified GPT model with more quantum layers. I also aim to expand the project to different data encodings and to use quanvolutional algorithms as well.

The dataset has 16,493 images; each batch of 32 images took roughly 11 minutes, with a circuit of only 4 qubits and 100 shots. I would first like to try the simulation with the following configurations:
20 qubits for 4 hours of training x 5 model configs = US$ 90.00
30 qubits for 6 hours of training x 5 model configs = US$ 135.00
Total: US$ 225
I would also like to spend some time fine-tuning the model on the simulators, so I would ask to spend another US$ 225.

On the Rigetti Aspen-8:
10,000 shots x 10 model configs = US$ 38.00

On the D-Wave 2000Q:
10,000 shots x 10 model configs = US$ 22.00

These are initial experiments; once I gain more confidence, I will scale things up to reach the total of $4000.

[Power Up] [ENTRY] Qountry Songs

Team Name:

QUANTIFY

Project Description:

The diagrammatic approach to quantum computation pioneered in [1,2] has been extended to quantum circuit compilation and optimisation [3]. The latter has been successfully applied to QNLP on NISQ machines [4,7] instead of Grover-like, QRAM-based approaches [5]. It has been shown that QAOA methods are approximators of universal computations such as those expressed in the ZX calculus [6]. There exists a similarity between the QAOA exponentiated ZZ and parameterised Ry gates and the trained circuits from [4] ("language diagrams into quantum circuits with phase-gates and CNOT-gates"). QNLP as a closest-vector problem [4,5] shares some similarities with skip-grams and the word2vec model.

We investigate the applicability of QNLP using QAOA (implemented with PennyLane) to verify the theory from [4]. We take a somewhat reverse approach to [4]: instead of starting from language diagrams, we start from skip-grams and train context using windows of two words extracted from sentences.

We generate country songs using the trained models. Country songs are good candidates because they repeat somewhat straightforward concepts: the corpus includes a lot of redundancy and many contexts in which the words appear.

Our trained model will reflect the original language diagram of the corpus we started from. The semantics is embedded in the trained QAOA weights: the strengths of the ZZ and Ry gates encode the grammatical relations.

The feasibility of our QNLP approach is tested, for the moment, using [8]. The model can predict a corpus of 31 words with an accuracy of 65% after 200 training rounds (10 minutes), using only 28 variables. A corpus of 84 words (61 unique) achieves 45% accuracy after 60 minutes of training. We use Google Colab. Below is a sample song ("She kicks") from the latter experiment (we added the punctuation).

Work bartender knows week
Same mind … she kicks. Don’t,
Same work end of
Dive name, but I … My … 
 
The bartender knows week drink,
My she kick. Don't mind...
Dive, but I, my name
The/she kicks … Don’t
  1. Abramsky S, Coecke B. A categorical semantics of quantum protocols. In Proceedings of the 19th Annual IEEE Symposium on Logic in Computer Science, 2004. 2004 Jul 17 (pp. 415-425). IEEE.
  2. Abramsky S. Petri nets, discrete physics, and distributed quantum computation. In Concurrency, Graphs and Models 2008 (pp. 527-543). Springer, Berlin, Heidelberg.
  3. van de Wetering J. ZX-calculus for the working quantum computer scientist. arXiv preprint arXiv:2012.13966. 2020 Dec 27.
  4. Coecke B, de Felice G, Meichanetzidis K, Toumi A. Foundations for Near-Term Quantum Natural Language Processing. arXiv preprint arXiv:2012.03755. 2020 Dec 7.
  5. Zeng W, Coecke B. Quantum algorithms for compositional natural language processing. arXiv preprint arXiv:1608.01406. 2016 Aug 4.
  6. Lloyd S. Quantum approximate optimization is computationally universal. arXiv preprint arXiv:1812.11075. 2018 Dec 28.
  7. QNLP 2019 videos on youtube, e.g. https://www.youtube.com/watch?v=Osu2SPtCvfU
  8. https://www.azlyrics.com/lyrics/jonpardi/heartachemedication.html

Source code:

https://github.com/oumjunior/Qountry-songs

Resource Estimate:

We would greatly benefit from the PowerUp, as it would enable us to implement the training in a scalable manner.

  • We expect the performance of the model to increase with larger context. For the moment we use vectors spanning multiple qubits. 30 qubit simulator would be useful. E.g. window size 3 and 512 words, 27 qubits required.

  • Reduce inference cost: use circuit identities to simplify the QAOA circuit -- potentially using: a) pyZX for ZX simplification; b) the in-house QUANTIFY tool for brute-forcing circuit identities

  • Learn a full song and compose a completely new song

We will use both the IonQ machine (good connectivity) for small QAOAs, as well as classical simulators (SV1 and TN1) with noise for models that are too large. We estimate that $500 would be sufficient for classical simulation purposes. Training with the IonQ QPU would cost around $50 once we have converged on the architecture of the circuit. Around $250 for all the experiments and failures should be sufficient. Writing the song would cost another $100 due to the number of repetitions and shots. This estimate is a minimum; the more free credits we receive, the bigger circuits and songs we will try.

[Power Up] Feeding many trolls

Team Name:

Qumulus Nimbus

Project Description:

We provide a PennyLane implementation of the single-qubit universal quantum classifier similar to that presented in [1] and [2]. We then provide an efficient method to process classical data in parallel, using a QRAM setup for the universal single-qubit classifier.
We then attempt to adapt quantum classifiers with data re-uploading to quantum data, for experiments where we have copies of the quantum state, and we show their performance; we believe this has not been done before.

We use the universal quantum classifier method and measurement strategies described in [1] to demonstrate a method of quantum music learning and generation by recasting the classifier into a Markov-chain-like setup. We also use the QRAM structure we developed to combine and generate music.
[1] - https://quantum-journal.org/papers/q-2020-02-06-226/
[2] - https://github.com/AlbaCL/qhack21f

Source code:

Repository (incomplete, check the jupyter notebook here)

Resource Estimate:

We will be using Rigetti Aspen-9 to test our quantum-data quantum classifier because of its connectivity, and because our proposed method deals with each qubit only once, allowing us to move the main qubit around from one place to another.

We will be using IonQ's fully connected qubit system to implement our music prediction system for a QRAM of up to 3 qubits, which requires 6 fully connected qubits with high fidelity.

We expect to use $800 to $1000 worth of credits; in particular, our music prediction system requires multiple training runs and time samples, and any remaining credits would be useful later on.

[Power Up] Telling quantum DoQs and quantum Qats apart

Team Name:

Quant'ronauts

Project Description:

Idea: we classify regions of the Hilbert space of quantum states of n qubits. There are 2 categories, "Qat" and "DoQ". As an example, for n=1, one hemisphere of the Bloch sphere could be labelled "Qat" and the other hemisphere "DoQ". The state vectors to classify are generated as the output of a sensor, which is then fed into a classifier circuit of M layers. Note that we are NOT classifying the classical parameter vector of the sensor, as we could use any other sensor with a different parameterization as long as it's capable of producing Qat and DoQ states. Also, we take the sensor as-is; we don't try to "optimize" it.

Catch: during operation, the sensor can only produce its output once. Thus, when we calculate the accuracy on the test set, we are not allowed to make use of expectation values resulting from many shots. There is only 1 shot (in the training phase, we can optimize using expectation values, as training is done in our laboratory where we can recreate the sensor outputs of the training set at will). We'd like to investigate how much the accuracy drops due to this 1-shot limitation, whether it differs between the simulator and real quantum hardware, and what kind of cost function would reduce this impact.


Extra: if multiple shots are allowed, how much would a data re-uploading scheme improve the accuracy? E.g. imagine there are M identical sensors located very close to each other. When a certain physical event happens, it sets all the parameters of the M sensors at once, identically for each sensor. Then, the parameters don't change until the next event. Furthermore, there may be exponentially many parameters of the sensor, inaccessible to us. So again, we are classifying quantum states.


Source code:

https://github.com/mickahell/qhack21

Resource Estimate:

We need the extra credit to see in more detail how the use of real quantum hardware influences the accuracy of the classifiers, as well as the accuracy gap between the different options mentioned above.

After simulation, we plan to try 4 candidate circuits, of 1, 2, 5, and 10 wires, respectively, using a Rigetti device. We'll use gradient descent for training, 50 steps, in batches of 10, calculating expectation values using 30 shots. So, if we calculate with an average of 60 variables per circuit to optimize, altogether 1 training session will require 50x10x30x60x2 = 1,800,000 shots (the x2 is due to the parameter-shift rule). There will be 4 different systems to train, so a total of 4x1,800,000 = 7,200,000 shots for training.

Our test set has 200 items. For each of the 4 circuits, we'll compare 2 options, one using expectation values from 30 shots, and one using only a single shot. So the total number of shots required is 4x200x30 + 4x200 = 24,800.

This estimation still has a buffer for the case when the simulation phase makes us change some of the figures, and/or if we want to try the IonQ device as well.

Alternatively, we might train the circuits locally and do ONLY the testing phase using real quantum hardware, that would enable us to try much more than 4, already trained circuits.

Beam Bending Simulation

Team Name:

Quantumalpha

Project Description:

The simulation of mechanical systems requires solving PDEs, which becomes costly as the number of dimensions increases; the finite element method (FEM) is used to simulate the system. We can use quantum differentiation and gradient techniques to improve this.

Source code:

https://github.com/Qudsiaamir/Qhack.git

Resource Estimate:

Recent interest shows that quantum computing can speed things up for FEA.
We are mostly focused on finding better ways to solve the problem; the extra credit can help us build better, larger-scale solutions with lots of trials.

[Power Up] Variational Language Model

Team Name:

TeamX

Project Description:

In this project, we developed a variational quantum algorithm for Natural Language Processing. Our goal is to train a quantum circuit such that it can process and recognize words. Applications vary from word matching and sentence completion to sentence generation and more. We use state-of-the-art deep learning word embeddings and amplitude-encoded quantum registers, with a new ansatz and training methodology to perform this task, based on the swap test between words.
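
For concreteness, here is a minimal PennyLane sketch of the swap-test primitive between two amplitude-encoded word vectors, which the training methodology is based on; the 2-qubits-per-word register size follows the resource estimates below, and the example vectors are arbitrary.

```python
import pennylane as qml
from pennylane import numpy as np

# 2 qubits per word plus one ancilla for the swap test
dev = qml.device("default.qubit", wires=5)

def normalize(v):
    return v / np.linalg.norm(v)

@qml.qnode(dev)
def swap_test(word_a, word_b):
    # Amplitude-encode the two normalised word vectors
    qml.MottonenStatePreparation(normalize(word_a), wires=[1, 2])
    qml.MottonenStatePreparation(normalize(word_b), wires=[3, 4])
    # Standard swap test on the ancilla (wire 0)
    qml.Hadamard(wires=0)
    qml.CSWAP(wires=[0, 1, 3])
    qml.CSWAP(wires=[0, 2, 4])
    qml.Hadamard(wires=0)
    # <Z> on the ancilla equals |<a|b>|^2
    return qml.expval(qml.PauliZ(0))

a = np.array([1.0, 0.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 0.0, 0.0])
print(swap_test(a, b))  # ~0.5 for these two vectors
```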

Source code:

https://github.com/Slimane33/qhack_project

Resource Estimate:

We can use AWS SV1 to parallelize gradient computation during training. But the computational cost remains high due to the number of sentences and the total number of words in the dictionary.

With the current resource available, we estimate the training to be

  • For 10k sentences with 10 words per sentence / 2 qubits per word / 2 layers -> 4 days

  • For 10k sentences with 7 words per sentence / 3 qubits per word / 2 layers -> 10 days

  • We have started to generate a synthetic dataset to limit the resource consumption
    In any case, we might need more resources from AWS.

  • Number of qubits required: The quantum circuit to train corresponds to one sentence plus an extra word and an ancillary qubit, therefore Q*(N+1)+1 qubits. N being the number of words and Q the number of qubits per word.
    e.g :
    for a 4 words sentence with 3 qubits per word, we require 16 qubits.
    for a 5 words sentence with 4 qubits per word, we require 25 qubits.

  • Number of trainable parameters: The number of trainable parameters in the ansatz is around Q*(1+N/2)*L, where L is the number of layers, on average (it depends on the parity of the number of words and the number of qubits).
    e.g
    for a 4 words sentence with 3 qubits per word and 3 layers, we require 27 parameters.

[Power Up] Performance Evaluation of Hybrid Quantum-Classical Object Detection Networks

Team Name:

QuantumTunnelers

Project Description:

Our project aims to create a hybrid model of popular object detection networks. Primarily, we are focusing on RetinaNet with a MobileNet (and possibly ResNet-18) feature extraction backbone. Our goal is to introduce quantum layers and measure various performance statistics such as mean Average Precision (mAP) and the number of epochs taken to reach a comparable Loss value.

The main layer we are focusing on is the convolutional layer. Using a modification of both the original Quanvolutional layer model introduced in Henderson et al. (2019) and the demo found on PennyLane, we custom built a quantum convolutional layer that takes in any kernel size and output layer depth as parameters, automatically determines the correct number of qubits needed, and outputs the appropriate feature map using a quantum circuit as its base.
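
As a point of reference, a minimal quanvolutional filter in the spirit of Henderson et al. and the PennyLane demo might look as follows; the kernel-size-to-qubit mapping mirrors the description above, while the encoding, random-layer depth, and seed are illustrative assumptions rather than our actual implementation.

```python
import pennylane as qml
import numpy as np

def make_quanv(kernel_size=2, n_layers=1, seed=0):
    """Map a kernel_size x kernel_size patch to kernel_size**2 channels
    using a fixed random circuit (one qubit per pixel in the patch)."""
    n_qubits = kernel_size ** 2
    dev = qml.device("default.qubit", wires=n_qubits)
    rng = np.random.default_rng(seed)
    params = rng.uniform(0, 2 * np.pi, (n_layers, n_qubits))

    @qml.qnode(dev)
    def circuit(patch):
        # Encode pixel intensities (assumed rescaled to [0, 1]) as Y rotations
        for i, pixel in enumerate(patch):
            qml.RY(np.pi * pixel, wires=i)
        # Non-trainable random layers act as the quanvolutional filter
        qml.templates.RandomLayers(params, wires=range(n_qubits))
        return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

    return circuit

quanv = make_quanv(kernel_size=2)
print(quanv(np.array([0.0, 0.25, 0.5, 1.0])))  # 4 output channel values
```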

We plan to replace key convolutional layers within RetinaNet with our custom quanvolutional layer and measure the aforementioned performance statistics. We hope to see improvement within the statistics and hope to extend this project to other popular networks after this Hackathon.

Currently, we have trained and evaluated a custom-made backbone to test our quanvolutional layer due to MobileNet's architecture being too resource-consuming for our laptops. We plan to use AWS servers to properly train our hybrid backbones. For more details and information about our progress, please visit our GitHub repository.

Source code:

Our GitHub Repository: QuobileNet

Resource Estimate:

We have a hybrid model that costs too much time and resources to train on our current hardware. Therefore, we plan to train a 30 Qubit QCNN hybrid model using the Floq service. We plan to use the AWS service to test the quality of our results by comparing the inference performance of our QuobileNet with the classic RetinaNet (+ MobileNetV2 backbone) inference. The resource estimates for inference are as follows:

Inference with QCNN:
Kernel Size: 3x3
Input Image: 10x10
Number of Executions per QCNN layer: (10-3+1)^2 = 64
Number of input images: 50
Cost of a 30-qubit circuit execution with 1000 shots (Aspen-9): $0.35 + $0.30 = $0.65
Cost per QCNN layer: 64 executions x 50 images x $0.65 = $2080

We can afford 2 QCNN layers that add up to 4160$ in total. We haven't used the initial 250$ credit yet, as we planned to use it for our final model. With the 4000$ bonus credit we will be able to test our model.

Good luck to everyone!

[Power Up] Continuous Variable QAOA

Team Name:

Unfortunately, we're not Shor

Project Description:

We plan to fully understand and implement QAOA using Strawberry Fields to solve continuous optimization problems. We will follow the general strategy outlined in https://arxiv.org/pdf/1902.00409.pdf which demonstrates that QAOA can encode Grover's search over continuous spaces. This paper only tests the algorithm on a single numerical example, however we would like to run it across multiple examples to help benchmark its effectiveness.
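
While the project targets Strawberry Fields, here is a minimal continuous-variable variational loop on PennyLane's Gaussian simulator to illustrate the pattern; it is not the QAOA construction of arXiv:1902.00409, just a sketch of optimizing a displaced state so that the position quadrature reaches a hypothetical 1-D minimiser. (Recent PennyLane versions expose the quadrature as qml.QuadX; older versions call it qml.X.)

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.gaussian", wires=1)

@qml.qnode(dev)
def cv_circuit(params):
    qml.Displacement(params[0], params[1], wires=0)  # magnitude, phase
    qml.Rotation(params[2], wires=0)                 # phase-space rotation
    return qml.expval(qml.QuadX(0))                  # position quadrature

target = 1.0  # hypothetical 1-D location the search should concentrate on

def cost(params):
    return (cv_circuit(params) - target) ** 2

opt = qml.GradientDescentOptimizer(stepsize=0.1)
params = np.array([0.1, 0.0, 0.0], requires_grad=True)
for _ in range(50):
    params = opt.step(cost, params)
print(cv_circuit(params))  # ~1.0 after training
```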

Source code:

https://github.com/ejdanderson/continuous-qaoa

Resource Estimate:

We plan on implementing the algorithm in Strawberry Fields to simulate (and/or run on) a photonic CV quantum computer. Ideally, we would use AWS to run the simulation and the classical parameter-tuning portion of the algorithm. Another route would be to use other platforms whose qubit implementations approximate the CV degrees of freedom.

https://arxiv.org/pdf/1902.00409.pdf

We have found numerous resources for optimization problems to target, such as the ones found on this page https://www.sfu.ca/~ssurjano/optimization.html, and plan to target the Powell and Colville functions; Colville is 4D whereas Powell is n-dimensional.

@Jaybsoni @trentfridey

[Power Up] Sound Classification using Quanvolutional Neural Networks

Team Name:

Two Bits in a Box

Project Description:

Sound classification is one of the popular topics in the classical machine learning literature, e.g. [1], [2]. One common method is to apply CNNs to the spectrograms of the sound samples. Nevertheless, we couldn't find similar applications in the quantum machine learning literature.

In this project we aim to use Quanvolutional Neural Networks to classify sound using this kaggle dataset. We will mainly compare the performance of the Quanvolutional Neural Networks to the equivalent classical CNN implementation, and explore techniques in the Quantum Machine Learning literature that can enhance the existing classical ML techniques.

Source code:

https://github.com/heba0/Sound-Classification-using-Quanvolutional-Neural-Networks

Resource Estimate:

The AWS credit will help us experiment better with Quantum Computing resources.
Our model will use around 3 layers with 3x3 kernels -> 3x3x3 = 27 qubits per task
We would like to use the Rigetti with 2000 shots
The training and testing datasets have around 9700 samples (can be sampled to smaller datasets)

Our Estimation is:
Training Sample = 0.3x400 + 0.3x400
Testing Sample = 0.3x400
Shots = 0.00035x10000
Total = $903.5

References:

[1] Jaiswal, K. and Kalpeshbhai Patel, D., 2018. Sound Classification Using Convolutional Neural Networks. 2018 IEEE International Conference on Cloud Computing in Emerging Markets (CCEM).

[2] Davis, N. and Suresh, K., 2018. Environmental Sound Classification Using Deep Convolutional Neural Networks and Data Augmentation. 2018 IEEE Recent Advances in Intelligent Computational Systems (RAICS).

[Power Up] Event classification with data-reuploading in High Energy Physics

Team Name:

Entangled_Nets

Project Description:

The large experiments conducted in the field of particle physics require the detection and analysis of data produced in particle collisions at high-energy accelerators such as the LHC [2]. In these experiments, the particles created in collisions are observed by layers of high-precision detectors surrounding the collision points, which produce large amounts of data about each collision. This has motivated the use of "classical" machine learning techniques in different aspects to improve the performance of the analysis. Moreover, these techniques are also being adapted to quantum computing, e.g., unfolding measurement distributions via quantum annealing [3]. Intending to take advantage of both fields, we will use techniques from quantum machine learning, one of the quantum computing applications that could bring quantum advantages over classical methods [4][5].

Furthermore, since the development of quantum hardware with a sufficient number of qubits is still in progress, circuits that make use of fewer qubits are more plausible to consider. Besides, such circuits may prove relevant even if they do not provide any quantum advantage, since they may be useful parts of larger circuits. We will use the idea of data reuploading discussed by Pérez-Salinas et al. [6], where it is shown that it's possible to load a single qubit with arbitrary dimensional data and then use it as a universal quantum classifier.

This project aims to use the method of data-reuploading, where qubits will be used as quantum classifiers to classify a certain dataset with high accuracy, and parametrized quantum circuit, whose variables are used to construct a cost function that should be minimized "classically". For our model, the SUSY dataset [1] will be considered.
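
A minimal sketch of the data re-uploading idea from [6], assuming a single qubit and features fed in as groups of three angles; the layer count and cost function are illustrative, not our final model. Higher-dimensional SUSY features would be re-uploaded in chunks of three per encoding gate, as in [6].

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def reuploading_classifier(x, weights):
    # Re-upload the same data point before every trainable block
    for w in weights:                       # weights has shape (n_layers, 3)
        qml.Rot(x[0], x[1], x[2], wires=0)  # data-encoding rotation
        qml.Rot(w[0], w[1], w[2], wires=0)  # trainable processing rotation
    return qml.expval(qml.PauliZ(0))

def cost(weights, X, y):
    # Labels {0, 1} are mapped to target expectations {+1, -1}
    loss = 0.0
    for x, label in zip(X, y):
        loss = loss + (reuploading_classifier(x, weights) - (1 - 2 * label)) ** 2
    return loss / len(X)

weights = np.random.uniform(0, 2 * np.pi, (4, 3), requires_grad=True)
```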

[1] SUSY Data Set - UCI Machine Learning Repository

[2] Event Classification with Quantum Machine Learning in High-Energy Physics

[3] Unfolding measurement distributions via quantum annealing

[4] Quantum Computing in the NISQ era and beyond

[5] Quantum Machine Learning in High Energy Physics

[6] Data re-uploading for a universal quantum classifier, Adrián Pérez-Salinas, Alba Cervera-Lierta, Elies Gil-Fuster, José I. Latorre

Source code:

The draft source code

Note: This is a draft code for the initial entry for the AWS Power-up. The final source code will be modified and submitted next within the final deadline.

Resource Estimate:

We intend to use the power-up prize to further investigate the algorithms and try different approaches to increase the accuracy of our model using simulators. Besides testing the developed model on the quantum hardware access provided by AWS.

Aspen-8:
Task charge: 1 task x $0.30/task = $0.30
Shots charge: 1,000 shots x $0.00035/shot = $0.35
Total charge per task: $0.30 + $0.35 = $0.65

1 Qubit testing:
Number of Tasks: 1000
Total charges: $650 = 1000 x $0.65

2 Qubits testing:
Number of Tasks: 1000
Total charges: $650 = 1000 x $0.65

1 Qubit training:
Number of Tasks: 200 x 10 = 2000 (10 epochs, 200 tasks/epoch)
Total charges: $1300 = 2000 x $0.65

2 Qubits training:
Number of Tasks: 200 x 10 = 2000 (10 epochs, 200 tasks/epoch)
Total charges: $1300 = 2000 x $0.65

Total resource estimation for all objectives: $3900

[Power Up] [submission] Quantum-Aided Medical Image Diagnosis

Team Name:

qt

Project Description:

Quantum-Aided Medical Image Diagnosis

Objective

Invasive ductal carcinoma (IDC) is - with ~80% of cases - one of the most common types of breast cancer. It's malignant and able to form metastases, which makes it especially dangerous. Often a biopsy is done to remove small tissue samples. Then a pathologist has to decide whether a patient has IDC, another type of breast cancer, or is healthy. In addition, sick cells need to be located to find out how advanced the disease is and which grade should be assigned. This has to be done manually and is a time-consuming process. Furthermore, the decision depends on the expertise of the pathologist and his or her equipment. Here, I'm proposing to use a Quantum Genetic Algorithm (QGA) and Support Vector Machines (SVMs). I hope this method will give effective results when compared to some of the standard approaches. This way one would be able to overcome the dependence on the pathologist, which would be especially useful in regions where no experts are available.
Also, after classifying images using QGA and SVMs, I will use Quanvolutional Neural Networks (QNN) or a hybrid quantum-classical model, which can have an advantage over the classical approach, and make a comparative analysis with standard approaches like Convolutional Neural Networks (CNN).
Note: Finally, I will test this approach on different quantum devices and simulators and come up with final results.

Dataset:

Context

Invasive Ductal Carcinoma (IDC) is the most common subtype of all breast cancers. To assign an aggressiveness grade to a whole mount sample, pathologists typically focus on the regions which contain the IDC. As a result, one of the common pre-processing steps for automatic aggressiveness grading is to delineate the exact regions of IDC inside of a whole-mount slide.

Content

The original dataset consisted of 162 whole mount slide images of Breast Cancer (BCa) specimens scanned at 40x. From that, 277,524 patches of size 50 x 50 were extracted (198,738 IDC-negative and 78,786 IDC-positive). Each patch's file name has the format uxXyYclassC.png -> example: 10253idx5x1351y1101class0.png, where u is the patient ID (10253idx5), X is the x-coordinate of where the patch was cropped, Y is the y-coordinate of where the patch was cropped, and C indicates the class, where 0 is non-IDC and 1 is IDC.

Source code:

Code

Resource Estimate:

We intend to use the power-up prize to further investigate the algorithms and try different approaches to increase the accuracy of our model using simulators and quantum hardware provided by AWS

Original Dataset Size- 278k
Number of records going to consider in the base model- 5k
Number of Shots- 500
Number of iterations- 2 or 3

Cost Estimation:

Rough Cost breakup of 250USD (got as being in top 40 team) + 4000USD (POWER UP if given)

Hardware Estimated Cost
D-Wave $950 = 5000 * 500 * 2 * 0.00019
Rigetti $2625 = 5000 * 500 * 3 * 0.00035
Simulation $675

First, I will design and test the model on the simulator, for which I'm taking a rough estimate of around $650+; after that, I will run the code on quantum devices to get actual results and compare them.
Note: This is just a rough estimate; the actual cost may increase/decrease based on usage.

Future Work:

I'm pursuing my MS, so I will take forward this research as my final dissertation and will:

  • Develop a novel Quantum Algorithm that can be used for medical image diagnosis and test the results with different datasets available publicly
  • Optimize the circuit and see which approach works better with which set of datasets and circuit combination
  • Design API and web app to provide services in healthcare image analysis

[Power Up] QNN-for-Thermodynamic-correlation

Team Name:

Coherence

Project Description:

This work proposes a quantum-neural-network-based methodology to estimate the frictional pressure drop during boiling in mini-channels of non-azeotropic mixtures including nitrogen, methane, ethane, and propane. The methodology can assist in the thermal analysis or design of heat exchangers used in cryogenic applications. The architecture of the proposed model takes the local quality, roughness, mass flux, and Reynolds number as inputs and the frictional pressure drop as output.
We will compare it against a paper in which my colleagues and I used the same data to create an ANN-based correlation for pressure-drop estimation in microchannels [1].

[1] Barroso-Maldonado, J. M., Montañez-Barrera, J. A., Belman-Flores, J. M., and Aceves, S. M. (2019). ANN-based correlation for frictional pressure drop of non-azeotropic mixtures during cryogenic forced boiling. Applied Thermal Engineering, 149(August 2018), 492-501. https://doi.org/10.1016/j.applthermaleng.2018.12.082
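
As an illustration of the proposed architecture, here is a minimal regression sketch in PennyLane, assuming the four rescaled inputs named above; the embedding, template, depth, and output rescaling are illustrative assumptions rather than the final model.

```python
import pennylane as qml
from pennylane import numpy as np

n_features = 4  # quality, roughness, mass flux, Reynolds number (rescaled)
dev = qml.device("default.qubit", wires=n_features)

@qml.qnode(dev)
def pressure_drop_model(x, weights):
    qml.templates.AngleEmbedding(x, wires=range(n_features))
    qml.templates.StronglyEntanglingLayers(weights, wires=range(n_features))
    # The expectation in [-1, 1] would be rescaled to the pressure-drop range
    return qml.expval(qml.PauliZ(0))

def mse(weights, X, y):
    loss = 0.0
    for x, target in zip(X, y):
        loss = loss + (pressure_drop_model(x, weights) - target) ** 2
    return loss / len(X)

weights = np.array(0.1 * np.random.randn(3, n_features, 3), requires_grad=True)
```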

Source code:

https://github.com/alejomonbar/QNN-for-Thermodynamic-correlation

Resource Estimate:

My project consists of two stages. The first is to explore different circuit configurations to encode the inputs and the number of parameters used to determine the pressure-drop correlation. For this, I need to run at least 20 different configurations, which should be about 2 hours of SV1 ($9). Second, I would like to see how noise affects the model, so I need to use one of the quantum devices available in Braket. Training on hardware is infeasible because I have almost 5000 training samples. Therefore, I'm going to use the test set, composed of 693 samples, to compare the solution from one of the quantum devices against the ideal pressure drop once the parameters are optimized. This means (693 samples * 100 shots * $0.01/shot) + (693 tasks * $0.30/task) = $693 + $207.90 = $900.90.

My first training run gives better results (8% error) than those we obtained in the paper presented above (9-9.5% error). This gives me the intuition that with the correct layer configuration we can outperform the ANN results.

[Power Up] Quantum CV RNN

Team Name:

QuantumMadness

Project Description:

This project is an attempt to build a quantum recurrent neural network using the continuous-variable quantum model. As sources of inspiration we took a qubit-based quantum RNN architecture which allows reusing the input wire, the quantum neural network tutorial from strawberryfields.ai, and the Elman RNN architecture.

Due to the high computational cost of simulating Fock space, only primitive tests can be accomplished. The RNN was shown to be suitable for learning the XOR function from sequences of zeros and ones. From the investigation done so far it seems to be more sensitive to the learning rate than its classical analog. Also, the RNN circuit is quite long, which may lead to a situation where the state created by the circuit is physically incorrect due to numerical errors caused by the limited Fock space dimension.

Source code:

repo

Resource Estimate:

We would like to use a photonic CV quantum computer to test the proposed architecture on more complex ML tasks, like speech recognition, which require RNNs with combined input and hidden sizes significantly bigger than 5.

[Power Up] Quantum Algorithms

[Power Up] Tackling quantum phase transitions and barren plateaus in VQE with tensor networks

Team Name:

Hooked on Photonics

Project Description:

One of the most promising applications of variational quantum algorithms is the study of condensed matter phenomena, such as quantum phase transitions, with near-term quantum computers. Yet a major challenge in the successful application of variational quantum algorithms is the barren plateau phenomenon, where gradients become exponentially small in the number of qubits [1]. While barren plateaus have been observed for certain types of variational quantum circuits and cost functions [2], it is unclear whether the phenomenon would significantly hinder the simulation of condensed matter systems. Our goal in this project is to explore this problem and develop strategies for avoiding barren plateaus in the study of quantum phase transitions.

In particular, we use the variational quantum eigensolver (VQE) [3] to find the ground states of the transverse-field Ising model, a spin chain whose ground state is known to undergo a quantum phase transition. To avoid the barren plateau phenomenon in our analysis of this model, we train variational circuits with physically relevant structure, such as tree tensor networks (TTN) and the multi-scale entanglement renormalization ansatz (MERA) [4]. The MERA, a tensor network used to study quantum critical systems, is particularly well-suited to our task. We show that for our problem TTNs and MERAs generally produce larger gradients than the hardware-efficient ansatz (HEA) typically used in VQE, and thereby are easier to train and help alleviate barren plateaus.
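
For illustration, a minimal 4-qubit TTN-style VQE for the transverse-field Ising model might look like the following PennyLane sketch; the block structure and depth are simplified assumptions, not the full TTN/MERA circuits used in the project.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

# Transverse-field Ising Hamiltonian: H = -sum_i Z_i Z_{i+1} - h sum_i X_i
h = 1.0
coeffs = [-1.0] * (n_qubits - 1) + [-h] * n_qubits
obs = [qml.PauliZ(i) @ qml.PauliZ(i + 1) for i in range(n_qubits - 1)]
obs += [qml.PauliX(i) for i in range(n_qubits)]
H = qml.Hamiltonian(coeffs, obs)

def block(theta, wires):
    # Simple two-qubit TTN block: local rotations plus an entangler
    qml.RY(theta[0], wires=wires[0])
    qml.RY(theta[1], wires=wires[1])
    qml.CNOT(wires=wires)

@qml.qnode(dev)
def ttn_vqe(params):
    block(params[0], [0, 1])  # bottom layer, pair (0, 1)
    block(params[1], [2, 3])  # bottom layer, pair (2, 3)
    block(params[2], [1, 3])  # top layer merges the two branches
    return qml.expval(H)

params = np.random.uniform(0, 2 * np.pi, (3, 2), requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.1)
for _ in range(100):
    params = opt.step(ttn_vqe, params)
print(ttn_vqe(params))  # approximate ground-state energy
```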

[1] McClean, J.R., Boixo, S., Smelyanskiy, V.N. et al. Barren plateaus in quantum neural network training landscapes. Nat Commun 9, 4812 (2018).
[2] M. Cerezo, Akira Sone, Tyler Volkoff, Lukasz Cincio, Patrick J. Coles. Cost-Function-Dependent Barren Plateaus in Shallow Quantum Neural Networks. arXiv:2001.00550 (2020).
[3] Peruzzo, A., McClean, J., Shadbolt, P. et al. A variational eigenvalue solver on a photonic quantum processor. Nat Commun 5, 4213 (2014).
[4] G. Evenbly and G. Vidal. Algorithms for entanglement renormalization. Phys. Rev. B 79, 144108 (2009).

Source code:

https://github.com/echertkov/qhack_vqe_ttn

Resource Estimate:

In our project, we are performing VQE optimizations over a significant space of variables. We are considering three different variational ansatz (TTN, MERA, HEA), many different Hamiltonian parameters, and many different numbers of qubits. For reference, a single 16-qubit VQE optimization step of a MERA ansatz takes us about 40 seconds (or 2.8 hours for full training) and we need to explore about 100 different parameter values, so this altogether takes about 280 hours. However, we would like to perform additional simulations with more qubits to clearly show how quantities such as the variance of cost-function gradients scale with qubit number. Measurement of a TTN gradient at 18 qubits takes around 4.5 minutes on Amazon Braket’s SV1 simulator per sample, so to fully explore multiple ansatz and parameters we estimate that we need around 30 hours of compute time, which would cost 0.075*1800=$135. Since the computational cost increases exponentially with the number of qubits, to perform simulations with more than 18 qubits we would need significant additional computational resources. Additional AWS credits will allow us to utilize Braket’s simulators and perform these large simulations.

We would also like to use our AWS credits to perform calculations on the IonQ QPU. In particular, we intend to observe the quantum phase transition in the TFI model by measuring observables such as magnetizations on our trained ansatze. The cost of this for a single ansatz would be (8 magnetic field values) * (2 observables) * ($0.30 + $0.01 * 2000 shots) = $324.80 per ansatz, and we would like to consider two ansatze, the TTN and the MERA.

Time permitting we could additionally study if our results extend to other models, such as the transverse plus longitudinal field Ising model, which is not classically solvable. This should incur similar costs and time, potentially doubling our estimated AWS expenses.

Total estimate:

$280: Training the ansatze (large instances)
$270: Measuring gradient variances for the ansatze (estimate for larger qubit counts)
$650: Measuring observables on IonQ's QPU
===
$1,200 total ($2,400 with a second Hamiltonian)

[Power Up] Quantum Enhanced GAN for HEP

Team Name:

QC@UCI

Project Description:

We aim to enhance the Generative Adversarial Networks (GANs) used in the High Energy Physics (HEP) community for fast event simulation with a Quantum Circuit Born Machine (QCBM), a versatile and efficient quantum generative model, used to sample the prior (latent space). The quantum-enhanced architecture, the Quantum Circuit Associative Adversarial Network (QC-AAN), was previously shown not only to have performance similar to a DCGAN but also to have practical quantum advantages such as greater training stability on MNIST [1]. Instabilities of training caused by diverging and vanishing gradients are a major practical concern, especially for the HEP community* [2]. So, if a QC-AAN can make the training of GANs more robust, we would expect it to have practical value for the HEP community. We plan to build upon CaloGAN [3], a popular architecture to generate HEP detector responses, and use vanilla CaloGAN as a baseline for comparison.

* To overcome training instability, the HEP community often uses Wasserstein GANs. Due to time constraints, we plan to investigate quantum-enhanced Wasserstein GANs in the future.
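
For orientation, a single-basis toy version of the QCBM prior might look like the following PennyLane sketch (the multi-basis QCBM of [1] additionally rotates the measurement basis); the 4-qubit latent size and circuit template are illustrative assumptions.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4  # assumed latent-space size for the prior
dev = qml.device("default.qubit", wires=n_qubits, shots=1000)

@qml.qnode(dev)
def qcbm(weights):
    # Layered rotations plus entanglers form the Born machine
    qml.templates.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return qml.sample()  # bitstrings drawn from the Born distribution

weights = np.random.uniform(0, 2 * np.pi, (3, n_qubits))
latent_bits = qcbm(weights)  # shape (1000, 4): samples seeding the GAN prior
```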

Procedure

  • Construct QC-AAN with multi-basis QCBM and CaloGAN

  • Run experiments on the ECal Shower dataset [4] and compare the QC-AAN against vanilla CaloGAN with the metrics in the next section

  • If time permits, repeat the experiments and compare it against

    • Wasserstein CaloGAN
    • Restricted Boltzmann Machine (RBM) based AAN

Metrics

  • Inception score
  • HEP based similarity score
    • 1-D showering statistics
    • Energy flow polynomials (EFPs)
  • Training stability
  • Mode (energy) diversity

Source code:

QC-UCI/QHack

Resource Estimate:

We plan to use Floq for quantum circuit simulations. The AWS budget will mainly be spent on training the QC-AAN, both the classical GAN training and the quantum QCBM circuit training. A rough cost estimate:

  • Training classical CaloGAN on K80: 10 * (12 mins/epoch * 50 epochs) * $1.125 per hour = $112.50
  • Training QCBM (N qubits) with CaloGAN on Rigetti: 10 * (10E4 shots/measurement basis * 2^N * 10 epochs) * $3.5E-4

So, the cost of training the QCBM dominates our budget and we probably need ~$1k to get meaningful results. Note that if the cost of training the QCBM gets too large, we might be able to use some tricks such as subsampling or freezing weights after certain epochs. Thanks again to Floq and AWS for the wonderful computing time!

Reference

[1] M. S. Rudolph, N. B. Toussaint, A. Katabarwa, S. Johri, B. Peropadre, and A. Perdomo-Ortiz, Generation of High-Resolution Handwritten Digits with an Ion-Trap Quantum Computer, (2020).

[2] A. Butter and T. Plehn, Generative Networks for LHC Events, ArXiv:2008.08558 [Hep-Ph] (2020).

[3] M. Paganini, L. de Oliveira, and B. Nachman, CaloGAN: Simulating 3D High Energy Particle Showers in Multi-Layer Electromagnetic Calorimeters with Generative Adversarial Networks, Phys. Rev. D 97, 014021 (2018).

[4] Nachman, Benjamin; de Oliveira, Luke; Paganini, Michela (2017), “Electromagnetic Calorimeter Shower Images”, Mendeley Data, V1, doi: 10.17632/pvn3xc3wy5.1

[Power Up] Step globally, grad locally: circuit optimization with GRAFS

Team Name:

zer0dynamics

Project Description:

Gradient ascent in function space (GRAFS) [1] is an algorithm for optimal control synthesis that leverages functional expansions of control fields, and gradient-based optimization on the reduced parameter space formed by the coefficients of the expansion. This QHack project investigates whether the GRAFS method can be used to optimize quantum circuits of the type that instantiate variational algorithms for quantum machine learning. Quantum control generally works at the physical, analog layer of quantum computing to design a single or two-qubit gate with some desired properties, and as such, it's not immediately clear how it can be used with quantum circuits. It turns out to be possible, as this submission shows, by exploiting an expression of the chain-rule (Eq. 18 in [1]) that relates the local gradient (computed at each time slot) to the global gradient (expressed in terms of the coefficients of control expansion). Circuit parameterizations with respect to basis functions are global in the sense that all layers of the network are parameterized by the specification of the basis function coefficients.

Crucially for implementation, the parameter-shift rule for computing analytical gradients on quantum hardware can be used to obtain the local gradients which are then used to construct the global gradients on a classical computer. While each gradient calculation requires the same number of circuit evaluations as vanilla gradient descent, we find in simulations that the GRAFS method converges to the minimum with fewer iterations, and thus provides some resource savings.
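
A minimal NumPy sketch of this local-to-global chain rule, assuming a sine basis and stand-in local gradients (on hardware these would come from the parameter-shift rule):

```python
import numpy as np

n_layers, n_basis = 5, 3
t = np.linspace(0.0, 1.0, n_layers)
# Basis functions (here sines) evaluated at each layer "time slot"
F = np.stack([np.sin(np.pi * (k + 1) * t) for k in range(n_basis)], axis=1)

def layer_angles(c):
    # Global parameterisation: per-layer angles from basis coefficients
    return F @ c  # theta_t = sum_k c_k f_k(t)

def global_gradient(local_grad):
    # Chain rule: dC/dc_k = sum_t (dC/dtheta_t) * f_k(t)
    return F.T @ local_grad

c = np.array([0.3, -0.1, 0.05])
theta = layer_angles(c)            # angles fed into the variational circuit
local = np.random.randn(n_layers)  # stand-in for parameter-shift gradients
print(global_gradient(local))      # gradient in coefficient space
```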

[1] D. Lucarelli https://arxiv.org/abs/1611.00188

Source code:

https://github.com/zerodynamics/pennylane/blob/master/IPYNB/GRAFS_QML.ipynb

Resource Estimate:

Runs on quantum hardware will be used to assess whether the GRAFS algorithm outperforms vanilla gradient descent on a low-depth circuit. We will compare number of gradient descent iterations required for convergence and the objective function minimum. Of particular interest is whether the GRAFS method show robustness to noisy gradients computed with the parameter-shift rule. Time (and credits) permitting, the barren plateau problem will be investigated.

The first set of runs will be with 4 qubits on IonQ hardware. The number of shots required per gradient call is difficult to predict, so let's say 500/circuit evaluation.

Estimates are made with the following formula:

num_parameters/op (2) x num_qubits (4) x num_layers (5) x num_shifts (2) x num_shots (500) x cost_per_shot ($0.01) = $400 per gradient-descent iteration ( ! )

[ENTRY] Quantum CV RNN

Team Name:

QuantumMadness

Project Description:

This project is an attempt to build a quantum recurrent neural network using the continuous-variable quantum model. As sources of inspiration we took a qubit-based quantum RNN architecture which allows reusing the input wire, the quantum neural network tutorial from strawberryfields.ai, and the Elman RNN architecture.

Due to the high computational cost of simulating Fock space, only primitive tests can be accomplished. The RNN was shown to be suitable for learning the XOR function from sequences of zeros and ones. From the investigation done so far it seems to be more sensitive to the learning rate than its classical analog. Also, the RNN circuit is quite long, which may lead to a situation where the state created by the circuit is physically incorrect due to numerical errors caused by the limited Fock space dimension.

Presentation:

Check README.md

Source code:

repo

[ENTRY] Qarameterized Circuits: Quantum parameters for QML

(submission under construction)

Team Name:

PhaseliciousDeinonyqus

Project Description:

Typically, variational quantum circuits are parameterized by classical parameters and the circuit is evaluated by minimizing an observable-based cost function using classical optimization techniques.

What if we parameterize quantum circuits using quantum parameters?
Can we train such circuits in a manifestly quantum manner?

Enter Qarameterized Circuits (Quantum-parameterized Circuits). In this project we

  1. Construct variational circuits parameterized by control quantum registers, whose computational basis states correspond to different values for the circuit parameters.
  2. Construct a quantum oracle to coherently evaluate the state of the control registers, based on a chosen cost function.
  3. Train the circuit using a modified version of Grover's algorithm, which preferentially amplifies the good states of the control registers.

This project builds on the findings of "Non-Boolean Quantum Amplitude Amplification and Quantum Mean Estimation", arXiv:2102.04975 [quant-ph].

Presentation:

https://peterse.github.io/groveropt/

Source code:

https://github.com/peterse/groveropt

[ENTRY] Variational Language Model

Team Name:

Team X
(Slimane Thabet & Jonas Landman)

Project Description:

In this project, we developed a variational quantum algorithm for Natural Language Processing. Our goal is to train a quantum circuit such that it can process and recognize words. Applications include word matching, sentence completion, sentence generation and entity recognition. We use state-of-the-art deep learning word embedding and amplitude encoded quantum register, with a new ansatz and training methodology to perform this task, based on the swap test between words.
We have successfully trained our circuit for 12h thanks to AWS SV1 Power Up Credits, and tried our trained circuit on basic applications.

Presentation:

https://github.com/Slimane33/qhack_project/blob/main/QHack.pdf

Source code:

https://github.com/Slimane33/qhack_project

[Power Up] Batch Single-Qubit Circuit Training for Image Classification

Team Name:

Qwerty

Project Description:

In this project, we explore the trainability of quantum circuits that consist of a single qubit or a small number of qubits (wires), for the image classification task of CIFAR-10 [1]. This is motivated from the following observations and thoughts.

  1. A single-qubit circuit can be trained to represent a classification function with 3 classes. (We used a single-qubit circuit to solve the QML Challenge problem "Circuit Training 500".) With higher training accuracy, a single-qubit circuit could represent a function with 10 classes.

  2. The input of colored image classification task generally has 3 channels, representing R, G, and B intensities of pixels. On the other hand, independent rotation operations can be performed on a qubit along 3 axes, namely, X, Y, and Z. Therefore, a pixel of an input image can be naturally encoded into a rotation on a qubit.

  3. With a small-number-of-qubit quantum circuit representing a classification function, the batch training could be done by running those circuits in parallel.
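
To illustrate observation 2, here is a minimal PennyLane sketch of encoding one RGB pixel as three rotations on a single qubit, followed by a trainable rotation; the scaling to [0, pi] and the single processing Rot per pixel are illustrative assumptions, not the exact circuit trained in this project.

```python
import pennylane as qml
import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def pixel_circuit(rgb, weights):
    # One RGB pixel -> three independent rotations on a single qubit
    qml.RX(np.pi * rgb[0], wires=0)  # red channel
    qml.RY(np.pi * rgb[1], wires=0)  # green channel
    qml.RZ(np.pi * rgb[2], wires=0)  # blue channel
    # Trainable processing rotation (repeated per pixel in the full model)
    qml.Rot(*weights, wires=0)
    return qml.expval(qml.PauliZ(0))

print(pixel_circuit([0.2, 0.7, 0.1], np.array([0.1, 0.4, 0.3])))
```

In the full classifier this encode-then-process pattern would repeat over all pixels of the image, in the spirit of data re-uploading.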

The plan is as follows:

  1. Train a single-qubit circuit for the CIFAR-10 task. (partially done)
  2. Train a 2-qubit circuit, by splitting the above single-qubit circuit into two parts, where the second one performs the reverse operations of the second half of the original single-qubit circuit.
  3. Train a 4-qubit circuit.
  4. Train a 8-qubit circuit.
  5. Draw a graph showing the time-space tradeoff of quantum circuits for the same task.

Source code:

https://github.com/jkwn/Qwerty

Resource Estimate:

We are planning to use the IonQ device.
Step 1) 100 tasks, 2000 shots ($300 + $200)
Step 2) 100 tasks, 2000 shots ($300 + $200)
Step 3) 100 tasks, 2000 shots ($300 + $200)
Step 4) 100 tasks, 2000 shots ($300 + $200)

[1] https://www.cs.toronto.edu/~kriz/cifar.html

[Power Up] Quantum Spectral Graph Convolutional Neural Networks

Team Name:

QUACKQUACKQUACK

Project Description:

Over recent years, a large influx of interest has been observed in classical machine learning regarding the research into and usage of Graph Neural Networks (GNNs). Part of the reason for this interest is their innate ability to model vast physical phenomena through the medium of pair-wise interactions between the elements of a system. Similarly, interest in quantum machine learning models is also increasing, as such architectures can leverage the computational efficiency of quantum computers and offer problem-tailored solutions by handcrafting ansatze guided by physical interactions. Consequently, we believe that combining these separate ideas will offer mutual benefits, improve model performance, and advance research in both fields. Seeing how GNNs are used to solve combinatorial tasks (Combinatorial optimisation and reasoning with graph neural networks by Cappart et al., included in workshops such as "Deep Learning and Combinatorial Optimisation" held at IPAM, UCLA), we would argue that it is the right time to start thinking more about Quantum Graph Neural Networks (QGNNs).

We propose to implement Quantum Spectral Graph Convolutional Neural Networks (QSGCNN) as described in Verdon et al. We are planning to use the PennyLane documentation on Quantum Graph Recurrent Neural Networks (QGRNNs) as a guideline, and we will replace the RNN layer with a spectral convolutional layer. In particular, we want to perform unsupervised graph clustering as described in Verdon et al. We specifically want to compare the performance and inference speed between classical GNN models and their quantum counterparts on simple datasets, such as the one in Verdon et al. or k-core distilled popular GNN benchmark datasets (e.g. Cora or Citeseer). This would primarily include the most popular and basic models based on the SGCNNs and, as a stretch goal, also GraphSAGE. The results would then be compared with standard graph partitioning algorithms.

Source code:

https://github.com/bossemel/QHack_Project/tree/main

Resource Estimate:

We expect that the clustering performed by these models on these very small datasets won’t be insightful enough. Therefore, in order to obtain meaningful results from this experiment, we will need to train quantum models on graphs with a reasonable number of vertices and edges. We observe that the number of qubits required in the ansatz for QGNNs scales linearly with the number of vertices of the graph, and consequently it would be infeasible for us to demonstrate a meaningful application of the QGSCNN without using either a high-tech simulator, or by using an actual quantum device. Therefore, while we are unable to provide explicit costing projections at this time, we can say with certainty that having access to AWS credits will allow us to produce a much more impactful project on this topic.

[ENTRY] Quantum Chess Engine

Team Name:

let bit be

Project Description:

The main goals of this project are to visualize the main principles of a quantum computer as well as to demonstrate its performance. To show this, we apply our quantum algorithms to one of the most famous board games of all time, namely chess. We construct a variational algorithm to classify the best possible moves in a game of Microchess.

Microchess features a 5x4 board on which 5 of the 6 different chess pieces are placed (there are still trillions of possible positions). This makes it perfectly suitable for a quantum computer, as it can be implemented using only 21 qubits (20 + 1 ancillary qubit). The piece on each square is encoded into a single qubit, showing the qubit's remarkable capability of representing much more than a classical bit could. We then construct a variational quantum algorithm (VQA) operating on these qubits to classify the board's score. The score is determined by the mobility (how many moves are possible) and material (how many pieces are there) of each player. The output of the circuit is received through a single measurement, collapsing all the information of the circuit into a single classical bit. Repeating this measurement gives a good estimate of the board's value. To learn how to quantify a given board, we train the VQA using a special combination of gradient descent and reinforcement learning.

Presentation:

Medium article: Asking a Quantum Computer To Learn Chess

Source code:

QuantumChessEngine

[Power Up] Supervised learning with quantum enhanced feature spaces

Team Name: 10101

Project Description:

We implement two classifiers on classical data that is mapped non-linearly onto a 2-qubit Hilbert space. The first classifier is a quantum variational classifier trained by stochastic gradient descent; this is followed by the implementation of a kernel-based method. Both models are great examples of the quantum advantage afforded by two canonical methods in QML.

Our team is creating a PennyLane tutorial to showcase these implementations, which are reproductions of the 2018 paper "Supervised learning with quantum enhanced feature spaces" by Havlicek et al. We are also extending the number of qubits used and increasing the circuit depth of these two models in order to investigate the presence of the barren plateau problem as described in McClean et al. 2018. The kernel method, being non-parametric, will be available for comparison to showcase its ability to avoid the barren plateau problem.
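
For reference, here is a minimal PennyLane sketch of estimating such a quantum kernel via the adjoint feature-map trick; the two-qubit feature map below is a stand-in loosely modeled on the ZZ feature map of Havlicek et al., not our exact circuit.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

def feature_map(x):
    # Stand-in two-qubit feature map with a data-dependent ZZ interaction
    qml.Hadamard(wires=0)
    qml.Hadamard(wires=1)
    qml.RZ(x[0], wires=0)
    qml.RZ(x[1], wires=1)
    qml.CNOT(wires=[0, 1])
    qml.RZ((np.pi - x[0]) * (np.pi - x[1]), wires=1)
    qml.CNOT(wires=[0, 1])

@qml.qnode(dev)
def kernel_circuit(x1, x2):
    feature_map(x1)
    qml.adjoint(feature_map)(x2)
    return qml.probs(wires=range(n_qubits))

def kernel(x1, x2):
    # |<phi(x2)|phi(x1)>|^2 = probability of returning to |00>
    return kernel_circuit(x1, x2)[0]

print(kernel(np.array([0.1, 0.2]), np.array([0.1, 0.2])))  # 1.0 for equal inputs
```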

Source code:

https://github.com/ryanhill1/QHack-2021

Resource Estimate:

We are increasing the depth of the QVC model to reproduce the barren plateau problem. This is expected to require a lot of processing capability, on the order of 20 qubits per layer at ~100 layers. We are currently using SV1 for simulation but are also considering QPUs.

Each training evaluation will require roughly 96 tasks and 100,000 shots. At $0.3 AWS credits per task and $0.00019 per shot, we estimate $47.8 AWS credits per training iteration. Our experimentations will require some number of trials to reach ideal results as well as additional training and testing. With an estimate of 50 trials of $47.8 per trial, we will need roughly $2390 AWS credits.

[Power Up] Quantum Optimal Subarchitecture Search (QOSE)

Team Name:

Many body system

Project Description:

Training a quantum classifier on a classical data set is no easy feat, and one of the key issues for getting good performance is choosing a suitable ansatz that fits the problem of interest well. Due to the absence of practical guidelines for circuit design, we aim to leverage the full software stack available to the QML practitioner to find the optimal circuit architecture by a tree-based metaheuristic. Inspired by an analogous approach in classical machine learning [1], we propose Quantum Optimal Subarchitecture Estimation, or QOSE.

By combining the scalability of AWS with the versatility and performance of Xanadu's PennyLane, we can reliably train hundreds of different circuit architectures in parallel. In QOSE, we iteratively add layers to a simple base circuit, increasing the expressiveness as the circuit deepens. This construction process can be described by a directed graph in the form of a tree, where nodes correspond to circuits of depth d. For each node in the tree, we briefly train the corresponding circuit on a subset of the actual data to get a glimpse of the expected performance.


To combat the exponential increase of the search space of possible circuit architectures, we use a tree-pruning algorithm that eliminates nodes based on each circuit's quality. This quality is determined by a metric that combines inference speed, accuracy and model size into a single scalar cost. Our code can be made embarrassingly parallel by evaluating the cost of each depth-d node concurrently.

Source code:

https://github.com/kmz4/QHACK2021/tree/main/src

Resource Estimate:

We plan to deploy our code on AWS in the upcoming days. The circuits we plan to run will be small, hence we will only require the local simulator, keeping costs reasonable.

  1. For this initial demonstration we limit our choice of classifier layers to the set: [nn-ZZ, X, Y]. There will be 6 possible embeddings.
  2. This means that at most (without pruning our search tree), we will need to calculate 6 * 3^d circuits. If for each circuit in the tree, we do a quick hyper-parameter search over optimal batch size and learning rate, this will add a factor of n_{batch sizes} * n_{learning rates} bringing the total number of circuits that have to be trained to N = 6 * 3^d * n_{batch sizes} * n_{learning rates}.
  3. Each circuit costs about 10-30 seconds to train, depending on the depth, hence for worst case we will need N * 30s of computing power.
  4. We propose to use MPI4PY to handle the parallelization on AWS for calculating individual circuits. Due to the embarrassingly parallel nature of our problem we can create a massive speedup for calculating the W costs used to direct the tree search.

[1] Adrian de Wynter, An Approximation Algorithm for Optimal Subarchitecture Extraction, 2020. eprint:arXiv:2010.08512.
