
A Technical Introduction to Quantum Neural Networks

Companion code for the paper A Technical Introduction to Quantum Neural Networks (Ugail, 2026).

This repository contains a single self-contained Jupyter notebook, qnn_companion.ipynb, which builds a minimal quantum neural network on a synthetic binary classification task and runs four short controlled experiments that correspond to specific claims made in the paper. The notebook is pedagogical rather than competitive. It is not intended to demonstrate quantum advantage, and it is deliberately small enough to run to completion in a few minutes on a standard laptop CPU.
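
For orientation, the core model is along the following lines. This is a minimal sketch with illustrative names, layer counts, and hyperparameters, not the notebook's exact code:

import pennylane as qml
from pennylane import numpy as np
from sklearn.datasets import make_moons

# Two input features map onto two qubits.
X, y = make_moons(n_samples=100, noise=0.1, random_state=42)

dev = qml.device("lightning.qubit", wires=2)

@qml.qnode(dev, diff_method="adjoint")
def circuit(x, weights):
    qml.AngleEmbedding(x, wires=[0, 1])                  # data encoding
    qml.StronglyEntanglingLayers(weights, wires=[0, 1])  # trainable block
    return qml.expval(qml.PauliZ(0))                     # single-qubit readout

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=2)
weights = 0.1 * np.random.random(shape)  # trainable by default in pennylane.numpy

def loss(weights):
    preds = np.stack([circuit(x, weights) for x in X])
    return np.mean((preds - (1 - 2 * y)) ** 2)  # labels {0, 1} -> targets {+1, -1}

opt = qml.GradientDescentOptimizer(stepsize=0.2)
for _ in range(30):
    weights = opt.step(loss, weights)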

What the notebook does

The four experiments are organised around design choices that the paper argues are often underappreciated in QNN benchmarking.

Experiment A, data encoding (paper Section 4.1). The trainable circuit is held fixed and only the encoding is varied. Because the encoding fixes the Fourier spectrum available to a variational model, changing it changes the decision boundary the circuit can express, and this shows up clearly in achievable accuracy on the two-moons task.
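
One way to hold the trainable block fixed while swapping only the encoding, in the spirit of Experiment A (the scheme names and the specific encodings here are illustrative; the notebook's choices may differ):

import pennylane as qml

dev = qml.device("lightning.qubit", wires=2)

def encode(x, scheme):
    # Only this block changes between runs; the trainable circuit stays fixed.
    if scheme == "angle":
        qml.AngleEmbedding(x, wires=[0, 1])
    elif scheme == "rescaled":
        # Encoding the data twice at different scales enlarges the set of
        # Fourier frequencies the model can express.
        qml.AngleEmbedding(x, wires=[0, 1])
        qml.AngleEmbedding(2 * x, wires=[0, 1])

@qml.qnode(dev, diff_method="adjoint")
def circuit(x, weights, scheme="angle"):
    encode(x, scheme)
    qml.StronglyEntanglingLayers(weights, wires=[0, 1])
    return qml.expval(qml.PauliZ(0))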

Experiment B, depth against trainability (paper Sections 5 and 6). The circuit depth is swept while other factors are held fixed. The aim is not to reproduce the barren plateau phenomenon in full, which would need many more qubits and many more seeds, but to illustrate the lack-of-payoff side of the expressivity and trainability trade-off in a regime that fits in a notebook.
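
The outer loop of a depth sweep amounts to something like the sketch below; the ansatz and the list of depths are assumptions. StronglyEntanglingLayers reads its depth off the weight shape, so only the weights change between runs:

import pennylane as qml
from pennylane import numpy as np

dev = qml.device("lightning.qubit", wires=2)

@qml.qnode(dev, diff_method="adjoint")
def circuit(x, weights):
    qml.AngleEmbedding(x, wires=[0, 1])
    qml.StronglyEntanglingLayers(weights, wires=[0, 1])  # depth set by weights
    return qml.expval(qml.PauliZ(0))

x_sample = np.array([0.3, -0.2], requires_grad=False)
for n_layers in (1, 2, 4, 8):
    shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=2)
    weights = 0.1 * np.random.random(shape)
    # ... train as usual, then record accuracy and gradient magnitudes ...
    grad = qml.grad(lambda w: circuit(x_sample, w))(weights)
    print(n_layers, float(np.max(np.abs(grad))))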

Experiment C, classical baselines (paper Section 8). The best QNN configuration is compared against a tuned kernel SVM and a small multilayer perceptron on the same data. The point is that fair comparisons require classical baselines that are tuned rather than strawmanned, and on low-dimensional classical data a modest classical model is hard to beat.
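
Tuning the baselines is a few lines of scikit-learn; a sketch with an illustrative hyperparameter grid:

from sklearn.datasets import make_moons
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.1, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

# Tune the RBF-kernel SVM by cross-validation rather than using defaults.
svm = GridSearchCV(SVC(kernel="rbf"),
                   {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1, 10]},
                   cv=5).fit(X_tr, y_tr)

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=42).fit(X_tr, y_tr)

print("SVM:", svm.score(X_te, y_te), "MLP:", mlp.score(X_te, y_te))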

Experiment D, finite-shot estimation (paper Section 7). The shot budget for expectation-value estimation is varied across training. Under a generous training budget the three shot settings reach the same final accuracy on this task, but the loss trajectories are visibly noisier at lower shot counts, which illustrates that finite-shot variance is always paid somewhere: if not in final accuracy, then in optimisation noise and hence in training time.
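
Varying the shot budget is a one-argument change on the device; a sketch, with illustrative shot counts:

import pennylane as qml

# shots=None gives exact (analytic) expectation values; finite values sample.
for shots in (None, 100, 1000):
    dev = qml.device("lightning.qubit", wires=2, shots=shots)

    # With finite shots, adjoint differentiation is unavailable, so gradients
    # fall back to parameter-shift and expectation values become noisy estimates.
    @qml.qnode(dev)
    def circuit(x, weights):
        qml.AngleEmbedding(x, wires=[0, 1])
        qml.StronglyEntanglingLayers(weights, wires=[0, 1])
        return qml.expval(qml.PauliZ(0))
    # ... train with this qnode and record the loss trajectory ...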

Requirements

The notebook uses PennyLane with the Lightning backend for the quantum side, and scikit-learn, NumPy, and Matplotlib for data handling, baselines, and plots. A self-installing cell at the top of the notebook will pick up anything missing; a sketch of such a cell appears after the package list below.

pennylane
pennylane-lightning
scikit-learn
numpy
matplotlib
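
The self-installing cell amounts to something like this sketch (the notebook's actual cell may differ):

# First notebook cell: install anything missing before importing it.
import importlib.util, subprocess, sys

PACKAGES = {"pennylane": "pennylane",
            "pennylane_lightning": "pennylane-lightning",
            "sklearn": "scikit-learn",
            "numpy": "numpy",
            "matplotlib": "matplotlib"}

for module, pip_name in PACKAGES.items():
    if importlib.util.find_spec(module) is None:
        subprocess.check_call([sys.executable, "-m", "pip", "install", pip_name])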

Python 3.10 or later is recommended.

Running the notebook

On Colab

Open qnn_companion.ipynb in Google Colab and run all cells. The first cell will install PennyLane and the Lightning backend if they are not already present. No hardware access or API key is required.

Locally

git clone <this-repo>
cd <this-repo>
pip install pennylane pennylane-lightning scikit-learn numpy matplotlib
jupyter notebook qnn_companion.ipynb

Run the cells in order from the top. Total runtime is a few minutes on a modern laptop CPU.

Reproducibility

The notebook fixes RNG_SEED = 42 and uses the lightning.qubit device with adjoint differentiation throughout. Exact numerical results may vary slightly across PennyLane versions and across platforms because of differences in BLAS libraries and floating-point reduction order, but the qualitative conclusions of each experiment are stable across the versions we tested.
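
Shown schematically, the reproducibility choices reduce to a few lines:

import numpy as np
import pennylane as qml

RNG_SEED = 42
np.random.seed(RNG_SEED)  # governs weight initialisation and data generation

# Adjoint differentiation is exact on a state-vector simulator, so run-to-run
# variation comes only from library versions and floating-point details.
dev = qml.device("lightning.qubit", wires=2)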

A note on frameworks

PennyLane was chosen for pedagogical convenience because its autodiff flow and plotting utilities are well suited to small teaching examples. The same experiments can be written in Qiskit using qiskit_machine_learning.neural_networks.EstimatorQNN for the quantum model together with Qiskit's parameter-shift gradients for training. The scientific content would be identical.
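
For readers who prefer Qiskit, the rough shape of the equivalent model is sketched below. Treat it as a pointer rather than tested code: the circuit is an arbitrary illustrative ansatz, and constructor defaults (observable, estimator, gradient) vary across qiskit_machine_learning versions.

from qiskit.circuit import ParameterVector, QuantumCircuit
from qiskit_machine_learning.neural_networks import EstimatorQNN

x = ParameterVector("x", 2)  # encoding parameters
w = ParameterVector("w", 4)  # trainable weights

qc = QuantumCircuit(2)
qc.ry(x[0], 0); qc.ry(x[1], 1)  # angle encoding
qc.cx(0, 1)
qc.ry(w[0], 0); qc.ry(w[1], 1)  # trainable layer
qc.cx(0, 1)
qc.ry(w[2], 0); qc.ry(w[3], 1)

# Parameter-shift gradients are the default differentiation route.
qnn = EstimatorQNN(circuit=qc, input_params=list(x), weight_params=list(w))
out = qnn.forward([[0.1, 0.2]], [0.0] * 4)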

Scope and honest framing

The notebook supports four specific claims from the paper and nothing more. It does not argue that QNNs are better than classical neural networks on standard machine learning tasks, and nothing in the code should be read as evidence for or against that broader claim. The dataset is low-dimensional, the models are small, and the simulations are noiseless apart from the explicit finite-shot experiment. The value of the notebook is that it makes each of the four design-level points concrete and reproducible, so that readers entering the field can see the mechanisms discussed in the paper at work in code they can modify.

License

Released under the MIT License.
