Abstract

The development of quantum-classical hybrid (QCH) algorithms is critical to achieving state-of-the-art computational models. A QCH variational autoencoder (QVAE) was introduced in reference [] by some of the authors of this paper. The QVAE consists of a classical auto-encoding structure, realized by traditional deep neural networks, that performs inference to, and generation from, a discrete latent space. The latent generative process is formalized as thermal sampling from a quantum Boltzmann machine (QBM). This setup allows quantum-assisted training of deep generative models by physically simulating the generative process with quantum annealers. In this paper, we have successfully employed D-Wave quantum annealers as Boltzmann samplers to perform quantum-assisted, end-to-end training of the QVAE. The hybrid structure of the QVAE allows us to deploy current-generation quantum annealers in QCH generative models and to achieve competitive performance on datasets such as MNIST. The results presented in this paper suggest that commercially available quantum annealers can be deployed, in conjunction with well-crafted classical deep neural networks, to achieve competitive results in unsupervised and semi-supervised tasks on large-scale datasets. We also provide evidence that our setup is able to exploit large latent-space QBMs, which develop slowly mixing modes. This expressive latent space results in slow and inefficient classical sampling and paves the way toward quantum advantage with quantum annealing in realistic sampling applications.
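To make the "Boltzmann sampler" role of the annealer concrete, the following is a minimal sketch, not the authors' implementation, of how externally supplied prior samples (from an annealer, or from a classical Gibbs sampler used as a stand-in) could provide the negative phase of an RBM-prior gradient during end-to-end training. All function and variable names here are illustrative assumptions.

```python
# Sketch: gradient of the RBM-prior term as positive phase minus negative phase.
# Assumes a classical RBM energy E(z) = -a.z_v - b.z_h - z_v^T W z_h as a
# stand-in for the QBM; `prior_samples` are binary configurations returned by
# the hardware (or by a classical sampler while debugging).
import numpy as np

def rbm_grads(posterior_samples, prior_samples, n_visible):
    """Estimate the gradients of the log-prior w.r.t. (a, b, W)."""
    def moments(z):
        zv, zh = z[:, :n_visible], z[:, n_visible:]
        return zv.mean(0), zh.mean(0), zv.T @ zh / len(z)

    pos = moments(posterior_samples)  # latents sampled from the encoder q(z|x)
    neg = moments(prior_samples)      # latents sampled from the prior (annealer)
    return tuple(p - n for p, n in zip(pos, neg))

# Toy usage with random binary samples standing in for both sources.
rng = np.random.default_rng(0)
q_z = rng.integers(0, 2, size=(64, 32)).astype(float)
p_z = rng.integers(0, 2, size=(64, 32)).astype(float)
grad_a, grad_b, grad_W = rbm_grads(q_z, p_z, n_visible=16)
```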

Highlights

  • Current-generation quantum annealers physically simulate a transverse-field Ising model (see the Hamiltonian sketched after this list) and operate in interaction with a thermal environment

  • We found that larger restricted BMs (RBMs) did not appreciably improve the performance of the overall VAE when training on the MNIST dataset

  • We have successfully trained a large RBM in the latent space of a VAE using only samples from a D-Wave 2000Q, without any hard-coded pre- or post-processing of the raw data obtained from the annealer
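For reference, the transverse-field Ising Hamiltonian referred to in the first highlight has the standard form (our notation, not reproduced from the paper):

```latex
H = -\sum_{i<j} J_{ij}\,\sigma^{z}_{i}\sigma^{z}_{j}
    - \sum_{i} h_{i}\,\sigma^{z}_{i}
    - \Gamma \sum_{i} \sigma^{x}_{i}
```

Here J_ij and h_i are the programmable couplings and biases, and Γ is the transverse field that drives quantum fluctuations during annealing.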

Summary

VARIATIONAL AUTOENCODERS

We will briefly introduce VAEs and describe their extension to discrete latent variables, a necessary step to hybridize with quantum priors and to perform quantum-assisted training. The goal is to train a probabilistic model such that the model distribution pθ(X) (where θ are the parameters of the model) is as close as possible to the data distribution, pdata(X), which is unknown but assumed to exist. Generative models with latent variables can potentially learn and encode useful representations of the data in the latent space. This is an important property that can be exploited in many practical applications [61,62,63,64] to improve other tasks such as supervised and semi-supervised learning [65]. The conditional likelihood pθ(x|ζ) and the approximating posterior qφ(ζ|x), called the "decoder" and the "encoder" respectively, are commonly parameterized using deep neural networks.
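As a concrete illustration of this encoder/decoder structure with discrete latents, here is a minimal PyTorch sketch. The layer sizes, names, and the straight-through Bernoulli estimator are illustrative assumptions, not the paper's smoothing/reparameterization method.

```python
import torch
import torch.nn as nn

class DiscreteVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=32, h_dim=256):
        super().__init__()
        # Encoder q_phi(z|x): logits of independent Bernoulli latent units.
        self.encoder = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, z_dim))
        # Decoder p_theta(x|z): logits of a Bernoulli distribution over pixels.
        self.decoder = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, x_dim))

    def forward(self, x):
        logits = self.encoder(x)
        probs = torch.sigmoid(logits)
        # Straight-through sampling: discrete values forward, smooth gradients back.
        z_hard = torch.bernoulli(probs)
        z = z_hard + probs - probs.detach()
        x_logits = self.decoder(z)
        recon = nn.functional.binary_cross_entropy_with_logits(
            x_logits, x, reduction='none').sum(-1)
        return recon.mean(), z_hard  # z_hard would feed the RBM/QBM prior term

# Toy usage with a random batch standing in for binarized MNIST.
model = DiscreteVAE()
x = torch.rand(8, 784)
loss, z = model(x)
loss.backward()
```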

Variational inference
The reparameterization trick
VAE with discrete latent variables
Hybridization with quantum prior
SAMPLING WITH QUANTUM ANNEALERS
TRAINING VAE WITH QUANTUM ANNEALERS
Validation of training
A PATH TOWARDS QUANTUM ADVANTAGE WITH VAE
Exploit large latent-space RBMs
Denser connectivities
Hardware-specific optimization of classical networks
Training on larger datasets
Multi-modality of latent-space RBMs
Robustness to noise and control errors
CONCLUSIONS