Abstract

We investigate quantum effects on machine learning (ML) models, exemplified by the Generative Adversarial Network (GAN), a promising deep learning framework. In the general GAN framework, the generator maps uniform noise to a fake image. In this study, we utilize the Associative Adversarial Network (AAN), which consists of a standard GAN and an associative memory. Further, we use a Boltzmann Machine (BM), an undirected graphical model that learns low-dimensional features extracted from the discriminator, as this memory. Because the BM's log-likelihood gradient is difficult to calculate exactly, it must be approximated by a sample mean obtained from the BM with its current (tentative) parameters. To compute this sample mean, a Markov chain Monte Carlo (MCMC) method is often used. In a previous study, the sampling was performed on a quantum annealer, and the performance of the resulting "Quantum" AAN was compared with that of the standard GAN. However, why it outperforms the standard GAN is not well understood. In this study, we introduce two sampling methods: classical sampling via MCMC and quantum sampling via quantum Monte Carlo (QMC) simulation, i.e., a simulation of the quantum system on a classical computer. We then compare these methods to investigate whether quantum sampling is advantageous. Specifically, by evaluating the discriminator loss, the generator loss, the inception score, and the Fr\'echet inception distance, we discuss the potential of the AAN. We show that the AANs trained with both MCMC and QMC are more stable during training and produce more varied images than the standard GAN. However, the results indicate no difference between sampling by QMC simulation and sampling by MCMC.
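
For context, the quantity being approximated is the standard Boltzmann-machine log-likelihood gradient, which the abstract does not spell out; the notation below ($w_{ij}$ for the coupling between units $s_i$ and $s_j$, and the data/model averages) is assumed here rather than taken from the paper:

\[
\frac{\partial \log \mathcal{L}}{\partial w_{ij}} \;=\; \langle s_i s_j \rangle_{\mathrm{data}} \;-\; \langle s_i s_j \rangle_{\mathrm{model}},
\]

where the model average $\langle s_i s_j \rangle_{\mathrm{model}}$ is intractable for a general BM and is therefore replaced by a sample mean over configurations drawn from the BM at its current parameters, obtained either by MCMC or, in this work, by QMC simulation.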
