Abstract

We propose a regression algorithm that utilizes a learned dictionary optimized for sparse inference on a D-Wave quantum annealer. In this algorithm, we concatenate the independent and dependent variables into a combined vector and encode the high-order correlations between them in a dictionary optimized for sparse reconstruction. On a test dataset, the dependent variable is initialized to its average value, and a sparse reconstruction of the combined vector is then obtained in which the dependent variable is typically shifted closer to its true value, as in a standard inpainting or denoising task. Here, a quantum annealer, which can presumably exploit a fully entangled initial state to better explore the complex energy landscape, is used to solve the highly non-convex sparse coding optimization problem. The regression algorithm is demonstrated on lattice quantum chromodynamics simulation data using a D-Wave 2000Q quantum annealer, and good prediction performance is achieved. The regression test is performed with six different numbers of fully connected logical qubits, ranging from 20 to 64. The scaling results indicate that a larger number of qubits gives better prediction accuracy.
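As a rough illustration of the prediction step described above, the sketch below performs the inpainting-style regression with an already-trained dictionary: the independent variables are concatenated with the training-set mean of the dependent variable, a binary sparse code of the combined vector is inferred, and the dependent-variable block of the reconstruction is returned as the prediction. The helper greedy_binary_sparse_code is a purely classical, illustrative stand-in for the annealer-based inference; all function names and parameters here are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

def greedy_binary_sparse_code(x, phi, lam):
    """Classical stand-in for the annealer: greedily switch on the dictionary
    atom that most reduces 0.5*||x - phi a||^2 + lam*||a||_0 with a_i in {0, 1},
    stopping when no single flip lowers the objective."""
    n_atoms = phi.shape[1]
    a = np.zeros(n_atoms)
    residual = np.asarray(x, dtype=float).copy()
    while True:
        # Reduction in the objective from switching on atom j:
        #   residual . phi_j - 0.5*||phi_j||^2 - lam
        gains = residual @ phi - 0.5 * np.sum(phi**2, axis=0) - lam
        gains[a == 1] = -np.inf          # each atom can be used at most once
        best = int(np.argmax(gains))
        if gains[best] <= 0:
            break
        a[best] = 1.0
        residual -= phi[:, best]
    return a

def predict(x, y_mean, phi, lam=0.1):
    """Regression as inpainting: concatenate x with the mean of the dependent
    variable, sparse-reconstruct the combined vector, and read off the y block."""
    v = np.concatenate([np.asarray(x, dtype=float), np.atleast_1d(y_mean)])
    a = greedy_binary_sparse_code(v, phi, lam)
    reconstruction = phi @ a
    return reconstruction[len(x):]       # inpainted dependent variable(s)
```

In the paper this inference step is the part handed to the D-Wave annealer as a QUBO; the greedy loop above is only a convenient classical placeholder for experimenting with the overall regression flow.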

Highlights

  • Sparse coding refers to a class of unsupervised learning algorithms for finding an optimized set of basis vectors, or dictionary, for accurately reconstructing inputs drawn from any given dataset using the fewest nonzero coefficients

  • We developed a mapping of the a^(k)-optimization in Eq. (1) to the quadratic unconstrained binary optimization (QUBO) problem that can be solved on a quantum annealer and demonstrated its feasibility on the D-Wave systems [9,10,11]; a classical sketch of this mapping is given after this list

  • We proposed a regression algorithm using sparse coding dictionary learning that can be implemented on a quantum annealer, based on the formulation of regression as an inpainting problem
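The sketch below shows one standard way the QUBO mentioned in the second highlight can be constructed: expanding ½‖X − φa‖² + λ‖a‖₀ for binary a_i and using a_i² = a_i gives a quadratic form aᵀQa with Q_ii = ½ φ_i·φ_i − φ_i·X + λ and Q_ij = φ_i·φ_j for i < j, up to an irrelevant constant ½ X·X. This is the generic construction implied by that expansion, not necessarily the exact matrix used in the paper, and the brute-force solver is only a small-scale stand-in for the annealer.

```python
import numpy as np

def sparse_coding_qubo(x, phi, lam):
    """QUBO for min_a 0.5*||x - phi a||^2 + lam*sum(a) with a_i in {0, 1}.
    Upper-triangular convention: energy(a) = a @ Q @ a."""
    G = phi.T @ phi                          # Gram matrix of dictionary atoms
    Q = np.triu(G, k=1)                      # couplers Q_ij = phi_i . phi_j (i < j)
    Q[np.diag_indices_from(Q)] = 0.5 * np.diag(G) - phi.T @ x + lam
    return Q

def brute_force_minimum(Q):
    """Exhaustive minimiser, feasible only for a handful of atoms; on hardware
    the same Q would instead be submitted to the quantum annealer."""
    n = Q.shape[0]
    best_a, best_energy = None, np.inf
    for bits in range(2 ** n):
        a = np.array([(bits >> i) & 1 for i in range(n)], dtype=float)
        energy = a @ Q @ a
        if energy < best_energy:
            best_a, best_energy = a, energy
    return best_a, best_energy
```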


Summary

Introduction

Sparse coding refers to a class of unsupervised learning algorithms for finding an optimized set of basis vectors, or dictionary, for accurately reconstructing inputs drawn from a given dataset using the fewest nonzero coefficients. Optimizing a dictionary φ ∈ R^{M×N_q} for a given dataset and inferring optimal sparse representations a^{(k)} ∈ R^{N_q} of input data X^{(k)} ∈ R^M involves finding solutions of the minimization problem

\min_{\phi} \sum_{k} \min_{a^{(k)}} \left[ \tfrac{1}{2} \big\| X^{(k)} - \phi\, a^{(k)} \big\|_2^2 + \lambda \big\| a^{(k)} \big\|_0 \right], \qquad (1)

where λ controls the trade-off between reconstruction accuracy and sparsity. By mapping the sparse coding problem onto a QUBO structure, the sparse coefficients are restricted to binary variables a_i ∈ {0, 1}, which makes the L0-norm equivalent to the L1-norm. Despite this restriction, the approach was able to provide good sparse representations of the MNIST [9,11,15] and CIFAR-10 [10,16] images.
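To make the alternating structure of the dictionary optimization in Eq. (1) concrete, the sketch below (reusing the greedy_binary_sparse_code stand-in from the first sketch, in place of the annealer) alternates binary sparse inference for each sample with a plain gradient step on φ for the reconstruction error; the learning rate, column normalization, and epoch count are illustrative assumptions, not the authors' training schedule.

```python
import numpy as np

def learn_dictionary(X, n_atoms, lam=0.1, n_epochs=50, lr=0.1, seed=0):
    """Alternate (i) binary sparse inference of a^(k) for each sample X^(k) and
    (ii) a gradient step on phi for 0.5*||X^(k) - phi a^(k)||^2.
    Step (i) is where the QUBO/annealer inference would be used."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    phi = rng.normal(size=(n_features, n_atoms))
    phi /= np.linalg.norm(phi, axis=0)                    # unit-norm atoms
    for _ in range(n_epochs):
        for x in X:
            a = greedy_binary_sparse_code(x, phi, lam)    # stand-in for the annealer
            residual = x - phi @ a
            phi += lr * np.outer(residual, a)             # -grad of 0.5*||residual||^2
            phi /= np.linalg.norm(phi, axis=0) + 1e-12    # keep atoms normalized
    return phi
```

The learned φ is then what the predict sketch above consumes for the inpainting-style regression.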
