Abstract

The kernel mean embedding of probability distributions is commonly used in machine learning as an injective mapping from distributions to functions in an infinite dimensional Hilbert space. It allows us, for example, to define a distance measure between probability distributions, called maximum mean discrepancy (MMD). In this work, we propose to represent probability distributions in a pure quantum state of a system that is described by an infinite dimensional Hilbert space. This enables us to work with an explicit representation of the mean embedding, whereas classically one can only work implicitly with an infinite dimensional Hilbert space through the use of the kernel trick. We show how this explicit representation can speed up methods that rely on inner products of mean embeddings and discuss the theoretical and experimental challenges that need to be solved in order to achieve these speedups.
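The MMD mentioned above is the distance between the mean embeddings of two distributions, and it can be estimated from samples using only kernel evaluations. As a rough illustration of the classical cost the paper refers to, here is a minimal sketch of the standard biased MMD² estimator with a Gaussian kernel (the kernel choice and bandwidth are illustrative, not taken from the paper); note the O(n²) pairwise kernel evaluations:

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2)); evaluates inner products
    # of infinite-dimensional feature maps implicitly (the kernel trick).
    d2 = (np.sum(a**2, axis=1)[:, None]
          + np.sum(b**2, axis=1)[None, :]
          - 2.0 * a @ b.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    # Biased empirical estimate of the squared MMD,
    # ||mu_P - mu_Q||^2 = E k(x,x') + E k(y,y') - 2 E k(x,y).
    # Cost scales with the number of sample pairs, not independently
    # of the number of data points -- the bottleneck the paper targets.
    kxx = gaussian_kernel(x, x, sigma)
    kyy = gaussian_kernel(y, y, sigma)
    kxy = gaussian_kernel(x, y, sigma)
    return kxx.mean() + kyy.mean() - 2.0 * kxy.mean()

rng = np.random.default_rng(0)
same = mmd2(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))
diff = mmd2(rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2)))
# samples from the same distribution give a much smaller MMD estimate
```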

Highlights

  • In machine learning, kernel methods are used to implicitly evaluate inner products in high-dimensional feature spaces

  • We identify methods involving the kernel mean embedding [14,15,16] as a branch of machine-learning techniques that suffer from the fact that on a classical computer, the cost of the evaluation of inner products of sums of feature maps is not independent of the number of data points involved

  • We adapt the concept of kernel mean embeddings to quantum mechanics by defining the quantum mean embedding



Summary

INTRODUCTION

Kernel methods are used to implicitly evaluate inner products in high-dimensional feature spaces. Instead of evaluating inner products explicitly in the feature space, a more efficient evaluation can be done implicitly in the original space using a positive-definite kernel function. This is known as the kernel trick [4]. Most kernel-based methods scale polynomially with the size of the data set. This problem has been tackled in the realm of quantum computation, where exponential speedups have been conjectured [5,6]. We identify methods involving the kernel mean embedding [14,15,16] as a branch of machine-learning techniques that suffer from the fact that on a classical computer, the cost of evaluating inner products of sums of feature maps is not independent of the number of data points involved. We conclude with a discussion of our results.
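The kernel trick can be illustrated with a small worked example (not from the paper): for the homogeneous polynomial kernel k(x, y) = (x·y)² on R², the explicit feature map is three-dimensional, and evaluating the kernel in the original two-dimensional space gives the same inner product without ever constructing the feature vectors:

```python
import numpy as np

def phi(x):
    # Explicit feature map for k(x, y) = (x . y)^2 on R^2:
    # phi(x) = (x1^2, sqrt(2) x1 x2, x2^2), a 3-dimensional feature space.
    return np.array([x[0]**2, np.sqrt(2) * x[0] * x[1], x[1]**2])

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])

explicit = phi(x) @ phi(y)   # inner product computed in feature space
implicit = (x @ y) ** 2      # kernel trick: same value, no feature map
# both equal 16.0
```

For a Gaussian kernel the feature space is infinite-dimensional, so only the implicit evaluation is available classically; the paper's quantum mean embedding is proposed as a way to work with the explicit representation instead.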

KERNEL MEAN EMBEDDING
QUANTUM MEAN EMBEDDING
CHALLENGES
CONCLUSION
Coherent states and Gaussian kernel
Estimation of NX