Abstract

We present a new lossy compression algorithm for statistical floating-point data based on representation learning with binary variables. The algorithm finds a set of basis vectors and their binary coefficients that precisely reconstruct the original data. The optimization of the basis vectors is performed classically, while the binary coefficients are retrieved through both simulated and quantum annealing for comparison. A bias correction procedure is also presented to estimate and eliminate the error and bias introduced by the inexact reconstruction of the lossy compression in statistical data analyses. The compression algorithm is demonstrated on two different datasets of lattice quantum chromodynamics simulations. The results obtained using simulated annealing show 3–3.5 times better compression performance than an algorithm based on a neural-network autoencoder. Calculations using quantum annealing also show promising results, but performance is limited by the integrated control error of the quantum processing unit, which yields large uncertainties in the biases and coupling parameters. We further compare hardware between the previous-generation D-Wave 2000Q and the current D-Wave Advantage system. Our study shows that the Advantage system is more likely than the 2000Q to obtain low-energy solutions for these problems.
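The full text describes the bias correction procedure in detail. As a rough, synthetic illustration of the general idea only (not the paper's exact procedure; all numbers and the retained-subset strategy are assumptions for this sketch), a small subset of exact data can be used to estimate and subtract the reconstruction bias of a sample mean:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins: "original" statistical samples and a lossy
# reconstruction carrying a small systematic bias plus noise.
data = rng.normal(loc=1.0, scale=0.5, size=10_000)
recon = data + 0.05 + 0.02 * rng.normal(size=data.shape)

# Retain a small subset of the exact data to estimate the bias
# introduced by the inexact reconstruction.
keep = rng.choice(data.size, size=500, replace=False)
bias_est = np.mean(recon[keep] - data[keep])

# Subtract the estimated bias from the reconstructed-sample mean.
corrected_mean = recon.mean() - bias_est
print(data.mean(), recon.mean(), corrected_mean)
```

The corrected mean tracks the exact-data mean far more closely than the raw reconstructed mean does, at the storage cost of the retained subset.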

Highlights

  • We present a new lossy compression algorithm for statistical floating-point data based on representation learning with binary variables

  • The optimization of the basis vectors is performed classically, while the binary coefficients are retrieved through both simulated and quantum annealing for comparison

  • The algorithm finds a set of basis vectors, common to all data, and their binary coefficients (Nq per input) that precisely reconstruct each D-dimensional input vector
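For a fixed set of basis vectors, choosing the binary coefficients that best reconstruct an input vector is a quadratic unconstrained binary optimization (QUBO) problem. A minimal sketch of one plausible mapping (the basis G, dimensions, and noise level below are illustrative, not the paper's setup): minimizing ||x - G a||^2 over binary a expands to a QUBO matrix whose off-diagonal entries come from G^T G and whose diagonal absorbs the linear term, since a_i^2 = a_i.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

D, Nq = 8, 4                       # data dimension, number of binary coefficients
G = rng.normal(size=(D, Nq))       # basis vectors (columns), found classically
x = G @ rng.integers(0, 2, Nq) + 0.01 * rng.normal(size=D)  # noisy target vector

# ||x - G a||^2 = a^T (G^T G) a - 2 (G^T x)^T a + const, with a_i in {0,1}.
# Because a_i^2 = a_i, the linear term folds into the diagonal of the QUBO matrix.
GtG, Gtx = G.T @ G, G.T @ x
Q = GtG.copy()
np.fill_diagonal(Q, np.diag(GtG) - 2.0 * Gtx)   # E(a) = a^T Q a (up to a constant)

# For small Nq, enumerate all 2^Nq bit strings to find the exact minimum.
best = min((np.array(bits) for bits in itertools.product([0, 1], repeat=Nq)),
           key=lambda a: a @ Q @ a)
print(best, np.linalg.norm(x - G @ best))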


Summary

Introduction

Modern lattice quantum chromodynamics (QCD) simulations targeting high-precision results generate O(PB) of data [1,2] and store it on storage systems for long-term analysis. Retrieving the binary coefficients in our compression scheme is a combinatorial optimization problem, and quantum annealing is an approach to solving such problems using adiabatic quantum computation (AQC). Quantum annealing is AQC with a relaxation of the adiabatic condition in an open system at finite temperature [19]. It solves combinatorial optimization problems by exploiting quantum tunneling through barriers between local minima [20–23].
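Classically, the same binary-coefficient search can be attacked with simulated annealing. A minimal single-bit-flip Metropolis sketch for a QUBO objective E(a) = a^T Q a (the cooling schedule, sweep count, and toy problem are illustrative assumptions, not the paper's or D-Wave's settings):

```python
import numpy as np

def simulated_annealing_qubo(Q, n_sweeps=2000, T0=2.0, T1=0.01, seed=1):
    """Minimize E(a) = a^T Q a over a in {0,1}^n by single-bit-flip Metropolis."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    a = rng.integers(0, 2, n)                    # random initial bit string
    Qs = Q + Q.T                                 # symmetrized couplings
    for T in np.geomspace(T0, T1, n_sweeps):     # geometric cooling schedule
        for i in rng.permutation(n):
            s = 1 - 2 * a[i]                     # +1 flips 0->1, -1 flips 1->0
            dE = s * (Qs[i] @ a) + Q[i, i]       # exact energy change of the flip
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                a[i] += s                        # accept the flip
    return a

# Toy problem: E(a) = -sum(a), whose minimum is the all-ones string.
print(simulated_annealing_qubo(-np.eye(5)))      # prints [1 1 1 1 1]
```

At low final temperature the walker settles into a low-energy configuration; quantum annealing replaces the thermal flips with tunneling through barriers between local minima.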

