Abstract

Several tasks in quantum-information processing involve quantum learning. For example, quantum sensing, quantum machine learning, and quantum-computer calibration involve learning and estimating unknown parameters $\boldsymbol{\theta}=(\theta_{1},\dots,\theta_{M})$ from measurements of many copies of a quantum state $\hat{\rho}_{\boldsymbol{\theta}}$. This type of metrological information is described by the quantum Fisher information matrix, which bounds the average amount of information learned about $\boldsymbol{\theta}$ per measurement of $\hat{\rho}_{\boldsymbol{\theta}}$. In several scenarios, it is advantageous to compress the multiparameter information encoded in $\hat{\rho}_{\boldsymbol{\theta}}^{\,\otimes n}$ into $\hat{\rho}_{\boldsymbol{\theta}}^{\text{ps}\,\otimes m}$, where $m\ll n$. Here, we present a ``go-go'' theorem proving that $m/n$ can be made arbitrarily small and that the compression can happen without loss of information. We also demonstrate how to construct filters that perform this unbounded and lossless information compression. These filters can, for example, arbitrarily reduce the quantum-state intensity on experimental detectors while retaining all of the initial information. Finally, we prove that the ability to compress quantum Fisher information is a nonclassical advantage that stems from the negativity of a particular quasiprobability distribution, a quantum extension of a probability distribution.
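As a concrete illustration of the central quantity, the sketch below numerically evaluates the quantum Fisher information matrix for a simple pure-state family (a qubit parameterized by Bloch-sphere angles). This example is not from the paper; it only illustrates the standard pure-state formula $F_{ij}=4\,\mathrm{Re}\!\left(\langle\partial_i\psi|\partial_j\psi\rangle-\langle\partial_i\psi|\psi\rangle\langle\psi|\partial_j\psi\rangle\right)$, with derivatives taken by central finite differences.

```python
import numpy as np

def state(theta, phi):
    # Bloch-sphere qubit: |psi> = cos(theta/2)|0> + e^{i phi} sin(theta/2)|1>
    return np.array([np.cos(theta / 2.0),
                     np.exp(1j * phi) * np.sin(theta / 2.0)])

def qfi_matrix(psi_fn, params, eps=1e-6):
    """Quantum Fisher information matrix for a pure-state family, using
    F_ij = 4 Re( <d_i psi | d_j psi> - <d_i psi | psi><psi | d_j psi> )
    with central finite-difference derivatives d_k psi."""
    params = np.asarray(params, dtype=float)
    psi = psi_fn(*params)
    derivs = []
    for k in range(len(params)):
        dp = np.zeros_like(params)
        dp[k] = eps
        derivs.append((psi_fn(*(params + dp)) - psi_fn(*(params - dp))) / (2 * eps))
    M = len(params)
    F = np.zeros((M, M))
    for i in range(M):
        for j in range(M):
            term = (np.vdot(derivs[i], derivs[j])
                    - np.vdot(derivs[i], psi) * np.vdot(psi, derivs[j]))
            F[i, j] = 4 * term.real
    return F

theta, phi = 0.7, 0.3
F = qfi_matrix(state, [theta, phi])
# Analytic result for this family: F = diag(1, sin^2(theta))
```

The diagonal entries bound the achievable estimation precision of $\theta$ and $\phi$ per copy via the quantum Cramér-Rao bound; for this family the matrix is $\mathrm{diag}(1,\sin^{2}\theta)$.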
