Abstract

Uncertainty is a fundamental characteristic of quantum systems. The degree of uncertainty of an observable has long been quantified by the standard deviation of that observable. In recent years, however, by analyzing certain special examples, researchers have found that the Shannon entropy of the measurement outcomes of an observable is more suitable for quantifying its uncertainty. Formally, the Shannon entropy is a special limit of the more general Rényi entropy. In this paper, we discuss how to predict the measurement outcome of an observable from its existing measurement results, and how to quantitatively describe the uncertainty of the observable from the perspective of the repeatable probability of its measurement results in an unknown state. We argue that if the same observable is measured repeatedly and independently on many systems prepared in the same state, then the probability of obtaining identical measurement results is a decaying function of the number of measurements yielding the same result, and the rate at which this repeatable probability decays with the number of repeated measurements represents the degree of uncertainty of the observable in that state. That is, the greater the uncertainty of an observable, the faster the repeatable probability decays with the number of repeated measurements; conversely, the smaller the uncertainty, the slower it decays. This observation enables us to express the Shannon entropy and the Rényi entropy of an observable uniformly through the functional relation between the repeatable probability and the number of repeated measurements. We show that the Shannon entropy and the Rényi entropy can be formally regarded as the “decay index” of the repeatable probability with respect to the number of repeated measurements. In this way we also define a generalized Rényi entropy through the repeatable probability of consecutively observing identical results of an observable, and we use this generalized Rényi entropy to prove a Maassen-Uffink-type entropic uncertainty relation. This way of defining entropy shows that the entropic uncertainty relation is a quantitative limit on the decay rate of the total probability of obtaining identical measurement results when two observables are measured simultaneously many times.
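
As a minimal sketch of the kind of relation described above (in our own notation, not taken from the paper), suppose the observable has outcome probabilities $p_i$ in the given state and is measured independently $n$ times. The repeatable probability of obtaining the same outcome in all $n$ runs and its decay index can then be written as
\[
P(n) \;=\; \sum_i p_i^{\,n} \;=\; e^{-(n-1)\,H_n(p)},
\qquad
H_n(p) \;=\; \frac{1}{1-n}\,\log \sum_i p_i^{\,n},
\]
so the Rényi entropy $H_n(p)$ plays the role of the decay rate of $P(n)$ per additional measurement, and the Shannon entropy $-\sum_i p_i \log p_i$ is recovered in the limit $n \to 1$.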
