Abstract

When we have only partial information about a probability distribution, i.e., when several different probability distributions are consistent with our knowledge, it makes sense to select the distribution with the largest entropy. In particular, when we only know that a quantity is located within a certain interval, and we have no information about the probabilities of different values within this interval, it is reasonable to assume that all these values are equally probable, i.e., that we have a uniform distribution on this interval. The problem with this idea is that if we apply it to the same quantity after a non-linear rescaling, we get a different (non-uniform) distribution on the original scale. In other words, the results of applying the Maximum Entropy approach seem rather arbitrary: they depend on which scale we apply it to. In this paper, we show how to overcome this subjectivity: namely, we propose to take into account that, due to measurement inaccuracy, there are always finitely many possible measurement results, and this finiteness makes the results of applying the Maximum Entropy approach uniquely determined.

1 Maximum Entropy Approach and Its Limitations

Need to describe probabilities. One of the main objectives of science is to predict future events based on the available information. In many practical situations, it is not possible to predict future events uniquely: there are many factors which are difficult to take into account. For example, while we can predict tomorrow's weather reasonably well, these predictions are not exact. In such situations, when we know that several different values of the same future quantity are possible, it is desirable to describe the frequency of the different possible values, i.e., to describe the probability distribution on the set of all possible values.
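To make the rescaling problem concrete, here is a minimal Python sketch (not from the paper; the interval [0, 1] and the rescaling y = x**2 are illustrative assumptions). It applies the Maximum Entropy approach once on the original scale and once after the rescaling, and compares the two resulting distributions of the original quantity x:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

# Maximum Entropy on the original scale: x is uniform on [0, 1].
x_direct = rng.uniform(0.0, 1.0, n_samples)

# Maximum Entropy applied after the nonlinear rescaling y = x**2:
# y is uniform on [0, 1], so back on the original scale x = sqrt(y),
# whose density is p(x) = 2x, i.e., no longer uniform.
x_via_y = np.sqrt(rng.uniform(0.0, 1.0, n_samples))

print("E[x], MaxEnt on the x-scale:", x_direct.mean())  # ~0.5
print("E[x], MaxEnt on the y-scale:", x_via_y.mean())   # ~2/3
```

The two procedures disagree (a mean of about 0.5 versus about 2/3, since E[sqrt(U)] = 2/3 for U uniform on [0, 1]). This is exactly the scale-dependence that the paper's finiteness argument is meant to remove.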
