Abstract

To design data processing algorithms with the smallest average processing time, we need to know what this average stands for, i.e., we need to know the probabilities of different inputs. At first glance, it may seem that real-life data are so chaotic that no stable probabilities are possible at all: today we may apply our software package to elementary particles, tomorrow to distances between the stars, etc. However, contrary to this intuitive feeling, there are stable probabilities in real-life data. This fact was first discovered in 1881 by Simon Newcomb, who noticed that the first pages of logarithm tables (the pages that contain numbers starting with 1) are more used than the last ones (the pages that contain numbers starting with 9). To check this observation, he took all the physical constants from a reference book and counted how many of them start with 1. The intuitive expectation is that all 9 possible first digits should be equally probable, so each should occur in about 1/9, i.e., about 11%, of the cases. In reality, about 30% of these constants turned out to start with 1. In general, the fraction of constants that start with a digit d can be described as log10(d + 1) - log10(d). We describe a new interval computations-related explanation for this empirical fact, and we explain its relationship with the lifetime of the Universe and with the general problem of determining subjective (fuzzy) probabilities on finite and infinite intervals.
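
As a quick illustration (not part of the original abstract), the following self-contained Python sketch prints the first-digit frequencies log10(d + 1) - log10(d) for each digit d, and checks empirically that data spread uniformly on a log scale reproduce them; the constants 0-6 orders of magnitude and 100,000 samples are arbitrary choices for the demonstration.

```python
import math
import random

# Benford's law: the fraction of values whose leading decimal digit is d
# equals log10(d + 1) - log10(d); for d = 1 this is log10(2), about 0.301.
for d in range(1, 10):
    print(f"digit {d}: {math.log10(d + 1) - math.log10(d):.3f}")

# Empirical check: values spread uniformly on a log scale (i.e., over
# several orders of magnitude) show the same first-digit frequencies.
def leading_digit(x: float) -> int:
    # Scale x into [1, 10) and take the integer part.
    return int(x / 10 ** math.floor(math.log10(x)))

samples = [10 ** random.uniform(0.0, 6.0) for _ in range(100_000)]
share_of_ones = sum(leading_digit(x) == 1 for x in samples) / len(samples)
print(f"fraction starting with 1: {share_of_ones:.3f}")  # roughly 0.30, not 0.11
```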
