Abstract
Probability distributions can be read as simple expressions of information. Each continuous probability distribution describes how information changes with magnitude. Once one learns to read a probability distribution as a measurement scale of information, opportunities arise to understand the processes that generate the commonly observed patterns. Probability expressions may be parsed into four components: the dissipation of all information, except the preservation of average values, taken over the measurement scale that relates changes in observed values to changes in information, and the transformation from the underlying scale on which information dissipates to alternative scales on which probability pattern may be expressed. Information invariances set the commonly observed measurement scales and the relations between them. In particular, a measurement scale for information is defined by its invariance to specific transformations of underlying values into measurable outputs. Essentially all common distributions can be understood within this simple framework of information invariance and measurement scale.
Highlights
Patterns of nature often follow probability distributions
What is the distribution of errors in measurements? How do average values in samples vary around the true mean value? In these cases, we may describe the intrinsic variability by the variance.
Finding the measurement scale and the associated constraint that lead to a particular form for a distribution is useful, because the constraint concisely expresses the information in a probability pattern [4,6]
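The idea that a constraint concisely expresses the information in a probability pattern can be illustrated numerically. The sketch below (my illustration, not code from the paper) compares the differential entropy of three densities on the positive reals that share the same mean: dissipating all information except the average value, in the maximum-entropy sense, selects the exponential distribution, which has the largest entropy of the three.

```python
# Sketch (illustration only): among distributions on [0, inf) with a
# fixed mean, "dissipating all information except the average" --
# i.e. maximizing entropy under a mean constraint -- selects the
# exponential distribution.
import numpy as np

def entropy(pdf, x):
    """Differential entropy -integral of p*ln(p), by a Riemann sum."""
    dx = x[1] - x[0]
    p = pdf(x)
    p = np.where(p > 0, p, 1e-300)  # avoid log(0) where the density is zero
    return -float(np.sum(p * np.log(p)) * dx)

mean = 2.0
x = np.linspace(1e-6, 60.0, 200_000)

# Three densities on [0, inf), all with mean 2:
expo    = lambda t: np.exp(-t / mean) / mean                      # exponential
uniform = lambda t: np.where(t <= 2 * mean, 1 / (2 * mean), 0.0)  # uniform on [0, 4]
gamma2  = lambda t: t * np.exp(-t)                                # Gamma(k=2, scale=1)

h_exp = entropy(expo, x)     # analytically 1 + ln(mean), about 1.693
h_uni = entropy(uniform, x)  # analytically ln(4), about 1.386
h_gam = entropy(gamma2, x)   # about 1.577
print(h_exp, h_uni, h_gam)
assert h_exp > h_uni and h_exp > h_gam
```

The exponential's entropy exceeds that of the other two mean-matched densities, which is the numerical face of the constraint-based reading described above.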
Summary
Patterns of nature often follow probability distributions. Physical processes lead to an exponential distribution of energy levels among a collection of particles. Economic patterns of income typically match variants of the Pareto distributions with power law tails. Theories in those different disciplines attempt to fit observed patterns to an underlying generative process. How much do we really learn by this inverse problem, in which we start with an observed distribution of outcomes and try to infer underlying process? The central limit theorem is, in essence, the statement that adding up all sorts of different independent processes often leads to a Gaussian distribution of fluctuations about the mean value. The commonly observed patterns are common because they are consistent with so many different underlying processes and initial conditions. The common patterns are difficult with regard to the inverse problem of going from observed distributions to inferences about underlying generative processes. How can we learn to read a mathematical expression of a probability pattern as a statement about the family of underlying processes that may generate it?
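The central-limit-theorem point made above, that many distinct underlying processes aggregate to the same Gaussian pattern, can be seen in a short simulation. The sketch below (my illustration, not from the paper) sums independent draws from two quite different source distributions and checks that the standardized sums place roughly Gaussian mass in the tails either way.

```python
# Sketch (illustration only): sums of many independent draws fluctuate
# around their mean in an approximately Gaussian way, regardless of
# whether the summands are uniform or exponential -- one reason the
# inverse problem (pattern -> process) is so underdetermined.
import numpy as np

rng = np.random.default_rng(0)
n_terms, n_samples = 200, 20_000

for name, draw in [("uniform", rng.uniform), ("exponential", rng.exponential)]:
    sums = draw(size=(n_samples, n_terms)).sum(axis=1)
    z = (sums - sums.mean()) / sums.std()   # standardize the sums
    # Gaussian prediction for the two-sided tail: P(|Z| > 2) is about 0.0455
    tail = float(np.mean(np.abs(z) > 2))
    print(f"{name:12s} P(|Z|>2) = {tail:.4f}")
```

Both source distributions produce nearly the same tail mass, so observing the Gaussian sum pattern alone cannot distinguish between them.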