Abstract

We propose that entropy is a universal co-homological class in a theory associated to a family of observable quantities and a family of probability distributions. Three cases are presented: (1) classical probabilities and random variables; (2) quantum probabilities and observable operators; (3) dynamic probabilities and observation trees. This gives rise to a new kind of topology for information processes, which accounts for the main information functions: entropy, mutual informations at all orders, and the Kullback–Leibler divergence, and generalizes them in several ways. The article is divided into two parts, which can be read independently. In the first part, the introduction, we provide an overview of the results, some open questions and directions of future research, and briefly discuss the application to complex data. In the second part we give the complete definitions and proofs of Theorems A, C and E of the introduction, which show why entropy is the first homological invariant of a structure of information in four contexts: static classical or quantum probability, and dynamics of classical or quantum strategies of observation of a finite system.

Highlights

  • Three cases are presented: (1) classical probabilities and random variables; (2) quantum probabilities and observable operators; (3) dynamic probabilities and observation trees. This gives rise to a new kind of topology for information processes, which accounts for the main information functions: entropy, mutual informations at all orders, and the Kullback–Leibler divergence, and generalizes them in several ways

  • We suggest that all information quantities are of co-homological nature, in a setting which depends on a pair of categories: one for the data on a system, such as random variables or functions of solutions of an equation, and one for the parameters of this system, such as probability laws or coefficients of equations. The first category generates an algebraic structure such as a monoid or, more generally, a monad, and the second category generates a representation of this structure, as conditioning or adding new numbers do, for instance; information quantities are co-cycles associated with this module

  • We call random variables (r.v.) on a finite set Ω congruent when they define the same partition (recall that a partition of Ω is a family of disjoint non-empty subsets covering Ω, and that the partition associated with an r.v. X is the family of subsets Ω_x of Ω defined by the equations X(ω) = x); the join r.v. of Y and Z, denoted by (Y, Z), corresponds to the least fine partition that is finer than both Y and Z (see the sketch below)
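As a concrete illustration, the following minimal Python sketch treats a random variable on a finite set Ω as the partition it induces, and computes the join; the helper names (partition, join) and the example variables are our own, not the article's.

```python
def partition(omega, X):
    """Partition of a finite set omega induced by a variable X:
    the blocks are the level sets {w : X(w) = x}."""
    blocks = {}
    for w in omega:
        blocks.setdefault(X(w), set()).add(w)
    # A partition is just the set of blocks; freezing them makes
    # congruent variables compare equal.
    return frozenset(frozenset(b) for b in blocks.values())

def join(omega, Y, Z):
    """Join (Y, Z): the least fine partition refining both Y and Z,
    induced by the paired variable w -> (Y(w), Z(w))."""
    return partition(omega, lambda w: (Y(w), Z(w)))

# Two congruent variables (same level sets, different value labels):
omega = range(4)
X1 = lambda w: w % 2                              # parity as 0/1
X2 = lambda w: "even" if w % 2 == 0 else "odd"    # parity as labels
Y = lambda w: w // 2                              # high bit

assert partition(omega, X1) == partition(omega, X2)  # congruence
print(sorted(map(sorted, join(omega, X1, Y))))       # [[0], [1], [2], [3]]
```

Pairing the values realizes the least fine common refinement directly, and the equality test on frozen blocks is exactly the congruence relation of the article.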

Summary

Information Homology

It corresponds to a standard non-homogeneous bar complex (cf. [5]): the co-boundary of an N-cochain F ∈ C^N is

(δF)(S_0; S_1; …; S_N; P) = S_0.F(S_1; …; S_N; P) + Σ_{i=1}^{N} (−1)^i F(S_0; …; (S_{i−1}, S_i); …; S_N; P) + (−1)^{N+1} F(S_0; …; S_{N−1}; P),

where the first variable S_0 acts on F by averaged conditioning. Another co-boundary operator on C^N is δ_t (t for twisted or trivial action, or topological complex), defined by the same formula with the first term S_0.F(S_1; …; S_N; P) replaced by F(S_1; …; S_N; P). Theorem 1 (Section 2.3; cf. [7]) states that, for the full simplex ∆(Ω), and if S is the monoid generated by a set of at least two variables such that each pair takes at least four values, the information co-homology space of degree one is one-dimensional and generated by the classical entropy. The absolute minima of the tri-variate mutual information I_3 correspond to Borromean links, interpreted as synergy; cf. [11,12]
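A minimal numerical sketch of the degree-one statement (our own illustration; the names H, act, delta_H and I3 are assumptions, not the article's notation): entropy satisfies the cocycle equation (δH)(S_0; S_1; P) = S_0.H(S_1; P) − H((S_0, S_1); P) + H(S_0; P) = 0, which is exactly the chain rule, and the uniform XOR law attains the absolute minimum I_3 = −1.

```python
from math import log2
from collections import Counter
from itertools import product

# Finite probability space: dict mapping outcomes w to P(w).
# Variables are functions of w.

def H(P, X):
    """Shannon entropy of variable X under the law P."""
    law = Counter()
    for w, p in P.items():
        law[X(w)] += p
    return -sum(p * log2(p) for p in law.values() if p > 0)

def act(P, S, F, X):
    """Action of S on a 1-cochain F by averaged conditioning:
    (S.F)(X; P) = sum_s P(S = s) * F(X; P | S = s)."""
    total = 0.0
    for s in {S(w) for w in P}:
        ps = sum(p for w, p in P.items() if S(w) == s)
        cond = {w: p / ps for w, p in P.items() if S(w) == s}
        total += ps * F(cond, X)
    return total

def delta_H(P, S0, S1):
    """Degree-1 coboundary of entropy:
    (δH)(S0; S1; P) = S0.H(S1; P) − H((S0, S1); P) + H(S0; P)."""
    joint = lambda w: (S0(w), S1(w))
    return act(P, S0, H, S1) - H(P, joint) + H(P, S0)

def I3(P, X, Y, Z):
    """Tri-variate mutual information (alternating sum of entropies)."""
    j = lambda *Vs: (lambda w: tuple(V(w) for V in Vs))
    return (H(P, X) + H(P, Y) + H(P, Z)
            - H(P, j(X, Y)) - H(P, j(X, Z)) - H(P, j(Y, Z))
            + H(P, j(X, Y, Z)))

# Uniform law on triples (x, y, x XOR y): Z is determined by (X, Y) jointly
# but independent of each separately — the synergistic "Borromean" pattern.
P = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}
X, Y, Z = (lambda w: w[0]), (lambda w: w[1]), (lambda w: w[2])

print(abs(delta_H(P, X, Y)) < 1e-12)   # True: entropy is a 1-cocycle
print(I3(P, X, Y, Z))                  # -1.0: absolute minimum, pure synergy
```

The XOR triple is "Borromean" in the sense that any two of the three variables are independent, while the three together are completely dependent, which is why I_3 reaches its absolute minimum there.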

Extension to Quantum Information
Concavity and Convexity Properties of Information Quantities
Monadic Cohomology of Information
The Forms of Information Strategies
Conclusion and Perspective
Information Structures and Probability Families
Non-Homogeneous Information Co-Homology
Entropy
Quantum Information and Projective Geometry
Quantum Information Structures and Density Functors
Quantum Information Homology
Structure of Observation of a Finite System
Problems of Discrimination
Co-Homology of Observation Strategies
Arborescent Mutual Information