Abstract

Multivariate information decompositions hold promise to yield insight into complex systems, and stand out for their ability to identify synergistic phenomena. However, the adoption of these approaches has been hindered by there being multiple possible decompositions, and no precise guidance for preferring one over the others. At the heart of this disagreement lies the absence of a clear operational interpretation of what synergistic information is. Here we fill this gap by proposing a new information decomposition based on a novel operationalisation of informational synergy, which leverages recent developments in the literature of data privacy. Our decomposition is defined for any number of information sources, and its atoms can be calculated using elementary optimisation techniques. The decomposition provides a natural coarse-graining that scales gracefully with the system’s size, and is applicable in a wide range of scenarios of practical interest.

Highlights

  • The familiarity with which we relate to the notion of “information” – due to its central role in our modern worldview – is at odds with the mysteries still surrounding some of its fundamental properties

  • For each Hamiltonian order k there is only one non-zero backbone atom, which suggests that I(X; Y) ≈ B∂k(X → Y). Note that this relationship between Hamiltonian interaction order and backbone atom is highly non-trivial, and finding analytical methods to make this connection more explicit is an open question. These findings suggest that the backbone decomposition may provide an analogue to the measure of connected information introduced in Refs. [10, 11], which captures the effects of Hamiltonian high-order terms over their corresponding Gibbs distributions [49]

  • Traditional Partial Information Decomposition (PID)-type decompositions for two sources are based on the following conditions:
    I(Xi; Y) = Red(X1, X2 → Y) + Un(Xi; Y | Xj)
    I(Xi; Y | Xj) = Un(Xi; Y | Xj) + Syn(X1, X2 → Y),
    which are valid for i, j ∈ {1, 2} with i ≠ j
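The two-source PID conditions in the highlight above can be illustrated on the XOR gate, the canonical example of pure synergy. The sketch below is not the paper's own decomposition: it uses the simple minimal-mutual-information (MMI) redundancy, Red = min(I(X1;Y), I(X2;Y)), purely as a stand-in to show how the four atoms follow from the two conditions once a redundancy measure is fixed.

```python
# Two-source PID on the XOR gate, with MMI redundancy as an illustrative
# stand-in measure (NOT the decomposition proposed in this paper).
from itertools import product
from math import log2

# Joint distribution p(x1, x2, y) for uniform input bits and y = x1 XOR x2.
p = {(x1, x2, x1 ^ x2): 0.25 for x1, x2 in product((0, 1), repeat=2)}

def mi(p, a_idx, b_idx):
    """Mutual information I(A;B), where A and B are the variable groups
    selected by the index tuples a_idx and b_idx."""
    pa, pb, pab = {}, {}, {}
    for outcome, prob in p.items():
        a = tuple(outcome[i] for i in a_idx)
        b = tuple(outcome[i] for i in b_idx)
        pa[a] = pa.get(a, 0.0) + prob
        pb[b] = pb.get(b, 0.0) + prob
        pab[(a, b)] = pab.get((a, b), 0.0) + prob
    return sum(q * log2(q / (pa[a] * pb[b])) for (a, b), q in pab.items() if q > 0)

i1 = mi(p, (0,), (2,))        # I(X1;Y) = 0: each input alone says nothing
i2 = mi(p, (1,), (2,))        # I(X2;Y) = 0
i12 = mi(p, (0, 1), (2,))     # I(X1,X2;Y) = 1 bit: together they determine Y
red = min(i1, i2)             # MMI redundancy
un1, un2 = i1 - red, i2 - red # unique atoms, from the first PID condition
syn = i12 - red - un1 - un2   # synergy, from the second PID condition
print(red, un1, un2, syn)     # for XOR the full bit is synergistic
```

With any redundancy measure satisfying the two conditions, the unique and synergistic atoms are fixed once Red is chosen; for XOR all candidate measures agree that the single bit I(X1,X2;Y) is purely synergistic.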


Summary

INTRODUCTION

The familiarity with which we relate to the notion of “information” – due to its central role in our modern worldview – is at odds with the mysteries still surrounding some of its fundamental properties. Informational synergy has been studied following various approaches, including redundancy-synergy balances [5, 7–9], information geometry [10, 11], and others. Within this literature, one of the most elegant and powerful proposals is the Partial Information Decomposition (PID) framework [12], which divides information into redundant (contained in every part of the system), unique (contained in only one part), and synergistic (contained in the whole, but not in any part) components. Most approaches to quantify synergy proceed by postulating axioms encoding some “intuitive” desiderata, which should ideally lead towards a unique measure. Building on these remarks, we argue that measures of synergy with little concrete, operational meaning provide a limited advance over mere qualitative criteria.

SYNERGY AND DATA DISCLOSURE
Synergistic channels
Synergistic disclosure
Fundamental properties
INFORMATION DECOMPOSITION
The extended constraint lattice
THE BACKBONE DECOMPOSITION
The backbone constraint lattice
Backbone atoms
Examples
SYNERGISTIC CAPACITY AND PRIVATE SELF-DISCLOSURE
RELATIONSHIP WITH OTHER INFORMATION DECOMPOSITIONS
Axioms
General relationship with PID
Disclosure decomposition
Numerical comparisons with other PIDs
CONCLUSION