Abstract

The Jensen–Fisher divergence between probability distributions is introduced and its theoretical foundations are established. In contrast to other Jensen divergences, this quantity captures the fluctuations of the probability distributions because it is controlled by the (local) Fisher information, a gradient functional of the distribution. It is therefore an appropriate and informative tool for studying the similarity of distributions, especially those of oscillatory character. The new Jensen–Fisher divergence shares the following properties with the Jensen–Shannon divergence: non-negativity, additivity when applied to an arbitrary number of probability densities, symmetry under exchange of these densities, vanishing under certain conditions, and definiteness even when the densities have non-common zeros. Moreover, the Jensen–Fisher divergence can be expressed in terms of the relative Fisher information, just as the Jensen–Shannon divergence can in terms of the Kullback–Leibler (relative Shannon) entropy. Finally, the Jensen–Shannon and Jensen–Fisher divergences are compared for three large, non-trivial and qualitatively different families of probability distributions: the sinusoidal, generalized gamma-like and Rakhmanov–Hermite distributions, which are closely related to the quantum-mechanical probability densities of numerous physical systems.
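The non-negativity noted above can be illustrated numerically. The sketch below assumes the standard Jensen-type forms suggested by the abstract, namely JSD(p, q) = H((p+q)/2) − [H(p)+H(q)]/2 (non-negative because Shannon entropy H is concave) and JFD(p, q) = [I(p)+I(q)]/2 − I((p+q)/2) (non-negative because the Fisher information functional I is convex); these are illustrative definitions on a grid, not the paper's full derivation, and the choice of two Gaussian densities is an arbitrary example.

```python
# Minimal numerical sketch (assumed standard definitions, not the paper's code):
#   JSD(p, q) = H((p+q)/2) - [H(p) + H(q)]/2,  H = differential Shannon entropy
#   JFD(p, q) = [I(p) + I(q)]/2 - I((p+q)/2),  I = Fisher information of the density
import numpy as np

def shannon_entropy(p, dx):
    """Differential Shannon entropy -∫ p ln p dx on a uniform grid."""
    mask = p > 0
    return -np.sum(p[mask] * np.log(p[mask])) * dx

def fisher_information(p, dx):
    """Fisher information ∫ (p')^2 / p dx -- a gradient functional of p."""
    dp = np.gradient(p, dx)
    mask = p > 1e-12  # avoid division by (numerically) zero density
    return np.sum(dp[mask] ** 2 / p[mask]) * dx

def jensen_shannon(p, q, dx):
    m = 0.5 * (p + q)
    return shannon_entropy(m, dx) - 0.5 * (shannon_entropy(p, dx)
                                           + shannon_entropy(q, dx))

def jensen_fisher(p, q, dx):
    m = 0.5 * (p + q)
    return 0.5 * (fisher_information(p, dx)
                  + fisher_information(q, dx)) - fisher_information(m, dx)

# Two example densities: unit-variance Gaussians centered at +1 and -1.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
p = np.exp(-0.5 * (x - 1.0) ** 2) / np.sqrt(2.0 * np.pi)
q = np.exp(-0.5 * (x + 1.0) ** 2) / np.sqrt(2.0 * np.pi)

jsd = jensen_shannon(p, q, dx)   # non-negative: H is concave
jfd = jensen_fisher(p, q, dx)    # non-negative: I is convex
print(jsd, jfd)
```

Both quantities come out strictly positive for distinct densities, consistent with the non-negativity property stated in the abstract; the masking of near-zero density values also reflects the definiteness of the divergence when the densities have non-common zeros.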
