Abstract

We discuss a one-parameter family of generalized cross entropies between two distributions, indexed by a power parameter and called the projective power entropy. The cross entropy essentially reduces to the Tsallis entropy when the two distributions are taken to be equal. Statistical and probabilistic properties associated with the projective power entropy are investigated extensively, including the characterization problem of which conditions determine the projective power entropy uniquely up to the power index. A close relation between the entropy and the Lebesgue space Lp with its dual Lq is explored, in which the escort distribution appears with an interesting property. When we consider maximum Tsallis entropy distributions under constraints on the mean vector and variance matrix, the model becomes a multivariate q-Gaussian model with elliptical contours, including the Gaussian and t-distribution models. We discuss statistical estimation by minimization of the empirical loss associated with the projective power entropy. It is shown that the minimum loss estimators for the mean vector and variance matrix under the maximum entropy model are the sample mean vector and the sample variance matrix. The escort distribution of the maximum entropy distribution plays the key role in the derivation.
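For orientation, a plausible form of the projective power cross entropy is sketched below; the sign and normalizing constants are assumptions here and may differ from the conventions of the paper.

\[
C_\gamma(g, f) \;=\; -\frac{1}{\gamma}
\left( \int f(x)^{1+\gamma}\, dx \right)^{-\frac{\gamma}{1+\gamma}}
\int g(x)\, f(x)^{\gamma}\, dx,
\qquad \gamma > 0.
\]

Rescaling f to \(\lambda f\) multiplies the rightmost integral by \(\lambda^{\gamma}\) and the normalizing factor by \(\lambda^{-\gamma}\), so \(C_\gamma(g, \lambda f) = C_\gamma(g, f)\); this is the projective invariance named in the section list below. On the diagonal \(g = f\) the expression collapses to

\[
C_\gamma(g, g) \;=\; -\frac{1}{\gamma}
\left( \int g(x)^{1+\gamma}\, dx \right)^{\frac{1}{1+\gamma}},
\]

a monotone transform of the Tsallis entropy with index \(q = 1 + \gamma\). The exponents \(1+\gamma\) and \((1+\gamma)/\gamma\) are Hölder conjugates, which is where the Lp–Lq duality mentioned above enters.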

Highlights

  • In classical statistical physics and information theory, the close relation with the Boltzmann–Shannon entropy has been well established and offers an elementary and clear understanding. The Kullback–Leibler divergence is directly connected with maximum likelihood, which is one of the most basic ideas in statistics

  • The escort distribution of the maximum entropy distribution plays the key role in the derivation

  • We explore a close relation between the statistical model and the estimation method

Summary

Introduction

In classical statistical physics and information theory, the close relation with the Boltzmann–Shannon entropy has been well established. We consider generalized entropy and divergence defined on the space of density functions with finite mass; see [13,14] for the information geometry and for statistical applications to independent component analysis and pattern recognition. Note that the entropy is defined in the continuous case for probability density functions, but it can be reduced to the discrete case; see Tsallis [2] for an extensive discussion in statistical physics. The loss function associated with the projective power entropy Cγ(g, f(·, θ)), based on the sample x1, …, xn, is built from the terms κγ(θ) f(xi, θ)^γ. We discuss the model of maximum entropy distributions, called the γ-model, in which the 0-model and the 2-model equal the Gaussian and Wigner models, respectively.
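Written out under the same assumed normalization as the sketch above (the constants are an assumption, not necessarily the paper's exact convention), the empirical loss replaces the expectation under g with the sample average over x1, …, xn:

\[
L_\gamma(\theta) \;=\; -\frac{1}{\gamma n} \sum_{i=1}^{n}
\kappa_\gamma(\theta)\, f(x_i, \theta)^{\gamma},
\qquad
\kappa_\gamma(\theta) \;=\;
\left( \int f(x, \theta)^{1+\gamma}\, dx \right)^{-\frac{\gamma}{1+\gamma}} .
\]

Because \(\kappa_\gamma\) depends on \(\theta\) only through the \(L_{1+\gamma}\) norm of \(f(\cdot, \theta)\), its gradient brings in the escort density \(f(x, \theta)^{1+\gamma} / \int f(x, \theta)^{1+\gamma}\, dx\), which is how the escort distribution of the maximum entropy distribution drives the derivation that the minimizers are the sample mean vector and sample variance matrix.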

Projective Invariance
Model of Maximum Entropy Distributions
Concluding Remarks