Abstract

In physics, communication theory, engineering, statistics, and other areas, one of the methods for deriving distributions is the optimization of an appropriate measure of entropy under relevant constraints. In this paper, it is shown that by optimizing a measure of entropy introduced by the second author, one can derive the densities of univariate, multivariate, and matrix-variate distributions in both the real and complex domains. Several such scalar, multivariate, and matrix-variate distributions are derived. These include multivariate and matrix-variate Maxwell–Boltzmann and Rayleigh densities in the real and complex domains; multivariate Student-t and Cauchy densities; and matrix-variate type-1 beta, type-2 beta, and gamma densities, together with their generalizations.
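To make the optimization method concrete, the following is a minimal worked sketch using the classical Shannon measure rather than the generalized measure treated in the paper; the particular constraints (a fixed second moment and a fixed logarithmic moment on x > 0) are illustrative assumptions, not the paper's own. Maximize

\[
S(f) = -\int_0^{\infty} f(x)\ln f(x)\,\mathrm{d}x
\quad\text{subject to}\quad
\int_0^{\infty} f(x)\,\mathrm{d}x = 1,\;\;
\int_0^{\infty} x^{2} f(x)\,\mathrm{d}x = c_1,\;\;
\int_0^{\infty} (\ln x)\, f(x)\,\mathrm{d}x = c_2 .
\]

Setting the pointwise derivative of the Lagrangian with multipliers $\lambda_1, \lambda_2, \lambda_3$ to zero gives

\[
-\ln f(x) - 1 - \lambda_1 - \lambda_2 x^{2} - \lambda_3 \ln x = 0
\;\;\Longrightarrow\;\;
f(x) \;\propto\; x^{-\lambda_3}\, e^{-\lambda_2 x^{2}}, \qquad x > 0,
\]

which is the Maxwell–Boltzmann form for $-\lambda_3 = 2$ and the Rayleigh form for $-\lambda_3 = 1$. The paper carries out the analogous optimization for Mathai's measure, and for vector and matrix arguments in the real and complex domains.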



Introduction

The following notations will be used in this paper: real scalar variables, whether mathematical variables or random variables, will be denoted by lower-case letters, such as x, y, etc.; real vector/matrix variables (mathematical and random) will be denoted by capital letters, such as X, Y, etc. The term “entropy” is used as a mathematical measure of uncertainty or information characterized by some basic axioms, as illustrated by [3]. It is a functional resulting from a set of axioms, that is, a function that can be interpreted in terms of a statistical density in the continuous case and in terms of multinomial probabilities in the discrete case. The present paper is about one entropy measure on a real scalar variable, its generalizations to vector/matrix variables in the real and complex domains, and an illustration of how this entropy can be optimized under various constraints to derive various statistical densities in scalar, vector, and matrix variables in the real and complex domains. When this entropy measure is written as an expected value, the quantity inside the expectation operator is an approximation to $-\frac{1}{\eta}\ln f(X)$.
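For reference, the real scalar form of this entropy measure, as given in earlier work of Mathai and Haubold (2007), is sketched below; the parameterization in the present paper (in terms of $\eta$) may differ, so this should be read as context rather than as the paper's exact definition:

\[
M_\alpha(f) \;=\; \frac{\int_{-\infty}^{\infty}[f(x)]^{\,2-\alpha}\,\mathrm{d}x \;-\; 1}{\alpha - 1}
\;=\; E\!\left[\frac{[f(x)]^{\,1-\alpha} - 1}{\alpha - 1}\right], \qquad \alpha \neq 1 .
\]

Expanding $[f(x)]^{1-\alpha} = e^{(1-\alpha)\ln f(x)} \approx 1 + (1-\alpha)\ln f(x)$ for $\alpha$ near 1 shows that the quantity inside the expectation behaves like $-\ln f(x)$, so that $M_\alpha(f)$ reduces to the Shannon entropy $-E[\ln f(x)]$ as $\alpha \to 1$.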

Optimization of Mathai’s Entropy for the Real Scalar Case
Evaluation of the Normalizing Constants
Real Matrix-Variate Case
Constraints in Terms of Determinants
Modification of the Constraint in Terms of a Determinant
Arbitrary Moments
Complex Case
Optimization with a Trace Constraint
Concluding Remarks
