Abstract
A distribution that maximizes an entropy can be found by applying two different principles. On the one hand, Jaynes (1957a,b) formulated the maximum entropy principle (MaxEnt) as the search for a distribution maximizing a given entropy under some given constraints. On the other hand, Kapur (1994) and Kesavan and Kapur (1989) introduced the generalized maximum entropy principle (GMaxEnt) as the derivation of an entropy for which a given distribution has the maximum entropy property under some given constraints. In this paper, both principles were considered for cumulative entropies. Such entropies depend either on the distribution function (direct), on the survival function (residual) or on both (paired). We incorporate cumulative direct, residual, and paired entropies into one unified approach called cumulative entropies. Maximizing this entropy without any constraints produces an extremely U-shaped (=bipolar) distribution. Maximizing the cumulative entropy under the constraints of fixed mean and variance pushes a distribution toward a bipolar distribution, as far as the constraints allow. A bipolar distribution represents so-called contradictory information, in contrast to minimal or no information. In the literature, to date, only a few maximum entropy distributions for cumulative entropies have been derived. In this paper, we extended the results to well-known flexible distributions (like the generalized logistic distribution) and derived some special distributions (like the skewed logistic, the skewed Tukey and the extended Burr XII distribution). The generalized maximum entropy principle was applied to the generalized Tukey distribution and the Fechner family of skewed distributions. Finally, cumulative entropies were estimated under the assumption that the data were drawn from a maximum entropy distribution. This estimator was applied to the daily S&P500 returns and time durations between mine explosions.
Highlights
For a continuous random variable with density f, the classical differential (Shannon) entropy is defined by $E_S(f) = -\int f(x)\ln f(x)\,dx$ (1). Maximizing (1) with respect to f under the constraint of observed power or L-moments gives maximum entropy (ME) densities
The generalized maximum entropy principle was applied to the generalized Tukey λ distribution and the Fechner family of skewed distributions
An extremely bimodal (=bipolar) distribution represents a situation of so-called contradictory information since an event and its complement can happen with equal probability
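For orientation, the cumulative entropies that the highlights refer to are, in the standard definitions from the literature (cumulative residual entropy as in Rao et al. 2004, cumulative direct entropy as in Di Crescenzo and Longobardi 2009; the symbols $\mathcal{E}_D$, $\mathcal{E}_R$, $\mathcal{E}_P$ are chosen here for illustration), plausibly of the form

\[
\mathcal{E}_D(F) = -\int F(x)\,\ln F(x)\,dx, \qquad
\mathcal{E}_R(F) = -\int \bar F(x)\,\ln \bar F(x)\,dx, \qquad
\mathcal{E}_P(F) = \mathcal{E}_D(F) + \mathcal{E}_R(F),
\]

where $F$ is the distribution function and $\bar F = 1 - F$ the survival function. Replacing the density $f$ in (1) by $F$ (direct), $\bar F$ (residual), or both (paired) is exactly the move from differential to cumulative entropies described in the abstract.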
Summary
For a continuous random variable with density f, the classical differential (Shannon) entropy is defined by $E_S(f) = -\int f(x)\ln f(x)\,dx$. When considering only a random variable with non-negative support whose ME quantile function Q satisfies Q(0) = 0, maximizing CRES or CDES gives a distribution which is no longer U-shaped, and the maximum entropy situation no longer corresponds with contradictory information. We illustrate this in Example 5 using a special beta distribution. The question we raised in the section title, on what maximizing cumulative direct, residual, and paired Shannon entropies means, can be answered by the conclusion that maximizing these entropies leads to a more or less skewed U-shaped distribution as long as there are no special constraints (like Q(0) = 0) which are able to prevent this. This U-shaped distribution corresponds to contradictory information. We derive two general formulas for ME quantile functions under some restrictions.
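The summary mentions estimating cumulative entropies from data such as the S&P500 returns. A plug-in estimate of the cumulative residual entropy $-\int \bar F(x)\ln \bar F(x)\,dx$ can be sketched by integrating over the empirical survival function, which is piecewise constant between order statistics. This is only an illustrative sketch, not the paper's ME-based estimator; the function name `empirical_cre` is an assumption.

```python
import numpy as np

def empirical_cre(sample):
    """Plug-in estimate of the cumulative residual entropy
    -int S(x) ln S(x) dx, using the empirical survival function.
    Illustrative sketch only; not the paper's estimator."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    # Empirical survival on the gap [x_i, x_{i+1}) is (n - i - 1)/n,
    # i.e. the fraction of observations strictly above that gap.
    s = (n - np.arange(1, n)) / n
    gaps = np.diff(x)
    # -S ln S is well defined here since s > 0 on every gap.
    return float(np.sum(-s * np.log(s) * gaps))
```

As a sanity check, for the uniform distribution on [0, 1] the cumulative residual entropy is $-\int_0^1 (1-x)\ln(1-x)\,dx = 1/4$, and the estimator approaches this value on a fine uniform grid.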