Abstract

In this paper we show how the Shannon entropy is connected to the theory of majorization: both serve as measures of disorder in a system, but the theory of majorization usually gives stronger criteria than entropic inequalities. We give generalized results for the majorization inequality using the Csiszár f-divergence. Applied to certain special convex functions, this divergence reduces our majorization results to forms involving the Shannon entropy and the Kullback-Leibler divergence. We give several applications using the Zipf-Mandelbrot law.
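
To make this reduction concrete, here is a minimal numerical sketch (illustrative, not taken from the paper): with the convex function f(t) = t log t, the Csiszár f-divergence I_f(p, q) = Σ_i q_i f(p_i/q_i) becomes the Kullback-Leibler divergence D(p || q), and with q uniform it equals log n minus the Shannon entropy of p. The distributions p and q below are made up for illustration.

```python
import numpy as np

def csiszar_f_divergence(p, q, f):
    """Csiszar f-divergence I_f(p, q) = sum_i q_i * f(p_i / q_i),
    for strictly positive probability distributions p and q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(q * f(p / q)))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

# f(t) = t*log(t) recovers the Kullback-Leibler divergence D(p || q).
kl_via_f = csiszar_f_divergence(p, q, lambda t: t * np.log(t))
kl_direct = float(np.sum(p * np.log(p / q)))
assert np.isclose(kl_via_f, kl_direct)

# With q uniform, D(p || u) = log(n) - H(p), linking to Shannon entropy.
n = len(p)
u = np.full(n, 1.0 / n)
shannon_H = -float(np.sum(p * np.log(p)))
assert np.isclose(csiszar_f_divergence(p, u, lambda t: t * np.log(t)),
                  np.log(n) - shannon_H)
```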

Highlights

  • Introduction and preliminaries: well over a century ago, measures were derived for assessing the distance between two probability distributions

  • We present our main generalized results obtained from the majorization inequality by using the Csiszár f-divergence, and obtain corollaries in the form of the Shannon entropy and the Kullback-Leibler distance

  • A key theorem establishes the connection between the Csiszár f-divergence and the weighted majorization inequality when one of the sequences is monotonic (see the sketch after this list)
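
The sketch below is a hedged illustration of this connection: it checks majorization through partial sums of decreasingly ordered entries and verifies the classical Hardy-Littlewood-Pólya consequence that Σ_i f(q_i) ≤ Σ_i f(p_i) for convex f, which with f(t) = t log t says majorization implies the entropy comparison H(p) ≤ H(q). The vectors are hypothetical, and the paper's weighted version is not reproduced here.

```python
import numpy as np

def majorizes(p, q, tol=1e-12):
    """True if p majorizes q: equal totals, and the partial sums of the
    decreasingly sorted entries of p dominate those of q."""
    p = np.sort(np.asarray(p, float))[::-1]
    q = np.sort(np.asarray(q, float))[::-1]
    return (abs(p.sum() - q.sum()) < tol
            and bool(np.all(np.cumsum(p) >= np.cumsum(q) - tol)))

p = np.array([0.6, 0.25, 0.15])   # more "spread out"
q = np.array([0.4, 0.35, 0.25])   # closer to uniform
assert majorizes(p, q)

# Hardy-Littlewood-Polya: if p majorizes q, then sum f(q_i) <= sum f(p_i)
# for every convex f; with f(t) = t*log(t) this gives H(p) <= H(q), i.e.
# majorization is a stronger disorder criterion than entropy alone.
f = lambda t: t * np.log(t)
assert f(q).sum() <= f(p).sum()
H = lambda x: -float(np.sum(x * np.log(x)))
assert H(p) <= H(q)
```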


Summary

It is common to take the logarithm with base 2 in the introduced notions, but in our investigations this is not essential. We then present our main generalized results obtained from the majorization inequality by using the Csiszár f-divergence, and obtain corollaries in the form of the Shannon entropy and the Kullback-Leibler distance.

Definition. Let f : R+ → R+ be a convex function, and let p := (p_1, …, p_n) and q := (q_1, …, q_n) be positive probability distributions. The f-divergence functional is

  I_f(p, q) := Σ_{i=1}^n q_i f(p_i / q_i).

It is possible to use non-negative probability distributions in the f-divergence functional by defining

  f(0) := lim_{t→0+} f(t),  0 · f(0/0) := 0,  0 · f(a/0) := lim_{t→0+} t f(a/t)  (a > 0).

A generalized functional based on the previous definition is also considered.

Definition. Let J ⊂ R be an interval, let f : J → R be a function, and let p := (p_1, …, p_n) ∈ R^n and q := (q_1, …, q_n) ∈ (0, ∞)^n be such that p_i / q_i ∈ J for i = 1, …, n. Then we denote

  Î_f(p, q) := Σ_{i=1}^n q_i f(p_i / q_i).

By the weighted majorization theorem, these functionals satisfy the corresponding majorization inequalities when one of the sequences is monotonic.
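
As a hedged illustration of the applications mentioned in the abstract (the parameter values N, q, s below are hypothetical, and the paper's specific corollaries are not reproduced), the sketch evaluates the Kullback-Leibler divergence between two Zipf-Mandelbrot laws, using the convention 0 · log 0 := 0 that mirrors f(0) := lim_{t→0+} f(t) above.

```python
import numpy as np

def zipf_mandelbrot_pmf(N, q, s):
    """Zipf-Mandelbrot law on ranks 1..N:
    f(i) = 1 / ((i + q)**s * H_{N,q,s}), H_{N,q,s} = sum_j 1/(j + q)**s."""
    ranks = np.arange(1, N + 1)
    weights = 1.0 / (ranks + q) ** s
    return weights / weights.sum()

def kl_divergence(p, q):
    """D(p || q) = sum_i p_i * log(p_i / q_i), with the usual convention
    0 * log(0) := 0 (cf. f(0) := lim_{t->0+} f(t) in the definition)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Two Zipf-Mandelbrot distributions with different shape parameters.
p = zipf_mandelbrot_pmf(N=100, q=1.5, s=1.2)
r = zipf_mandelbrot_pmf(N=100, q=1.5, s=1.6)
print(kl_divergence(p, r), kl_divergence(r, p))  # both nonnegative
```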
