Abstract

The relative entropy is compared with the previously used Shannon entropy difference as a measure of the amount of information extracted from observations by an optimal analysis, in terms of the changes in the probability density function (pdf) produced by the analysis with respect to the background pdf. It is shown that the relative entropy measures both the signal and dispersion parts of the information content from observations, while the Shannon entropy difference measures only the dispersion part. When the pdfs are Gaussian or transformed to Gaussian, the signal part of the information content is given by a weighted inner product of the analysis increment vector, and the dispersion part is given by a non-negative definite function of the analysis and background covariance matrices. When the observation space is transformed based on the singular value decomposition of the scaled observation operator, the information content becomes separable between components associated with different singular values. Densely distributed observations can then be compressed with minimum information loss by truncating the components associated with the smallest singular values. The differences between the relative entropy and the Shannon entropy difference in measuring information content and information loss are analysed in detail and illustrated by examples.
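As a minimal numerical sketch of the quantities described above (not taken from the paper): in a linear-Gaussian setting with background covariance B, observation-error covariance R and observation operator H, the optimal analysis has mean xa = xb + K(y - Hxb) and covariance A = (I - KH)B, and the relative entropy of the analysis pdf from the background pdf splits into a signal part and a dispersion part. The dimensions, matrix values and variable names below are hypothetical test inputs, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 6                      # state and observation dimensions (hypothetical)

# Hypothetical test problem: background covariance B, obs-error covariance R,
# linear observation operator H, background mean xb, observations y.
B = np.diag([2.0, 1.5, 1.0, 0.5])
R = 0.25 * np.eye(m)
H = rng.standard_normal((m, n))
xb = np.zeros(n)
y = rng.standard_normal(m)

# Optimal (Kalman/BLUE) analysis: xa = xb + K(y - Hxb), A = (I - KH)B.
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
xa = xb + K @ (y - H @ xb)
A = (np.eye(n) - K @ H) @ B

Binv = np.linalg.inv(B)
dx = xa - xb                     # analysis increment

# Relative entropy (Kullback-Leibler divergence) of N(xa, A) from N(xb, B),
# split as in the abstract: a signal part (weighted inner product of the
# increment) plus a non-negative dispersion part (covariances only).
signal = 0.5 * dx @ Binv @ dx
M = Binv @ A
dispersion = 0.5 * (np.trace(M) - np.log(np.linalg.det(M)) - n)
relative_entropy = signal + dispersion

# Shannon entropy difference depends on the covariances alone.
shannon_diff = 0.5 * np.log(np.linalg.det(B) / np.linalg.det(A))

# SVD of the scaled observation operator R^(-1/2) H B^(1/2): in these
# coordinates the information content is a sum over singular values, so
# modes with the smallest singular values can be truncated with minimal loss.
Rmh = np.linalg.inv(np.linalg.cholesky(R))   # an R^(-1/2) factor
Bh = np.linalg.cholesky(B)                   # a B^(1/2) factor
_, s, _ = np.linalg.svd(Rmh @ H @ Bh)
per_mode = 0.5 * np.log(1.0 + s**2)          # Shannon info per singular mode

print(f"signal part        = {signal:.4f}")
print(f"dispersion part    = {dispersion:.4f}")
print(f"relative entropy   = {relative_entropy:.4f}")
print(f"Shannon difference = {shannon_diff:.4f}")
print("per-mode Shannon info:", np.round(per_mode, 4))
print("sum over modes matches:", np.isclose(shannon_diff, per_mode.sum()))
```

Running this illustrates the abstract's central contrast: the Shannon entropy difference is a function of the covariances only, so it reflects just the dispersion (spread-reduction) information, whereas the relative entropy additionally carries the increment-dependent signal term; the per-mode values show why dropping the smallest-singular-value components discards the least information.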

