Abstract
The examples in the previous chapter showed that relative information loss yields counter-intuitive results for many practical systems such as quantizers and center clippers. PCA was also shown not to be useful in information-theoretic terms, at least if we do not know anything about the input data. All these results can be traced back to the fact that relative information loss treats every bit of information contained in the input RV X equally. In this chapter, we introduce the notion of relevant information loss: not all the information at the input of a system is important, but only the part that is statistically related to a relevant RV. After defining relevant information loss and discussing its properties in Sect. 5.1, Sect. 5.2 shows that the problem of minimizing relevant information loss is related to the information bottleneck problem. With the help of relevant information loss, we then justify PCA from an information-theoretic perspective in Sect. 5.3.
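As a minimal sketch of how the quantity alluded to here is commonly formalized (the symbols S for the relevant RV, Y = g(X) for the system output, and the loss notation L_S are assumptions, not notation fixed by this abstract):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Sketch under assumed notation: X is the input RV, Y = g(X) the output
% of a deterministic system, and S the relevant RV the abstract mentions.
For a deterministic system $Y = g(X)$ and a relevant RV $S$, the relevant
information loss can be written as the reduction in mutual information
about $S$ caused by the system:
\begin{equation}
  L_S(X \to Y) \;=\; I(S;X) - I(S;Y) \;=\; I(S;X \mid Y),
\end{equation}
% The second equality follows from the chain rule of mutual information,
% since Y = g(X) makes S -- X -- Y a Markov chain.
so only the part of the input information that is statistically related
to $S$ counts as lost. Trading off a small $L_S(X \to Y)$ against strong
compression of $X$ (small $I(X;Y)$) is an information-bottleneck-type
problem.
\end{document}
```

This sketch is consistent with the abstract's claim that minimizing relevant information loss connects to the information bottleneck problem, but the precise definitions are given in Sects. 5.1 and 5.2.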