Abstract

A. Summary

The basic purpose of data compression is to process a data stream so as to reduce the average bit rate required for transmission or storage by removing unwanted redundancy and/or unnecessary precision. A mathematical formulation of data compression, providing figures of merit and bounds on optimal performance, was developed by Shannon [1,2], both for the case where a perfect compressed reproduction is required and for the case where a certain specified average distortion is allowable. Unfortunately, Shannon's probabilistic approach requires precise advance knowledge of the statistical description of the process to be compressed — a demand rarely met in practice. Moreover, the coding theorems apply, or are meaningful, only when the source is stationary and ergodic. Here we present a tutorial description of numerous recent approaches and results generalizing the Shannon approach to unknown statistical environments. Simple examples and empirical results are given to illustrate the essential ideas.
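As a concrete illustration of Shannon's figure of merit for the lossless case, the entropy of a source lower-bounds the achievable average bit rate per symbol; when the statistics are unknown, one can estimate it empirically from observed relative frequencies. Below is a minimal sketch (not from the paper; the function name and setup are illustrative) computing the empirical entropy of a sample:

```python
import math
from collections import Counter

def empirical_entropy_bits(samples):
    """Empirical Shannon entropy H = -sum(p * log2 p), in bits per symbol.

    Uses relative frequencies in `samples` as probability estimates,
    standing in for the true (unknown) source distribution.
    """
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A uniform 4-symbol source needs about 2 bits/symbol on average:
print(empirical_entropy_bits("ABCD" * 250))  # 2.0
```

For a stationary ergodic source, such frequency-based estimates converge to the true entropy as the sample grows; the universal-coding results surveyed in the paper address what can be guaranteed when even stationarity and ergodicity are in doubt.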
