Abstract

Context-based lossless coding suffers in many cases from the so-called context dilution problem, which arises when a large number of contexts is used to model high-order statistical dependencies among the data. In this case the learning process cannot be fed with enough data, so the probability estimates are unreliable. To avoid this problem, state-of-the-art algorithms for lossless image coding resort to context quantization (CQ) into a few conditioning states, whose statistics can be estimated reliably. It was recognized early on that, to achieve the best compression ratio, contexts must be grouped according to a maximal mutual information criterion. This leads to quantization algorithms that are able to reach a local minimum of the coding cost in the general case, and even the global minimum in the case of binary-valued input. This paper surveys the CQ problem and provides a detailed analytical formulation that sheds light on some details of the optimization process. As a consequence, we find that state-of-the-art algorithms include a suboptimal step. The proposed approach follows a steeper path toward the minimum of the cost function. Moreover, sufficient conditions are derived under which a globally optimal solution can be found even when the input alphabet is not binary. Although the paper focuses mainly on the theoretical aspects of CQ, a number of experiments validating the proposed method have been carried out (for the special case of lossless coding of segmentation maps), with encouraging results.

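To make the maximal mutual information criterion concrete, the following is a minimal, hypothetical sketch (not taken from the paper) of an agglomerative context quantizer. It greedily merges the pair of conditioning states whose fusion increases the empirical coding cost, i.e. the weighted conditional entropy sum over groups of N(g)·H(X|g), the least, which is equivalent to losing as little mutual information I(X; Q(C)) as possible. The function names, the greedy pairwise strategy, and the data layout are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np


def weighted_entropy_bits(counts):
    """Return N(g) * H(X | g) in bits for a group with symbol counts `counts`."""
    total = counts.sum()
    if total == 0:
        return 0.0
    p = counts / total
    p = p[p > 0]
    return -total * np.sum(p * np.log2(p))


def greedy_context_quantization(context_counts, num_states):
    """Illustrative agglomerative context quantization (not the paper's method).

    context_counts : array of shape (num_contexts, alphabet_size); row c holds
                     the symbol occurrence counts observed in context c.
    num_states     : desired number of conditioning states after quantization.

    Returns a list of groups, each a list of original context indices. Each
    merge picks the pair of groups whose fusion increases the empirical coding
    cost sum_g N(g) * H(X | g) the least.
    """
    groups = [[c] for c in range(len(context_counts))]
    sums = [np.asarray(context_counts[c], dtype=float) for c in range(len(context_counts))]

    while len(groups) > num_states:
        best = None
        for i in range(len(groups)):
            for j in range(i + 1, len(groups)):
                merged = sums[i] + sums[j]
                # Cost increase caused by merging groups i and j.
                delta = (weighted_entropy_bits(merged)
                         - weighted_entropy_bits(sums[i])
                         - weighted_entropy_bits(sums[j]))
                if best is None or delta < best[0]:
                    best = (delta, i, j)
        _, i, j = best
        groups[i] += groups[j]
        sums[i] = sums[i] + sums[j]
        del groups[j], sums[j]
    return groups


# Toy usage: six contexts over a three-symbol alphabet, quantized to two states.
counts = np.array([[9, 1, 0], [8, 2, 0], [1, 9, 0],
                   [0, 8, 2], [5, 5, 0], [0, 1, 9]])
print(greedy_context_quantization(counts, num_states=2))
```

This greedy scheme only reaches a local minimum of the coding cost in general, which is precisely the limitation the paper's analysis addresses.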