Abstract

This paper proposes an effective mechanism for stochastic codebook generation for lossy coding, using source examples. Earlier work has shown that the rate-distortion bound can be asymptotically achieved by a natural type selection (NTS) mechanism, which iteratively considers asymptotically long source strings (from a given distribution P) and regenerates the codebook according to the type of the first codeword to d-match the source string (i.e., satisfy the distortion constraint); the sequence of codebook-generating types converges to the optimal reproduction distribution. While ensuring optimality, earlier results had a significant practical flaw due to the order of limits at which convergence is achieved. More specifically, each NTS iteration, indexed by n, presumes asymptotically large string length l, but the codebook size grows exponentially with l. The reversed order of limits is practically preferred, wherein most codebook regeneration iterations involve manageable string lengths. This work describes a dramatically more efficient mechanism to achieve the optimum within a practical framework. It is specifically shown that it is sufficient to individually encode many source strings of short fixed length l, find the maximum likelihood estimate of the distribution $Q_{n+1}$ that would have generated the observed sequence of d-matching codeword strings, and then use $Q_{n+1}$ to generate a new codebook for the next iteration. The sequence of distributions $Q_1, Q_2, \ldots$ converges to the optimal reproduction distribution $Q_\ell^{\ast}(P,d)$ achievable at finite string length l. It is further shown that $Q_\ell^{\ast}(P,d)$ converges to the optimal reproduction distribution $Q^{\ast}(P,d)$ that achieves the rate-distortion bound $R(P,d)$, asymptotically in the string length l.
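
To make the iteration concrete, the following is a minimal simulation sketch of the procedure described above, under assumed choices that are not taken from the paper: a binary source and reproduction alphabet, Hamming distortion, and arbitrary parameter values. At each iteration a codebook is drawn i.i.d. from the current reproduction distribution $Q_n$, many short source strings are encoded by their first d-matching codeword, and $Q_{n+1}$ is obtained as the maximum likelihood (empirical) estimate computed from the matched codewords.

```python
# Hypothetical sketch of the iterative codebook regeneration described in the
# abstract; not the authors' implementation. The binary alphabet, Hamming
# distortion, and all parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

P = np.array([0.9, 0.1])     # assumed source distribution over {0, 1}
D = 0.05                     # per-symbol distortion constraint
l = 16                       # short, fixed string length
num_source_strings = 2000    # source strings encoded per iteration
codebook_size = 4096         # codewords drawn i.i.d. from Q_n each iteration
num_iterations = 10

def d_match_index(x, codebook, D):
    """Return the index of the first codeword whose average Hamming
    distortion to x is at most D, or None if no codeword d-matches."""
    dist = (codebook != x).mean(axis=1)
    hits = np.flatnonzero(dist <= D)
    return hits[0] if hits.size else None

Q = np.array([0.5, 0.5])     # initial reproduction distribution Q_1
for n in range(num_iterations):
    # Generate a fresh codebook with symbols drawn i.i.d. from Q_n.
    codebook = rng.choice(2, size=(codebook_size, l), p=Q)
    matched_symbols = []
    for _ in range(num_source_strings):
        x = rng.choice(2, size=l, p=P)
        idx = d_match_index(x, codebook, D)
        if idx is not None:
            matched_symbols.append(codebook[idx])
    # ML estimate of the distribution that generated the d-matching codewords:
    # for i.i.d. codeword symbols this is their empirical distribution (type).
    if matched_symbols:
        symbols = np.concatenate(matched_symbols)
        Q = np.bincount(symbols, minlength=2) / symbols.size
    print(f"iteration {n + 1}: Q = {Q}")
```

In this sketch the codebook size and the number of encoded source strings are held fixed across iterations for simplicity; the paper's analysis concerns the convergence of the sequence $Q_1, Q_2, \ldots$ rather than any particular choice of these parameters.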
