Abstract

The problem of empirical design of vector quantizers for noisy channels or for noisy sources is studied. It is shown that the average squared distortion of a vector quantizer designed optimally from clean independent and identically distributed (i.i.d.) training vectors converges in expectation, as the training set size grows, to the minimum mean-squared error obtainable for quantizing the clean source and transmitting across a discrete memoryless noisy channel. Similarly, it is shown that if the source is corrupted by additive noise, then the average squared distortion of a vector quantizer designed optimally from i.i.d. noisy training vectors converges in expectation, as the training set size grows, to the minimum mean-squared error obtainable for quantizing the noisy source and transmitting across a noiseless channel. Rates of convergence are also provided.
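
To make the two limits concrete, the following is a sketch of the standard formulation such results refer to; the notation (source vector X, encoder γ, codevectors y_1, ..., y_N, channel transition probabilities P(j | i), noisy observation Y, training set size n) is assumed here for illustration and is not taken from the abstract itself. For the noisy-channel case, the distortion of an N-point quantizer with encoder γ and decoder codevectors {y_j} used over a discrete memoryless channel is

\[
D(\gamma,\{y_j\}) \;=\; \mathbb{E}\!\left[\sum_{j=1}^{N} P\bigl(j \mid \gamma(X)\bigr)\,\lVert X - y_j\rVert^{2}\right],
\qquad
D^{*} \;=\; \inf_{\gamma,\{y_j\}} D(\gamma,\{y_j\}),
\]

and the first stated result says that a quantizer designed optimally from n clean i.i.d. training vectors has expected distortion converging to D^{*} as n grows. For the noisy-source case, with noisy observations Y in place of X and a noiseless channel, the corresponding limit in this sketch is

\[
\widetilde{D}^{*} \;=\; \inf_{Q}\, \mathbb{E}\bigl[\lVert Y - Q(Y)\rVert^{2}\bigr],
\]

the minimum mean-squared error of an N-point quantizer Q applied to the noisy source.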
