Abstract
Rate of convergence results are established for vector quantization. Convergence rates are given for an increasing vector dimension and/or an increasing training set size. In particular, the following results are shown for memoryless real-valued sources with bounded support at transmission rate R. (1) If a vector quantizer with fixed dimension k is designed to minimize the empirical mean-square error (MSE) with respect to m training vectors, then its MSE for the true source converges in expectation and almost surely to the minimum possible MSE as O(√(log m/m)). (2) The MSE of an optimal k-dimensional vector quantizer for the true source converges, as the dimension grows, to the distortion-rate function D(R) as O(√(log k/k)). (3) There exists a fixed-rate universal lossy source coding scheme whose per-letter MSE on n real-valued source samples converges in expectation and almost surely to the distortion-rate function D(R) as O(√(log log n/log n)). (4) Consider a training set of n real-valued source samples blocked into vectors of dimension k, and a k-dimensional vector quantizer designed to minimize the empirical MSE with respect to the m = ⌊n/k⌋ training vectors. Then the per-letter MSE of this quantizer for the true source converges in expectation and almost surely to the distortion-rate function D(R) as O(√(log log n/log n)), provided one chooses k = ⌊(1/R)(1 − ε) log n⌋ for any ε ∈ (0, 1).
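To make the setup of result (4) concrete, the following is a minimal sketch, not the paper's construction. It blocks n source samples into k-dimensional training vectors, designs a rate-R quantizer by (approximately) minimizing the empirical MSE with a Lloyd-style iteration, and measures the per-letter MSE. All names (design_quantizer, per_letter_mse) are hypothetical, the log in the dimension choice is assumed to be base 2, and Lloyd iteration only finds a local optimum, whereas the theorem concerns the empirically optimal quantizer.

```python
import numpy as np

def design_quantizer(train, rate_R, iters=50, seed=0):
    """Approximate empirical-MSE-optimal quantizer via Lloyd iteration.

    train: (m, k) array of training vectors; codebook size is 2^(k*R).
    This is a local-search heuristic, not the empirically optimal design
    analyzed in the paper.
    """
    m, k = train.shape
    N = 2 ** int(round(rate_R * k))  # rate R bits/sample => 2^(kR) codewords
    rng = np.random.default_rng(seed)
    codebook = train[rng.choice(m, N, replace=False)]
    for _ in range(iters):
        # Nearest-codeword assignment under squared-error distortion.
        d = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(1)
        # Centroid update; keep the old codeword if a cell is empty.
        for j in range(N):
            cell = train[idx == j]
            if len(cell):
                codebook[j] = cell.mean(0)
    return codebook

def per_letter_mse(samples, codebook, k):
    """Block samples into k-vectors, quantize, return MSE per letter."""
    n = len(samples) - len(samples) % k
    x = samples[:n].reshape(-1, k)
    d = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.min(1).mean() / k

# Demo of the dimension choice in result (4): k = floor((1/R)(1-eps) log2 n).
R, eps = 1.0, 0.5
rng = np.random.default_rng(1)
samples = rng.uniform(-1.0, 1.0, 4096)  # memoryless source, bounded support
n = len(samples)
k = max(1, int((1.0 / R) * (1.0 - eps) * np.log2(n)))
train = samples[: (n // k) * k].reshape(-1, k)  # m = floor(n/k) vectors
cb = design_quantizer(train, R)
print(k, per_letter_mse(samples, cb, k))
```

One observation on the dimension choice (with log taken base 2): k = ⌊(1/R)(1 − ε) log n⌋ makes the codebook size 2^(kR) ≈ n^(1−ε), which grows strictly slower than the number of training vectors m = ⌊n/k⌋, so there remains enough training data per codeword for the empirical design to track the true source.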