Abstract

An information-theoretic motivation for considering a contour-gain codebook structure is given, and an iterative contour-gain vector quantizer (CGVQ) algorithm that optimizes the shape codebook for a fixed gain codebook is described. Numerical results are presented for CGVQ encoding of first-, second-, and tenth-order Gauss-Markov sources, and a clear improvement over shape-gain vector quantizer (SGVQ) performance is demonstrated. Numerical results are also presented for CGVQ waveform encoding of speech, and again an improvement over SGVQ performance is demonstrated. The perceptual quality of the encoded speech was roughly equivalent for the two models.
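The shape-gain decomposition that both models build on can be illustrated with a minimal sketch. This is a hypothetical NumPy implementation of a generic shape-gain VQ encoder together with a Lloyd-style update of the shape codebook for a fixed gain codebook, in the spirit of the iterative optimization the abstract describes; it is not the paper's specific CGVQ contour structure, and all function and variable names are the sketch's own:

```python
import numpy as np

def sgvq_encode(x, shapes, gains):
    """Encode x as gain * shape.

    shapes: (K, d) array of unit-norm shape codewords.
    gains:  (M,) array of scalar gain codewords.
    Picks the shape maximizing <x, s>, then the gain
    closest to that projection (minimizes ||x - g*s||^2).
    """
    proj = shapes @ x                      # inner products with each shape
    k = int(np.argmax(proj))               # best shape index
    m = int(np.argmin((gains - proj[k]) ** 2))  # best gain index
    return k, m, gains[m] * shapes[k]      # indices and reconstruction

def train_shapes(X, shapes, gains, iters=10):
    """Lloyd-style shape-codebook optimization for a fixed gain codebook.

    For unit-norm shapes, the optimal update for each shape cell is the
    normalized gain-weighted sum of the training vectors assigned to it.
    """
    shapes = shapes.copy()
    for _ in range(iters):
        accum = np.zeros_like(shapes)
        for x in X:
            k, m, _ = sgvq_encode(x, shapes, gains)
            accum[k] += gains[m] * x       # gain-weighted centroid accumulation
        norms = np.linalg.norm(accum, axis=1, keepdims=True)
        nonzero = norms[:, 0] > 0
        # renormalize non-empty cells; empty cells keep their old codeword
        shapes[nonzero] = accum[nonzero] / norms[nonzero]
    return shapes
```

In this decomposition the shape codebook only has to cover directions on the unit sphere while the gain codebook covers magnitudes, which is the product-codebook idea that the contour-gain structure refines.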
