Abstract

Compared to scalar quantization (SQ), vector quantization (VQ) has memory, space-filling, and shape advantages. If the signal statistics are known, direct vector quantization (DVQ) according to these statistics provides the highest coding efficiency, but requires unmanageable storage if the statistics are time varying. In code-excited linear predictive (CELP) coding, a single compromise codebook is trained in the excitation domain, and the space-filling and shape advantages of VQ are utilized only in a nonoptimal, average sense. In this paper, we propose Karhunen-Loève transform (KLT)-based adaptive classified VQ (CVQ), where the space-filling advantage can be utilized since the Voronoi-region shape is not affected by the KLT. The memory and shape advantages can also be used, since each codebook is designed for a narrow class of KLT-domain statistics. We further improve basic KLT-CVQ with companding, which exploits the shape advantage of VQ more efficiently. Our experiments show that KLT-CVQ provides a higher SNR than basic CELP coding, with a computational complexity similar to that of DVQ and much lower than that of CELP. With companding, even single-class KLT-CVQ outperforms CELP, both in terms of SNR and codebook search complexity.
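To make the core idea concrete, the sketch below illustrates KLT-domain VQ for a single class. It is a minimal illustration, not the paper's implementation: the covariance model, codebook, and helper names (`klt_matrix`, `quantize`) are assumptions for the example, and codebook training and classification are omitted. Because the KLT is orthogonal, Euclidean nearest-neighbor search in the transform domain minimizes the same distortion as in the signal domain, which is why the Voronoi-region shape (and hence the space-filling advantage) is preserved.

```python
# Minimal sketch of single-class KLT-domain VQ (hypothetical helper names;
# codebook training and classification from the paper are not shown).
import numpy as np

def klt_matrix(cov):
    """Eigenvectors of the class covariance form the KLT basis."""
    _, vecs = np.linalg.eigh(cov)          # ascending eigenvalue order
    return vecs[:, ::-1].T                 # rows = basis, largest first

def quantize(x, klt, codebook):
    """Transform x into the KLT domain and pick the nearest codevector.

    The KLT is orthogonal, so nearest-neighbor search here gives the
    same distortion as searching in the original signal domain.
    """
    y = klt @ x
    dists = np.sum((codebook - y) ** 2, axis=1)
    idx = int(np.argmin(dists))
    return idx, klt.T @ codebook[idx]      # index and reconstruction

# Toy usage: one class with an AR(1)-like covariance and a random codebook
# (a real codebook would be trained on KLT-domain vectors of this class).
rng = np.random.default_rng(0)
dim, cb_size = 8, 64
cov = 0.9 ** np.abs(np.subtract.outer(np.arange(dim), np.arange(dim)))
klt = klt_matrix(cov)
codebook = rng.standard_normal((cb_size, dim))
x = rng.multivariate_normal(np.zeros(dim), cov)
idx, x_hat = quantize(x, klt, codebook)
```

In the classified scheme, each input vector would first be assigned to a statistics class, and the class's own KLT and codebook would be used; companding would additionally warp the KLT-domain coordinates before search.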
