Abstract

Karhunen-Loève transform based adaptive entropy-constrained vector quantization (KLT-AECVQ) is proposed for efficient variable-rate speech coding. The proposed method consists of backward-adaptive linear predictive coding (LPC) analysis, KLT estimation from the LPC coefficients, and lattice vector quantization followed by Huffman coding matched to the KLT-domain statistics. Because different statistics in the original signal domain can be mapped to identical statistics in the KLT domain, only a few classified Huffman codebooks are sufficient to represent the KLT-domain source statistics. KLT-AECVQ with 32 Huffman codebooks achieves rate-distortion performance comparable to that of a theoretically optimal AECVQ with an infinite number of Huffman codebooks. KLT-AECVQ also yields better perceptual quality than KLT-based classified vector quantization (KLTCVQ), which itself outperformed a conventional code-excited linear prediction (CELP) codec. Under a five-sample delay constraint, KLT-AECVQ also has roughly one-third the complexity of the CELP codec.
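The abstract's pipeline derives the KLT from the backward-adaptive LPC model rather than from the signal blocks themselves. The sketch below illustrates one common way such a step can be realized (an assumption, not the paper's implementation): compute the AR power spectrum from the LPC coefficients, invert it to an autocorrelation sequence, form the block covariance matrix, and take its eigenvectors as the KLT basis. The function name `klt_from_lpc` and its arguments are hypothetical.

```python
import numpy as np


def klt_from_lpc(lpc, dim, nfft=1024, sigma2=1.0):
    """Sketch: estimate a KLT basis for dim-sample blocks of an AR (LPC-modelled)
    source.  `lpc` holds prediction coefficients a_1..a_p of
    A(z) = 1 - sum_k a_k z^{-k}; this is an illustrative helper, not the paper's API.
    """
    # AR power spectrum sigma^2 / |A(e^{jw})|^2 on an FFT grid.
    a = np.concatenate(([1.0], -np.asarray(lpc, dtype=float)))
    A = np.fft.rfft(a, nfft)
    psd = sigma2 / np.abs(A) ** 2

    # Autocorrelation is the inverse DFT of the power spectrum.
    r = np.fft.irfft(psd, nfft)[:dim]

    # Toeplitz covariance of a dim-sample block, then its eigendecomposition.
    R = np.array([[r[abs(i - j)] for j in range(dim)] for i in range(dim)])
    eigvals, eigvecs = np.linalg.eigh(R)

    # Sort eigenvectors by decreasing eigenvalue: columns form the KLT basis.
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order], eigvals[order]


# Usage (illustrative): transform an input block into the KLT domain, where the
# decorrelated coefficients would then be lattice-quantized and Huffman-coded.
# klt, variances = klt_from_lpc(lpc_coeffs, dim=8)
# y = klt.T @ x_block
```

Because the KLT depends only on the LPC coefficients, which are available at both encoder and decoder through backward adaptation, no extra side information is needed to signal the transform; this is consistent with the abstract's claim that a small, fixed set of Huffman codebooks suffices in the KLT domain.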
