Abstract

For flexible speech coding, a Karhunen-Loève Transform (KLT) based adaptive entropy-constrained quantization (KLT-AECQ) method is proposed. It comprises backward-adaptive linear predictive coding (LPC) estimation, KLT estimation from the time-varying LPC coefficients, scalar quantization of the speech signal in the KLT domain, and superframe-based universal arithmetic coding driven by the estimated KLT statistics. To minimize outliers in both rate and distortion, a new distortion criterion incorporates a penalty on the rate increase. Gain-adaptive step-size selection and a bounded Gaussian source model further improve perceptual quality. KLT-AECQ requires neither an explicit codebook nor a training step, so it can operate at an infinite number of rate-distortion points regardless of time-varying source statistics. For speech, the conventional KLT-based classified vector quantization (KLT-CVQ) and the proposed KLT-AECQ yield signal-to-noise ratios of 17.86 dB and 26.22 dB, respectively, at around 16 kbit/s; the corresponding perceptual evaluation of speech quality (PESQ) scores are 3.87 and 4.04.
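As a rough illustration of the pipeline's core step, the sketch below derives a KLT basis implied by a set of LPC coefficients (via the autocorrelation of the AR synthesis filter 1/A(z)) and uniformly scalar-quantizes one frame in that domain. This is a minimal sketch under assumed conventions, not the paper's implementation: the helper names (`klt_from_lpc`, `quantize_frame`), the frame length, and the step size are hypothetical, and the gain-adaptive step-size selection and entropy-constrained arithmetic coding stages are omitted.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import lfilter

def klt_from_lpc(a, frame_len):
    """Estimate a KLT basis implied by LPC coefficients a = [1, a1, ..., ap].

    For an AR model 1/A(z) driven by unit-variance white noise, the process
    autocorrelation equals the autocorrelation of the filter's impulse
    response, which is truncated here as a practical approximation."""
    imp = np.zeros(8 * frame_len)
    imp[0] = 1.0
    h = lfilter([1.0], a, imp)                       # truncated impulse response
    r = np.array([h[: len(h) - k] @ h[k:] for k in range(frame_len)])
    R = toeplitz(r)                                  # Toeplitz covariance matrix
    eigvals, eigvecs = np.linalg.eigh(R)             # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]                # reorder to descending
    return eigvecs[:, order], eigvals[order]

def quantize_frame(x, V, step):
    """Transform a frame with KLT basis V and apply uniform scalar
    quantization; an entropy coder would encode the integer indices."""
    y = V.T @ x                                      # KLT-domain coefficients
    idx = np.round(y / step).astype(int)             # quantizer indices
    x_hat = V @ (idx * step)                         # reconstructed frame
    return idx, x_hat

# Toy usage: synthesize an AR(2) frame and quantize it in the KLT domain.
rng = np.random.default_rng(0)
a = np.array([1.0, -0.9, 0.4])                       # example LPC polynomial A(z)
frame = lfilter([1.0], a, rng.standard_normal(64))
V, lam = klt_from_lpc(a, frame_len=64)
idx, frame_hat = quantize_frame(frame, V, step=0.25)
snr = 10 * np.log10(np.sum(frame**2) / np.sum((frame - frame_hat) ** 2))
print(f"frame SNR: {snr:.2f} dB")
```

In a backward-adaptive coder such as the one described, the LPC coefficients (and hence the KLT basis) would be re-estimated from previously decoded samples, so the decoder can reproduce the same basis without side information.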
