Abstract

Phonetic classification of speech frames allows quantization and bit-allocation schemes tailored to each class. Separate quantization of the linear predictive coding (LPC) parameters for voiced and unvoiced speech frames is shown to offer useful gains in representing the synthesis filter commonly used in code-excited linear prediction (CELP) and other coders. Subjective test results are reported that determine the bit rate and accuracy required for voiced and unvoiced LPC spectra in CELP coding with phonetic classification. In this context, with the quantization schemes used, unvoiced spectra were found to need 9 b/frame or more, whereas voiced spectra need 25 b/frame or more. New spectral distortion criteria needed to ensure transparent LPC spectral quantization for each voicing class in CELP coders are presented. Similar subjective test results for speech synthesized from the true residual signal are also presented, leading to some interesting observations on the role of the analysis-by-synthesis structure of CELP. Objective performance assessments based on the spectral distortion measure are also given. The theoretical distortion-rate function for the spectral distortion measure is estimated for voiced and unvoiced LPC parameters and compared with experimental results obtained with unstructured vector quantization (VQ). These results show a saving of at least 2 b/frame for unvoiced spectra compared to voiced spectra at the same spectral distortion performance.
