Abstract

The LPC normalized error provides a measure of the success of linear prediction analysis in modeling a speech signal. Very little is known about the variation of the normalized LPC error as a function of the position of the analysis frame. In this talk we show that the LPC normalized error exhibits substantial sample-to-sample variation for voiced speech under all three LPC analysis methods: the covariance method, the autocorrelation method, and the lattice method. The implication of this result is that standard methods of LPC analysis are often inadequate, in that the error signal is uniformly sampled at a low rate (on the order of 100 Hz), leading to aliased results. For applications such as word recognition with frame-to-frame distance calculations using the normalized error [Itakura, IEEE Trans. Acoust. Speech Signal Process. (Feb. 1975)], the errors due to uniform sampling can be severe. For speech synthesis applications, the effect of uniform sampling of the error signal is a small but noticeable roughness in the synthetic speech. Various strategies for minimizing the aliasing will be discussed.
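The quantity under study can be illustrated with a minimal sketch. This is not the authors' code; it is a textbook autocorrelation-method LPC computation (Levinson-Durbin recursion) whose returned value is the normalized prediction error E_p/R(0), evaluated at successive one-sample frame offsets to expose the frame-position sensitivity the abstract describes. The window choice, analysis order, and test signal are illustrative assumptions.

```python
import numpy as np

def lpc_normalized_error(frame, order=10):
    # Autocorrelation-method LPC via the Levinson-Durbin recursion.
    # Returns the normalized prediction error E_p / R(0), which lies in (0, 1].
    w = frame * np.hamming(len(frame))          # tapering window (assumed choice)
    r = np.correlate(w, w, mode="full")[len(w) - 1 : len(w) + order]
    if r[0] <= 0:                               # silent frame: no prediction gain
        return 1.0
    a = np.zeros(order + 1)                     # predictor polynomial, a[0] = 1
    a[0] = 1.0
    err = r[0]                                  # zeroth-order error E_0 = R(0)
    for i in range(1, order + 1):
        # Reflection coefficient from the current predictor and autocorrelation.
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        a_prev = a[:i].copy()                   # update must read the old coefficients
        a[1:i + 1] += k * a_prev[::-1]
        err *= 1.0 - k * k                      # E_i = E_{i-1} (1 - k_i^2)
    return err / r[0]

# Sliding the analysis frame one sample at a time shows how the normalized
# error varies with frame position (here on a synthetic periodic signal).
fs = 8000
t = np.arange(fs // 10) / fs
x = np.sin(2 * np.pi * 120 * t) + 0.5 * np.sin(2 * np.pi * 240 * t)
errs = [lpc_normalized_error(x[n:n + 200]) for n in range(64)]
print(min(errs), max(errs))
```

Sampling this error sequence only once per frame (roughly every 10 ms, i.e. ~100 Hz) is the uniform-sampling scheme whose aliasing the talk addresses.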
