Abstract

The paper describes a high-quality, low-bit-rate (about 2.4 kbit/s) speech coder that uses adaptive wavelets to model speech. The optimal wavelet parameters corresponding to the speech model that produced a particular sound are obtained using a feedforward neural network. These parameters are then quantized using scalar and vector quantization (VQ) techniques to reduce the number of bits required to transmit a speech signal. Both quantizers are described in the paper. The bit rate, i.e., the number of bits per second of speech, is computed for both quantizers, and it is shown that the bit rate can be reduced significantly using a VQ method. Comparative perception results, obtained by listening to speech synthesized from both scalar- and vector-quantized wavelet parameters, are reported. In addition, the change in error obtained by varying the number of wavelets used to represent a speech signal is analyzed.
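To make the bit-rate comparison concrete, the following is a minimal, illustrative sketch (not the paper's actual coder) of why vector quantization can transmit a frame of parameters in far fewer bits than scalar quantization: scalar quantization spends a fixed number of bits on every parameter, while VQ sends only the index of the nearest codebook vector. The frame size, parameter dimension, bit allocations, and codebook here are all assumed for illustration.

```python
import numpy as np

# Hypothetical wavelet-parameter frames: 100 frames of 4 parameters each
# (dimensions chosen purely for illustration).
rng = np.random.default_rng(0)
params = rng.normal(size=(100, 4))

# Scalar quantization: each of the 4 parameters is quantized
# independently, here to 8 bits apiece.
bits_scalar_per_frame = params.shape[1] * 8  # 4 * 8 = 32 bits per frame

# Vector quantization: each 4-dimensional parameter vector is mapped to
# the nearest entry of a 256-entry codebook (a real codebook would be
# trained, e.g. with LBG/k-means; this one is random for illustration),
# so only the codebook index needs to be transmitted.
codebook = rng.normal(size=(256, 4))
dists = np.linalg.norm(params[:, None, :] - codebook[None, :, :], axis=2)
indices = dists.argmin(axis=1)  # one index per frame
bits_vq_per_frame = int(np.ceil(np.log2(len(codebook))))  # log2(256) = 8

print(bits_scalar_per_frame, bits_vq_per_frame)
```

Under these assumed sizes, VQ sends 8 bits per frame instead of 32, a fourfold reduction; the trade-off is the distortion introduced by restricting each parameter vector to the codebook.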

