Abstract

Our previous work has indicated that multilayer perceptrons trained with the backpropagation algorithm have great difficulty in learning continuous mappings to the accuracy required for speech synthesis. Vector quantization allows a network instead to be trained to select a sequence of entries from a codebook of speech parameter vectors. For the network to generalise meaningfully, some correlation must exist between the codebook vectors and the indices by which they are recalled; otherwise the network is attempting to learn an essentially random mapping. This paper describes the Hamming learning vector quantizer (H-LVQ), which is used to generate a codebook of speech vectors in which such a correlation exists.
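The property the abstract describes can be sketched as follows. This is a hypothetical illustration, not the paper's H-LVQ algorithm: it builds a small random codebook, orders it along its principal axis so that nearby indices hold similar vectors (a stand-in for the index/vector correlation H-LVQ is designed to produce), and quantizes an input by nearest-neighbour search. All names and data here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical codebook of 2-D "speech parameter" vectors (random, illustrative).
codebook = rng.normal(size=(16, 2))

# Order the codebook along its first principal component so that adjacent
# indices hold similar vectors -- a crude stand-in for the correlation
# between indices and vectors that H-LVQ aims to provide.
centred = codebook - codebook.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
codebook = codebook[np.argsort(centred @ vt[0])]

def quantize(x, cb):
    """Return the index of the nearest codebook vector (Euclidean distance)."""
    return int(np.argmin(np.linalg.norm(cb - x, axis=1)))

idx = quantize(np.array([0.1, -0.2]), codebook)
```

With such an ordering, a network predicting indices makes a "small" error when it outputs a neighbouring index, since neighbouring entries encode similar parameter vectors; with an unordered codebook, index errors bear no relation to acoustic distance.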
