Abstract

The ability of a neural network to learn on-line is crucial for real-time speech recognition systems. Analog neural network systems are preferred to their digital counterparts mainly because of the high speed they can attain. However, the training method adopted also affects the performance of the neural network. A conventional error backpropagation network usually requires a long convergence time for correct weight adjustment, since the sigmoid function of a conventional multilayer network gives a smooth response over a wide range of input values. In contrast, the Gaussian function responds significantly only in local regions of the input space. Thus, backpropagation training is more efficient in networks whose hidden layer is based on Gaussian functions, i.e. radial basis function (RBF) networks, than in those based on sigmoid functions. This paper proposes an analog VLSI chip that can be cascaded to build an RBF neural network system for phoneme recognition.
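To make the contrast between the two activation types concrete, the sketch below compares a sigmoid unit with a Gaussian RBF unit and shows how an RBF hidden layer responds only near its centers. The centers, widths, and layer structure are illustrative assumptions for the sketch, not details of the proposed chip.

```python
import numpy as np

def sigmoid(x):
    """Conventional sigmoid unit: responds smoothly over a wide input range."""
    return 1.0 / (1.0 + np.exp(-x))

def gaussian_rbf(x, center, width):
    """Gaussian RBF unit: responds significantly only near its center."""
    return np.exp(-np.sum((x - center) ** 2) / (2.0 * width ** 2))

def rbf_hidden_layer(x, centers, widths):
    """RBF hidden layer: each unit covers a local region of input space,
    so a training example mainly updates the few units whose centers lie nearby."""
    return np.array([gaussian_rbf(x, c, w) for c, w in zip(centers, widths)])

# Example with a 2-D input and three hypothetical RBF centers.
x = np.array([0.5, -0.2])
centers = [np.array([0.5, 0.0]), np.array([-1.0, 1.0]), np.array([2.0, 2.0])]
widths = [0.5, 0.5, 0.5]
print(rbf_hidden_layer(x, centers, widths))  # only the nearby unit responds strongly
```

Because each Gaussian unit's output (and hence its error gradient) is essentially zero away from its center, a backpropagation update for a given input touches only a small, local subset of hidden units, which is the intuition behind the faster convergence claimed for RBF networks.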

