Abstract

The neural prediction model (NPM) proposed by Iso and Watanabe is a successful example of a speech recognition neural network with a high recognition rate. This model uses multilayer perceptrons for pattern prediction (not for pattern recognition), and achieves a recognition rate as high as 99.8% for speaker-independent isolated words. This paper proposes a recurrent neural prediction model (RNPM) and a recurrent network architecture for this model. The proposed model greatly reduces the size of the network while achieving a recognition rate as high as that of the original model, with high learning efficiency, for speaker-independent isolated words. © 2003 Wiley Periodicals, Inc. Syst Comp Jpn, 34(2): 100–107, 2003; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/scj.1194
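The prediction-based recognition scheme the abstract describes can be illustrated with a minimal sketch: each word model holds a predictor that maps the current feature frame to the next one, and recognition picks the model with the smallest accumulated prediction error over the utterance. The linear predictors and toy data below are illustrative stand-ins for the paper's MLP and recurrent predictors, which are not specified here.

```python
import numpy as np

def prediction_error(frames, W):
    """Sum of squared errors when W predicts frame t+1 from frame t."""
    pred = frames[:-1] @ W                      # predicted next frames
    return float(np.sum((frames[1:] - pred) ** 2))

def recognize(frames, models):
    """Return the label whose predictor fits the utterance best."""
    return min(models, key=lambda label: prediction_error(frames, models[label]))

# Toy example with two 2-dimensional "word" dynamics (hypothetical labels):
# word "a" repeats the previous frame, word "b" rotates it by 90 degrees.
rng = np.random.default_rng(0)
models = {
    "a": np.eye(2),
    "b": np.array([[0.0, 1.0], [-1.0, 0.0]]),
}

# Generate an utterance that follows word "b" dynamics.
frames = [rng.normal(size=2)]
for _ in range(9):
    frames.append(frames[-1] @ models["b"])
frames = np.array(frames)

print(recognize(frames, models))  # the rotation model fits best
```

Classifying by accumulated prediction error rather than by a direct class score is the defining idea of the NPM family; the RNPM's contribution, per the abstract, is making the predictor recurrent so the network can be much smaller.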
