Abstract

Many proposals have been presented for the acquisition of inverse models in multilayered neural networks. However, most are based on the backpropagation rule or improvements to it. When a multilayered neural network is trained by the backpropagation rule, a supervisor signal must be provided for the output layer, and a dedicated path is needed to propagate the learning signal in the reverse direction. In addition, convergence is slow because the weights are updated by the method of steepest descent. Consequently, this paper proposes a forward-propagation rule in which the neural network model is trained by propagating the motion error exhibited by the control object in the forward direction through the network. In the proposed algorithm, the extended Newton's method is used to derive the goal signal (which corresponds to the supervisor signal) in the hidden layer and the output layer. Since linear multiple regression can be used to update the weights so that the goal signals are realized, the number of weight-update iterations can be reduced compared to the method of steepest descent. A computer simulation was performed for acquisition of a two-link arm model, and the effectiveness of the proposed learning scheme was verified. © 2005 Wiley Periodicals, Inc. Electron Comm Jpn Pt 2, 88(2): 59–68, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ecjb.20148
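
The abstract mentions two ingredients: a Newton-type derivation of layer-wise goal signals and a regression-based (rather than gradient-descent) weight update. The sketch below is only an illustration of those two ideas for a single tanh layer; the layer sizes, the tanh activation, and the guarded elementwise Newton step are assumptions made for this example and are not taken from the paper.

```python
# Hypothetical one-layer illustration: y = tanh(X @ W).
# Step 1: derive a goal signal for the pre-activation by a Newton-like step.
# Step 2: realize that goal by linear (least-squares) regression on the weights,
#         instead of iterating steepest descent.
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_in, n_out = 20, 3, 2
X = rng.normal(size=(n_samples, n_in))           # layer inputs
W = rng.normal(scale=0.1, size=(n_in, n_out))    # current weights
Y = np.tanh(X @ W)                               # current layer outputs

# Desired outputs (in the paper these would follow from the control object's
# motion error; here they are synthetic).
Y_goal = np.tanh(X @ rng.normal(scale=0.5, size=(n_in, n_out)))

# --- Step 1: Newton-like goal signal for the pre-activation u = X @ W --------
U = X @ W
error = Y_goal - Y
jac = 1.0 - Y ** 2                               # d tanh(u)/du, elementwise
U_goal = U + error / np.maximum(jac, 1e-6)       # guarded Newton step

# --- Step 2: weight update by linear multiple regression ---------------------
# Solve X @ W_new ≈ U_goal in the least-squares sense (one regression pass).
W_new, *_ = np.linalg.lstsq(X, U_goal, rcond=None)

print("mean output error before:", np.abs(Y_goal - np.tanh(X @ W)).mean())
print("mean output error after: ", np.abs(Y_goal - np.tanh(X @ W_new)).mean())
```

In this toy setting the single regression pass replaces many gradient-descent iterations, which is the contrast the abstract draws; how the goal signals are actually propagated through several layers is described in the full paper.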
