Abstract

It is shown that the structure of the standard recurrent neural network has the capacity to model a broad class of nonlinear dynamic systems. The key result is that the structure of the recurrent neural network permits the internal formation of a single hidden layer/linear output layer feedforward neural network to approximate the next system state as a function of the current system state and the inputs. The recurrent nature of the network allows the single weight matrix to serve as both the input and output weight matrices of the internal feedforward network.
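The architecture described above can be sketched concretely. The following is a minimal illustrative sketch, not the paper's construction: it assumes a standard discrete-time recurrent update of the form x[t+1] = W x[t] + B u[t] (with a nonlinearity on part of the state), and partitions the state so that a block of "hidden" units plays the role of the internal feedforward network's hidden layer. The block structure of the single matrix W then supplies both the input weights (state → hidden) and the linear output weights (hidden → state). All dimensions, the activation, and the input signal are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_state, n_hidden, n_input = 2, 8, 1
N = n_state + n_hidden          # total number of recurrent units

sigma = np.tanh                 # hidden-layer nonlinearity (assumed)

# One recurrent weight matrix W whose blocks play two roles:
#   W[n_state:, :n_state] -> "input" weights of the internal feedforward net
#   W[:n_state, n_state:] -> linear "output" weights of that net
W = np.zeros((N, N))
W[n_state:, :n_state] = rng.normal(size=(n_hidden, n_state))
W[:n_state, n_state:] = rng.normal(size=(n_state, n_hidden))

B = rng.normal(size=(N, n_input))   # external input weights

def step(x, u):
    """One recurrent update: nonlinear hidden block, linear state block.

    The state units receive a linear combination of the hidden units,
    mirroring the single hidden layer / linear output layer structure.
    """
    pre = W @ x + B @ u
    nxt = pre.copy()
    nxt[n_state:] = sigma(pre[n_state:])   # hidden units: nonlinear
    return nxt                             # state units: linear read-out

x = np.zeros(N)
for t in range(5):
    u = np.array([np.sin(0.3 * t)])        # illustrative input signal
    x = step(x, u)

print(x.shape)   # full recurrent state: (10,)
```

Note how the same matrix W carries both weight layers of the internal approximator: the state-to-hidden block feeds the current state into the hidden layer, and the hidden-to-state block linearly reads out the approximated next state, which is exactly the dual role the abstract attributes to the recurrent weight matrix.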
