Abstract
A class of data-reusing learning algorithms for real-time recurrent neural networks (RNNs) is analyzed. The analysis covers a general sigmoid nonlinear neuron activation function under the real-time recurrent learning (RTRL) training algorithm. Error bounds and convergence conditions for such data-reusing algorithms are provided for both contractive and expansive activation functions. The analysis is undertaken for various configurations that are generalizations of a linear-structure infinite impulse response (IIR) adaptive filter.
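The distinction between contractive and expansive activation functions can be illustrated with a minimal sketch. Assuming the logistic sigmoid σ(x) = 1/(1 + e^(−βx)) as the activation (a common choice, though the abstract treats a general sigmoid), its maximum derivative is β/4, attained at x = 0; the function is therefore a contraction mapping when β < 4 and expansive when β > 4. The helper names below are illustrative, not from the paper:

```python
import numpy as np

def logistic(x, beta=1.0):
    """Logistic sigmoid with slope parameter beta (illustrative example)."""
    return 1.0 / (1.0 + np.exp(-beta * x))

def max_slope(beta):
    """Maximum derivative of the logistic sigmoid, attained at x = 0."""
    return beta / 4.0

# A sigmoid is contractive when its maximum slope is below 1,
# i.e. beta < 4 for the logistic function, and expansive when beta > 4.
for beta in (1.0, 4.0, 8.0):
    kind = "contractive" if max_slope(beta) < 1.0 else "non-contractive"
    print(f"beta = {beta}: max slope = {max_slope(beta)}, {kind}")
```

Convergence conditions of the kind the abstract describes typically hinge on such a bound on the activation slope, since it controls how perturbations propagate through the recurrent weight updates.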