Abstract

In this paper, fully connected RTRL (real-time recurrent learning) neural networks are studied. An autonomous learning algorithm has been developed to learn the dynamical behaviour of continuous-time processes and to predict numerical time series. The originality of the method lies in the gradient-based adaptation of the learning rate and of the time parameter of the neurons, using a small-perturbation technique. Starting from zero initial conditions (neural states, learning rate, time parameter and weight matrix), the evolution is driven entirely by the dynamics of the learning data. Stability issues are discussed, and several examples are investigated in order to compare the performance of the adaptive learning-rate and time-parameter algorithm with that of the constant-parameter one.
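As a rough illustration of the idea summarised above, and not the paper's exact algorithm, the sketch below trains a small Euler-discretised continuous-time fully connected network with standard RTRL sensitivities and adapts the learning rate through a finite-difference (small-perturbation) probe of the one-step prediction error. The network size, step sizes, tanh activation, the way the series drives the network, and the constants `mu` and `eps` are assumptions made for illustration only; the time parameter `tau` is kept fixed here, although the paper adapts it by the same mechanism.

```python
import numpy as np

# Illustrative sizes and step parameters (assumptions, not values from the paper).
N = 5            # fully connected neurons; neuron 0 is read out as the prediction
dt = 0.1         # Euler integration step
tau = 1.0        # neuron time parameter (kept fixed here; the paper adapts it too)
eta = 0.0        # learning rate, started at zero as in the paper
mu = 1e-3        # meta-rate for adapting eta (assumed)
eps = 1e-4       # small perturbation used to probe dE/d(eta) (assumed)

x = np.zeros(N)              # neural states, zero initial conditions
W = np.zeros((N, N))         # weight matrix, zero initial conditions
P = np.zeros((N, N, N))      # RTRL sensitivities P[k, i, j] = dx_k / dW_ij


def step(x, W, u):
    """One Euler step of tau * dx/dt = -x + W @ tanh(x) + input."""
    inp = np.zeros(N)
    inp[-1] = u                                 # the series drives the last neuron
    return x + (dt / tau) * (-x + W @ np.tanh(x) + inp)


series = np.sin(0.2 * np.arange(2000))          # toy series, predicted one step ahead

for t in range(len(series) - 1):
    u, d = series[t], series[t + 1]

    # Forward pass and RTRL sensitivity update:
    # tau * dP[k,i,j]/dt = -P[k,i,j] + sum_l W[k,l] tanh'(x_l) P[l,i,j] + delta_{ki} tanh(x_j)
    s, sp = np.tanh(x), 1.0 - np.tanh(x) ** 2
    x_new = step(x, W, u)
    WP = np.einsum('kl,l,lij->kij', W, sp, P)
    drive = np.zeros((N, N, N))
    drive[np.arange(N), np.arange(N), :] = s    # delta_{ki} * tanh(x_j)
    P = P + (dt / tau) * (-P + WP + drive)

    # Gradient of the squared prediction error of neuron 0 with respect to the weights.
    err = d - x_new[0]
    grad = err * P[0]

    # Small-perturbation adaptation of the learning rate: probe the one-step
    # error obtained with eta +/- eps and move eta down the estimated gradient.
    e_plus = d - step(x, W + (eta + eps) * grad, u)[0]
    e_minus = d - step(x, W + (eta - eps) * grad, u)[0]
    dE_deta = (e_plus ** 2 - e_minus ** 2) / (2.0 * eps)
    eta = max(0.0, eta - mu * dE_deta)

    W = W + eta * grad                          # descent step on the prediction error
    x = x_new

print("final learning rate:", eta)
print("final one-step prediction error:", abs(series[-1] - x[0]))
```

Note that with everything initialised to zero, the first few iterations leave the weights unchanged until the input has driven the states, sensitivities and learning rate away from zero, which mirrors the abstract's claim that the evolution is driven entirely by the learning data.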
