Abstract
In this paper we consider the justification of the "off-line approximation" in continuous-time neural networks from a rigorous mathematical point of view. In real-time models, the behavior of a network is characterized by two distinct dynamics evolving on different time scales: the weight dynamics, which are "slow," and the activation dynamics, which are "fast." The off-line approximation assumes that, during the learning process, the neural activities are at their steady states. Such an approximation is commonly taken for granted in analyses of network behavior. In this paper we consider convergent networks and prove that this approximation is valid on the time scale 1/ε, where ε is the learning-rate parameter controlling the learning velocity. We apply these results to prove the stability of Hebbian learning in a two-layered neural network, which can be seen as a continuous-time version of Kohonen's self-organizing model.
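To make the setting concrete, the following is a minimal numerical sketch of a singularly perturbed two-time-scale system of the kind the abstract describes. The particular dynamics, variable names, and parameter values are illustrative assumptions, not the paper's model: a convergent (linear relaxation) activation dynamics coupled to a slow Hebbian weight dynamics with learning rate ε, compared against the off-line approximation in which the activations are replaced by their steady state, integrated up to time 1/ε.

```python
import numpy as np

# Illustrative two-time-scale system (NOT the paper's model):
#   dx/dt = -x + W @ u            fast, convergent activation dynamics
#   dW/dt = eps * outer(x, u)     slow Hebbian weight dynamics
# The "off-line approximation" replaces x in the weight equation by its
# steady state x*(W) = W @ u; the claim is validity on times of order 1/eps.

eps = 1e-2                         # learning rate (slow time-scale parameter)
dt = 1e-3                          # Euler integration step
steps = int((1.0 / eps) / dt)      # integrate up to t = 1/eps

rng = np.random.default_rng(0)
u = rng.normal(size=3)
u /= np.linalg.norm(u)             # normalized input keeps growth moderate
W0 = 0.1 * rng.normal(size=(2, 3))

# (a) full coupled dynamics
W, x = W0.copy(), np.zeros(2)
for _ in range(steps):
    x = x + dt * (-x + W @ u)
    W = W + dt * eps * np.outer(x, u)

# (b) off-line approximation: activations assumed at steady state W @ u
W_off = W0.copy()
for _ in range(steps):
    x_star = W_off @ u
    W_off = W_off + dt * eps * np.outer(x_star, u)

print("max |W_full - W_offline| at t = 1/eps:", np.abs(W - W_off).max())
```

Under these assumptions the two weight trajectories stay close over the whole interval [0, 1/ε], which is the kind of statement the paper proves rigorously for convergent networks.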