Abstract

A mathematical model of an open dynamic recurrent neural network without hidden neurons is described. Such a network has dynamic attractors: sequences of transitions from one attractor state to another, driven by input signal sequences. The concept of "freezing" these dynamics by means of a virtual static recurrent network is proposed. A solution of the generalized stability equation is used to develop a non-iterative method for training dynamic recurrent networks. Estimates of the attraction radius and of the training set size are obtained. The use of the open dynamic recurrent network as a dynamic associative memory is studied, and the possibility of controlling dynamic attractors by changing the relative influence of different feedback components is shown. A software model of the network was developed, and its behavior in reproducing sequences of distorted vectors was studied experimentally. An analogy is noted between dynamic attractors and neural activity patterns, which supports the hypothesis of local neural ensembles in the neocortex with a structure and functions similar to those of dynamic recurrent networks.
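
The following is a minimal illustrative sketch, not the paper's actual training procedure: it shows a recurrent network with no hidden neurons that stores a sequence of bipolar states as a cyclic "dynamic attractor" and is trained non-iteratively with a pseudoinverse (projection) rule mapping each state to its successor. The paper instead solves a generalized stability equation; the pseudoinverse rule here is only an assumed stand-in, and all names, sizes, and the distortion level are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 64   # network size (no hidden layer)
seq_len = 5      # length of the stored state sequence

# Random bipolar (+1/-1) states forming the sequence to be memorized.
states = rng.choice([-1.0, 1.0], size=(seq_len, n_neurons))

# Heteroassociative targets: each state should map to the next one (cyclically).
X = states.T                       # columns are current states
Y = np.roll(states, -1, axis=0).T  # columns are successor states

# Non-iterative weight computation: solve W X = Y via the pseudoinverse of X.
W = Y @ np.linalg.pinv(X)

def step(x, w=W):
    """One synchronous update of the hiddenless recurrent network."""
    return np.sign(w @ x)

# Recall test: start from a distorted version of the first state and check
# whether the network is attracted back onto the stored sequence.
probe = states[0].copy()
flip = rng.choice(n_neurons, size=n_neurons // 10, replace=False)
probe[flip] *= -1.0  # roughly 10% of components distorted

x = probe
for t in range(seq_len):
    x = step(x)
    overlap = np.mean(x == states[(t + 1) % seq_len])
    print(f"step {t + 1}: overlap with stored state = {overlap:.2f}")
```

With the sequence length well below the number of neurons, the pseudoinverse rule reproduces each transition exactly for undistorted states, so the printed overlaps indicate how the distorted probe is pulled toward the stored cycle over successive steps.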
