Abstract

A mathematical model of an open dynamic recurrent neural network without hidden neurons is described. Such a network has dynamic attractors: sequences of transitions from one attractor state to another driven by input signal sequences. The concept of “freezing” these dynamics by means of a virtual static recurrent network is proposed. The solution of a generalized stability equation is used to develop a non-iterative method for training dynamic recurrent networks, and estimates of the attraction radius and of the training set size are obtained. The use of the open dynamic recurrent network as a dynamic associative memory is studied, and it is shown that dynamic attractors can be controlled by changing the level of influence of different feedback components. A software model of the network was developed, and its behavior in reproducing sequences of distorted vectors was studied experimentally. An analogy is noted between dynamic attractors and neural activity patterns, supporting the hypothesis of local neural ensembles in the neocortex with structure and functions similar to those of dynamic recurrent networks.
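
The following minimal Python sketch makes the dynamic-associative-memory scenario concrete: a recurrent network with no hidden neurons stores a short sequence of bipolar patterns as a chain of attractor states and recalls it from a distorted starting vector. It is not the paper's method: the generalized stability equation and the virtual static (“frozen”) network are replaced by a closed-form pseudo-inverse rule that only illustrates what a non-iterative training step looks like, external inputs and feedback-influence control are omitted, and all names and parameters (N, L, flip_frac, step) are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative parameters (assumed, not taken from the paper).
    N = 64            # number of visible neurons; there is no hidden layer
    L = 6             # length of the stored sequence of attractor states
    flip_frac = 0.15  # fraction of components flipped in the distorted input

    # Stored sequence of bipolar patterns x_0 -> x_1 -> ... -> x_{L-1}.
    patterns = rng.choice([-1.0, 1.0], size=(L, N))

    # Non-iterative "training": a pseudo-inverse (projection) rule computed in
    # one shot so that W x_k ~= x_{k+1}. This stands in for the solution of the
    # paper's generalized stability equation, which is not reproduced here.
    X = patterns[:-1].T        # N x (L-1) matrix of current states
    Y = patterns[1:].T         # N x (L-1) matrix of successor states
    W = Y @ np.linalg.pinv(X)  # closed-form weights, no iterative learning

    def step(x, w=W):
        """One update of the recurrent feedback: hard-threshold the net input."""
        return np.sign(w @ x)

    # Distort the first pattern and let the dynamics reproduce the sequence.
    x = patterns[0].copy()
    flip = rng.choice(N, size=int(flip_frac * N), replace=False)
    x[flip] *= -1.0            # corrupted initial state

    for t in range(1, L):
        x = step(x)
        overlap = float(x @ patterns[t]) / N  # +1.0 means exact recall
        print(f"step {t}: overlap with stored pattern {t} = {overlap:+.2f}")

The printed overlaps show recall quality at each step; under this stand-in rule a moderately distorted start typically converges onto the stored chain, which is the qualitative behavior the abstract describes for reproducing sequences of distorted vectors.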
