Attractor neural network models have been extensively studied as mechanisms for maintaining working-memory states, spatial representations, and other brain processes. The coexistence of multiple attractors in nonlinear neural networks is particularly worth studying. An attractor is a stable state of a dynamical system: all nearby states are eventually pulled toward it over time. Attractors can be regarded as low-dimensional state-space representations of data, and different kinds of attractors can be used to process different kinds of information. The coexistence of multiple attractors means that a single neural network can store multiple patterns, that is, different memory mechanisms can operate at the same time. Investigating the coexistence and competition of attractors in Lotka–Volterra (LV) recurrent neural networks is a pivotal avenue for understanding the dynamical underpinnings of complex systems, particularly in computational neuroscience and ecological modeling. This paper studies the coexistence of different types of attractors in asymmetric Lotka–Volterra recurrent neural networks. Through weight-matrix partitioning and matrix decomposition theory, we obtain explicit expressions for continuous attractors and discrete attractors. Since the neurons can be grouped in multiple ways, each weight-matrix block satisfying certain conditions corresponds to a different attractor. Unlike in traditional symmetric networks, the eigenvalues of an asymmetric weight matrix may be complex, which may correspond to ring attractors. Our results show that attractors can coexist under certain conditions, depending on the weight matrix and the external input. Moreover, simulation experiments show that external input can cause competition among attractors. Finally, we provide several simulations of attractor coexistence to illustrate our theory.
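To make the setting concrete, the following is a minimal sketch, not the paper's exact model: one commonly studied form of the Lotka–Volterra recurrent network is dx_i/dt = x_i (b_i - x_i + Σ_j W_ij x_j), where b is the external input and W the (possibly asymmetric) weight matrix. With strong mutual inhibition in a weight block, nearby initial conditions fall into different discrete attractors, illustrating coexistence. The equations, parameter values, and function names here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_lv(W, b, x0, dt=0.01, steps=20000):
    """Euler-integrate a Lotka-Volterra recurrent network
    dx/dt = x * (b - x + W @ x); returns the final state.
    (Illustrative model form, not necessarily the paper's.)"""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * x * (b - x + W @ x)
        x = np.maximum(x, 0.0)  # LV states remain non-negative
    return x

# Mutual inhibition between two neurons: a winner-take-all block
# whose two stable fixed points act as coexisting discrete attractors.
W = np.array([[0.0, -2.0],
              [-2.0, 0.0]])
b = np.array([1.0, 1.0])  # uniform external input

# Two nearby initial conditions converge to different attractors.
x_a = simulate_lv(W, b, [0.6, 0.4])  # -> approximately (1, 0)
x_b = simulate_lv(W, b, [0.4, 0.6])  # -> approximately (0, 1)
print(np.round(x_a, 3), np.round(x_b, 3))
```

Biasing the input b toward one neuron makes its attractor's basin dominate, which is one simple way external input can drive the competition among attractors that the abstract describes.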