Modeling Higher-Order Interactions in Sparse and Heavy-Tailed Neural Population Activity
Neurons process sensory stimuli efficiently, showing sparse yet highly variable ensemble spiking activity that involves structured higher-order interactions. Notably, while neural populations are mostly silent, they occasionally exhibit highly synchronous activity, resulting in sparse and heavy-tailed spike-count distributions. However, the mechanistic origin of this behavior remains unclear; specifically, it is not known what types of nonlinear properties in individual neurons induce such population-level patterns. In this study, we derive sufficient conditions under which the joint activity of homogeneous binary neurons generates sparse and widespread population firing rate distributions in infinitely large networks. We then propose a subclass of exponential family distributions that satisfies these conditions. This class incorporates structured higher-order interactions with alternating signs and shrinking magnitudes, along with a base-measure function that offsets distributional concentration, giving rise to parameter-dependent sparsity and heavy-tailed population firing rate distributions. Analysis of recurrent neural networks that recapitulate these distributions reveals that individual neurons possess a threshold-like nonlinearity followed by supralinear activation, which jointly facilitate sparse and synchronous population activity. These nonlinear features resemble those of modern Hopfield networks, suggesting a connection between widespread population activity and a network's memory capacity. The theory establishes sparse and heavy-tailed distributions for binary patterns, forming a foundation for developing energy-efficient spike-based learning machines.
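The construction can be illustrated numerically. Below is a minimal sketch, assuming a homogeneous model in which every m-tuple of neurons interacts with alternating-sign, shrinking weight theta_m = (-1)^(m+1) * phi / m and the base measure exactly cancels the binomial multiplicity C(N, k); these choices are illustrative assumptions, not the paper's exact parameterization. By the identity sum_{m=1}^{k} (-1)^(m+1) C(k, m) / m = H_k (the k-th harmonic number), the population count k then follows P(k) proportional to exp(phi * H_k), roughly a power law k^phi, which for phi < 0 is sparse (most mass at k = 0) yet heavy-tailed.

```python
import numpy as np

N = 200          # network size (illustrative assumption)
phi = -2.0       # interaction scale (assumption); phi < 0 favors sparsity

k = np.arange(N + 1)
# harmonic numbers H_0 .. H_N; H_k equals the alternating higher-order sum
H = np.concatenate(([0.0], np.cumsum(1.0 / np.arange(1, N + 1))))

# the base measure cancels the C(N, k) multiplicity by construction, so the
# log-probability of a count k reduces to the interaction energy phi * H_k
logp = phi * H
p = np.exp(logp - logp.max())
p /= p.sum()

print(f"P(k = 0)    = {p[0]:.3f}   (sparsity: the population is mostly silent)")
print(f"P(k >= N/2) = {p[k >= N // 2].sum():.1e} (slowly decaying tail)")
print(f"mean rate   = {(p * k).sum() / N:.4f}")
```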
- Conference Article
2
- 10.1109/ijcnn.1992.227271
- Jun 7, 1992
Stability analysis of recurrent neural networks with a learning rule based on the concept of an equilibrium manifold is considered. Recurrent neural networks with learning rules have changing equilibria during the learning process. The authors design a learning rule that enables the recurrent neural network to store a desired pattern based on the concept of the equilibrium manifold. A stability criterion for the learning neural network is established and is a function of the learning rate, the sigmoid function, and the upper bound of the interconnection strength.
- Research Article
602
- 10.1109/tnnls.2014.2317880
- Jul 1, 2014
- IEEE Transactions on Neural Networks and Learning Systems
Stability problems of continuous-time recurrent neural networks have been extensively studied, and many papers have been published in the literature. The purpose of this paper is to provide a comprehensive review of the research on stability of continuous-time recurrent neural networks, including Hopfield neural networks, Cohen-Grossberg neural networks, and related models. Since time delay is inevitable in practice, stability results of recurrent neural networks with different classes of time delays are reviewed in detail. For the case of delay-dependent stability, the results on how to deal with the constant/variable delay in recurrent neural networks are summarized. The relationship among stability results in different forms, such as algebraic inequality forms, M-matrix forms, linear matrix inequality forms, and Lyapunov diagonal stability forms, is discussed and compared. Some necessary and sufficient stability conditions for recurrent neural networks without time delays are also discussed. Concluding remarks and future directions of stability analysis of recurrent neural networks are given.
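Two of the condition forms the review compares can be checked in a few lines. Below is a hedged sketch (the network size and weights are assumptions, not taken from the review) testing an algebraic-norm condition and an M-matrix condition for a delay-free Hopfield-type network dx/dt = -x + W*sigma(x) + u with a 1-Lipschitz activation sigma.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
W = 0.3 * rng.standard_normal((n, n))   # connection weights (assumption)

# (1) algebraic/norm form: ||W||_2 < 1 guarantees a unique, globally
#     asymptotically stable equilibrium (contraction argument)
spectral_ok = np.linalg.norm(W, 2) < 1.0

# (2) M-matrix form: I - |W| a nonsingular M-matrix (a Z-matrix whose
#     eigenvalues all have positive real part) is another sufficient test
M = np.eye(n) - np.abs(W)
m_matrix_ok = bool(np.all(np.linalg.eigvals(M).real > 0))

print("norm condition ||W||_2 < 1   :", spectral_ok)
print("M-matrix condition on I - |W|:", m_matrix_ok)
```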
- Research Article
1
- 10.1515/auto-2022-0032
- Aug 4, 2022
- at - Automatisierungstechnik
Neural networks are widely applied in control applications, yet providing safety guarantees for neural networks is challenging due to their highly nonlinear nature. We provide a comprehensive introduction to the analysis of recurrent neural networks (RNNs) using robust control and dissipativity theory. Specifically, we consider the $\mathcal{H}_2$-performance and the $\ell_2$-gain to quantify the robustness of dynamic RNNs with respect to input perturbations. First, we analyze the robustness of RNNs using the proposed robustness certificates, and then we present linear matrix inequality constraints to be used in training of RNNs to enforce robustness. Finally, we illustrate in a numerical example that the proposed approach enhances the robustness of RNNs.
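For intuition about the $\ell_2$-gain such certificates bound, simulation can only lower-bound it. The sketch below (model structure, matrices, and horizon are assumptions chosen for illustration) estimates such a lower bound for a small discrete-time RNN by maximizing the output-to-input energy ratio over random inputs, whereas the paper obtains guaranteed bounds via LMIs.

```python
import numpy as np

rng = np.random.default_rng(1)
nx, nu, T = 5, 2, 200                     # state/input sizes, horizon
A = 0.4 * rng.standard_normal((nx, nx))   # hypothetical recurrent weights
B = rng.standard_normal((nx, nu))         # input weights
C = rng.standard_normal((1, nx))          # output readout

best = 0.0
for _ in range(100):                      # random input experiments
    u = rng.standard_normal((T, nu))
    x = np.zeros(nx)
    out_energy = in_energy = 0.0
    for t in range(T):
        x = np.tanh(A @ x + B @ u[t])     # RNN state update
        y = C @ x
        out_energy += float(y @ y)
        in_energy += float(u[t] @ u[t])
    best = max(best, np.sqrt(out_energy / in_energy))

print(f"empirical lower bound on the l2-gain: {best:.3f}")
```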
- Research Article
30
- 10.1016/j.neucom.2016.04.052
- May 13, 2016
- Neurocomputing
Stability analysis of recurrent neural networks with interval time-varying delay via free-matrix-based integral inequality
- Research Article
8
- 10.1016/j.ejcon.2021.06.022
- Jul 10, 2021
- European Journal of Control
[Formula omitted]-induced norm analysis of discrete-time LTI systems for nonnegative input signals and its application to stability analysis of recurrent neural networks
- Research Article
47
- 10.1109/tnnls.2021.3105519
- Mar 1, 2023
- IEEE Transactions on Neural Networks and Learning Systems
The stability analysis of recurrent neural networks (RNNs) with multiple equilibria has received extensive interest since it is a prerequisite for successful applications of RNNs. With the increasing theoretical results on this topic, it is desirable to review the results for a systematical understanding of the state of the art. This article provides an overview of the stability results of RNNs with multiple equilibria including complete stability and multistability. First, preliminaries on the complete stability and multistability analysis of RNNs are introduced. Second, the complete stability results of RNNs are summarized. Third, the multistability results of various RNNs are reviewed in detail. Finally, future directions in these interesting topics are suggested.
- Research Article
37
- 10.1016/j.neunet.2017.09.013
- Oct 14, 2017
- Neural Networks
Multistability and instability analysis of recurrent neural networks with time-varying delays
- Research Article
32
- 10.1142/s025295990400041x
- Oct 1, 2004
- Chinese Annals of Mathematics
The authors investigate the existence and the global stability of periodic solutions for dynamical systems with periodic interconnections, inputs, and self-inhibitions. The model is very general, the conditions are quite weak, and the results obtained are universal.
- Conference Article
5
- 10.1109/cdc45484.2021.9683530
- Dec 14, 2021
This paper is concerned with the stability analysis of the recurrent neural networks (RNNs) by means of the integral quadratic constraint (IQC) framework. The rectified linear unit (ReLU) is typically employed as the activation function of the RNN, and the ReLU has specific nonnegativity properties regarding its input and output signals. Therefore, it is effective if we can derive IQC-based stability conditions with multipliers taking care of such nonnegativity properties. However, such nonnegativity (linear) properties are hardly captured by the existing multipliers defined on the positive semidefinite cone. To get around this difficulty, we loosen the standard positive semidefinite cone to the copositive cone, and employ copositive multipliers to capture the nonnegativity properties. We show that, within the framework of the IQC, we can employ copositive multipliers (or their inner approximation) together with existing multipliers such as Zames-Falb multipliers and polytopic bounding multipliers, and this directly enables us to ensure that the introduction of the copositive multipliers leads to better (no more conservative) results. We finally illustrate the effectiveness of the IQC-based stability conditions with the copositive multipliers by numerical examples.
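The cone relaxation at the heart of this approach is easy to see on a toy example. The sketch below (a standard 2x2 example, chosen here for illustration and not taken from the paper) shows a matrix that is copositive, so it respects the nonnegativity of ReLU signals, yet is indefinite and hence invisible to multipliers restricted to the positive semidefinite cone.

```python
import numpy as np

# M is copositive: x^T M x = 2*x1*x2 >= 0 whenever x >= 0 ...
M = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# ... yet indefinite, so it lies outside the positive semidefinite cone
print("eigenvalues:", np.linalg.eigvalsh(M))          # [-1, 1] -> not PSD

# numerical evidence of copositivity: sample the nonnegative quadrant
xs = np.random.default_rng(2).uniform(0.0, 1.0, size=(100_000, 2))
quad = np.einsum('ni,ij,nj->n', xs, M, xs)
print("min x^T M x over sampled x >= 0:", quad.min()) # stays >= 0
```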
- Research Article
5
- 10.1038/s41598-022-11032-y
- May 12, 2022
- Scientific Reports
Histamine is a neurotransmitter that modulates neuronal activity and regulates various brain functions. Histamine H3 receptor (H3R) antagonists/inverse agonists enhance its release in most brain regions, including the cerebral cortex, which improves learning and memory and exerts an antiepileptic effect. However, the mechanism underlying the effect of H3R antagonists/inverse agonists on cortical neuronal activity in vivo remains unclear. Here, we show the mechanism by which pitolisant, an H3R antagonist/inverse agonist, influenced perirhinal cortex (PRh) activity at the individual-neuron and neuronal-population levels. We monitored neuronal activity in the PRh of freely moving mice using in vivo Ca2+ imaging through a miniaturized one-photon microscope. Pitolisant increased the activity of some PRh neurons while decreasing the activity of others, without affecting the mean activity across neurons. Moreover, it increased the number of neuron pairs with synchronous activity in excitatory-responsive neuronal populations. Furthermore, machine learning analysis revealed that pitolisant altered the neuronal population activity. The changes in population activity depended on the neurons that were excited and inhibited by pitolisant treatment. These findings indicate that pitolisant influences the activity of a subset of PRh neurons by increasing synchronous activity and modifying population activity.
- Conference Article
1
- 10.1109/biocas.2018.8584741
- Oct 1, 2018
Estimating the current memory capacity of a neural-network-based recognition system is critical for maximally using the available capacity to memorize new inputs without exceeding its limit (catastrophic forgetting). In this paper, we propose a dynamic approach to monitoring a network's memory capacity. Prior works in this area have presented static expressions dependent on the neuron count N, forcing one to assume worst-case input characteristics for bias and correlation when setting the capacity of the network. Instead, our technique operates simultaneously with the learning of a Hopfield network and produces a capacity estimate based on the patterns that were actually stored. By continuously updating the crosstalk associated with the stored patterns, our model guards the network against overwriting its memory traces and exceeding its capacity. We designed a fingerprint recognition system based on our dynamic estimation technique. In experiments using NIST Special Database 10, the system achieves 2.7 to 8x larger memory capacity compared to baseline systems using static capacity estimates.
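The crosstalk quantity such an estimator tracks is standard Hopfield theory. A minimal sketch follows, assuming Hebbian storage of random bipolar patterns; the sizes are illustrative, not those of the fingerprint system.

```python
import numpy as np

rng = np.random.default_rng(3)
N, P = 100, 10                       # neurons, stored patterns (assumptions)
xi = rng.choice([-1, 1], size=(P, N))

W = (xi.T @ xi) / N                  # Hebbian weight matrix
np.fill_diagonal(W, 0.0)

nu = 0                               # recall pattern 0
h = W @ xi[nu]                       # local field with pattern nu presented
signal = (1 - 1 / N) * xi[nu]        # contribution of pattern nu itself
crosstalk = h - signal               # interference from the other patterns

# recall degrades once crosstalk flips local-field signs; tracking such
# flips is what lets a dynamic estimator monitor the remaining capacity
errors = int(np.sum(np.sign(h) != xi[nu]))
print("max |crosstalk|:", np.abs(crosstalk).max())
print("bit errors on one-step recall:", errors)
```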
- Research Article
55
- 10.1002/cne.1047
- Jun 8, 2001
- Journal of Comparative Neurology
Brain structures that can generate epileptiform activity possess excitatory interconnections among principal cells and a subset of these neurons that can be spontaneously active ("pacemaker" cells). We describe electrophysiological evidence for excitatory interactions among rat subicular neurons. Subiculum was isolated from presubiculum, CA1, and entorhinal cortex in ventral horizontal slices. Nominally zero magnesium perfusate, picrotoxin (100 microM), or NMDA (20 microM) was used to induce spontaneous firing in subicular neurons. Synchronous population activity and the spread of population events from one end of subiculum to the other in isolated subicular subslices indicate that subicular pyramidal neurons are coupled together by excitatory synapses. Both electrophysiological classes of subicular pyramidal cells (bursting and regular spiking) exhibited synchronous activity, indicating that both cell classes are targets of local excitatory inputs. Burst firing neurons were active in the absence of synchronous activity in field recordings, indicating that these cells may serve as pacemaker neurons for the generation of epileptiform activity in subiculum. Epileptiform events could originate at either proximal or distal segments of the subiculum from ventral horizontal slices. In some slices, events originated in both proximal and distal locations and propagated to the other location. Finally, propagation was supported over axonal paths through the cell layer and in the apical dendritic zone. We conclude that subicular burst firing and regular spiking neurons are coupled by means of glutamatergic synapses. These connections may serve to distribute activity driven by topographically organized inputs and to synchronize subicular cell activity.
- Research Article
8
- 10.1155/2010/191546
- Jan 1, 2010
- Journal of Inequalities and Applications
In this paper, the exponential stability analysis problem is considered for a class of recurrent neural networks (RNNs) with random delay and Markovian switching. The evolution of the delay is modeled by a continuous-time homogeneous Markov process with a finite number of states. The main purpose of this paper is to establish easily verifiable conditions under which the random delayed recurrent neural network with Markovian switching is exponentially stable. The analysis is based on the Lyapunov-Krasovskii functional and stochastic analysis approach, and the conditions are expressed in terms of linear matrix inequalities, which can be readily checked by using some standard numerical packages such as the Matlab LMI Toolbox. A numerical example is exploited to show the usefulness of the derived LMI-based stability conditions.
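As a much-simplified illustration of the LMI machinery (the paper's actual conditions additionally encode the random delay and the Markovian switching), the sketch below checks a basic Lyapunov inequality A^T P + P A < 0 with P > 0, using CVXPY in place of the Matlab LMI Toolbox; the test matrix is an assumption.

```python
import numpy as np
import cvxpy as cp

A = np.array([[-2.0, 1.0],
              [0.5, -3.0]])          # stable test matrix (assumption)
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                 # P > 0
               A.T @ P + P @ A << -eps * np.eye(n)]  # Lyapunov decrease
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

print("LMI feasible (stability certified):", prob.status == cp.OPTIMAL)
```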
- Research Article
17
- 10.1364/ol.472267
- Oct 18, 2022
- Optics Letters
In this work, we analyze different types of recurrent neural networks (RNNs) operating under several different parameters to best model the nonlinear optical dynamics of pulse propagation. We studied the propagation of picosecond and femtosecond pulses under distinct initial conditions through 13 m of a highly nonlinear fiber and demonstrated two RNNs achieving error metrics such as a normalized root mean squared error (NRMSE) as low as 9%. These results were further extended to a dataset outside the initial pulse conditions used in the RNN training, where the best-proposed network still achieved an NRMSE below 14%. We believe this study can contribute to a better understanding of building RNNs for modeling nonlinear optical pulse propagation and of how peak power and nonlinearity affect the prediction error.
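The headline metric is easy to reproduce. One common definition of NRMSE (RMSE normalized by the target's peak-to-peak range) is sketched below; the paper may normalize differently, and the signals here are placeholders.

```python
import numpy as np

def nrmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """RMSE normalized by the target's peak-to-peak range."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (y_true.max() - y_true.min())

# placeholder pulse and a noisy "prediction" standing in for an RNN output
t = np.linspace(0.0, 1.0, 500)
pulse = np.exp(-((t - 0.5) / 0.05) ** 2)
pred = pulse + 0.02 * np.random.default_rng(4).standard_normal(t.size)
print(f"NRMSE = {100 * nrmse(pulse, pred):.1f}%")
```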
- Research Article
12
- 10.1016/j.tcs.2003.09.006
- Sep 28, 2003
- Theoretical Computer Science
Global exponential convergence of recurrent neural networks with variable delays