- Research Article
- 10.1007/s10827-025-00915-4
- Jan 15, 2026
- Journal of computational neuroscience
- Research Article
- 10.1007/s10827-025-00918-1
- Jan 15, 2026
- Journal of computational neuroscience
- Yunran Chen + 2 more
Understanding how neurons encode multiple simultaneous stimuli is a fundamental question in neuroscience. We previously introduced a novel theory of stochastic encoding patterns wherein a neuron's spiking activity dynamically switches among its constituent single-stimulus activity patterns when presented with multiple stimuli (Groh et al., 2024). Here, we present an enhanced, comprehensive statistical testing framework for such "multiplexing". As before, our approach evaluates whether dual-stimulus responses can be accounted for as mixtures of Poisson distributions related to single-stimulus benchmarks. Our enhanced framework improves upon previous methods in two key ways. First, it introduces a stronger set of foils for multiplexing, including an "overreaching" category that captures overdispersed activity patterns unrelated to the single-stimulus benchmarks, reducing false detection of multiplexing. Second, it detects continuous mixtures, potentially indicating faster fluctuations (i.e., at sub-trial timescales) that would previously have been overlooked. We use a Bayesian inference framework, taking the hypothesis with the highest posterior probability as the winner, and employ the predictive recursion marginal likelihood method for non-parametric estimation of the latent mixing distributions. Reanalysis of previous findings confirms the general observation of fluctuating activity and indicates that fluctuations may well occur on faster timescales than previously suggested. We further confirm that multiplexing is more prevalent for (a) combinations of face stimuli than for faces and non-face objects in the inferotemporal face patch system, and (b) distinct vs. fused objects in the primary visual cortex.
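The core model comparison described here, whether dual-stimulus spike counts are better explained by a single averaged Poisson or by a mixture of the single-stimulus benchmarks, can be sketched as a toy likelihood comparison. The rates, trial count, and 50/50 mixing weight below are illustrative inventions, not the paper's Bayesian framework:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
rate_a, rate_b = 5.0, 20.0        # hypothetical single-stimulus benchmark rates
n_trials = 200

# "Multiplexing" regime: each dual-stimulus trial draws from one benchmark
labels = rng.random(n_trials) < 0.5
counts = np.where(labels, rng.poisson(rate_a, n_trials),
                  rng.poisson(rate_b, n_trials))

def pois_logpmf(k, mu):
    """Poisson log-pmf, elementwise over integer counts k."""
    return np.array([x * math.log(mu) - mu - math.lgamma(x + 1.0) for x in k])

# Single averaged Poisson vs a 50/50 mixture of the two benchmarks
ll_avg = pois_logpmf(counts, (rate_a + rate_b) / 2.0).sum()
ll_mix = np.logaddexp(math.log(0.5) + pois_logpmf(counts, rate_a),
                      math.log(0.5) + pois_logpmf(counts, rate_b)).sum()
print(ll_mix > ll_avg)            # mixture should win on mixture-generated data
```

On switching (mixture) data the single averaged Poisson badly underfits the bimodal counts, so the mixture log-likelihood dominates; the paper's framework extends this idea with stronger foils and continuous mixing distributions.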
- Research Article
- 10.1007/s10827-025-00917-2
- Jan 6, 2026
- Journal of computational neuroscience
- Nils A Koch + 2 more
Depolarizations that occur after action potentials, known as afterdepolarization potentials or ADPs, are important for neuronal excitability and stimulus-evoked transient bursting. Slow inward and fast outward currents underlie the generation of such ADPs, with ADP amplitudes modulated by neuronal morphology. However, the relative contributions and roles of these slow inward and fast outward currents in ADP generation are poorly understood in the context of somatic and dendritic localization as well as of varied dendritic properties. Using a two-compartment Hodgkin-Huxley type model of cerebellar stellate cells, the role of somatic and dendritic compartmentalization of ADP-associated currents is investigated, revealing that dendritic (rather than somatic) slow inward and fast outward currents are the main contributors to ADPs and spike-adding during both brief step current and AMPA current input. Additionally, dendritic size and the passive properties of the dendrites were found to be key modulators of ADP amplitude. However, increasing magnitudes of NMDA current input resulted in nonmonotonic spike-adding in a manner dependent on dendritic Ca2+ influx and Ca2+-activated K+ currents, which was found to arise from tight regulation of stimulus-evoked transient bursting through positive feedback on action potential generation by dendritic Ca2+ and subsequent negative feedback through Ca2+-activated K+ currents. This novel mechanism of ADP and spike-adding regulation highlights the role of currents with slow timescales in ADPs, stimulus-evoked transient bursting, and neuronal excitability, with implications for Ca2+-dependent synaptic plasticity and neuromodulation.
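The influence of dendritic size on post-spike rebound can be illustrated with a purely passive two-compartment toy, not the paper's conductance-based model (all parameters are hypothetical, and the paper's mechanism additionally involves active slow inward and Ca2+-activated K+ currents). Here a somatic "spike" is emulated by a depolarizing pulse followed by a voltage reset, and the still-charged dendrite discharges back into the soma; in this passive sketch the larger-capacitance dendrite discharges more slowly and so sustains a larger rebound:

```python
import numpy as np

def adp_peak(c_dend, dt=0.01, t_end=60.0):
    """Peak somatic voltage after a reset, for a given dendritic capacitance."""
    e_leak, g_leak, g_couple, c_soma = -70.0, 0.1, 0.1, 1.0
    v_s = v_d = e_leak
    reset_step = int(12.0 / dt)
    peak = -np.inf
    for step in range(int(t_end / dt)):
        t = step * dt
        i_stim = 10.0 if 2.0 <= t < 12.0 else 0.0        # somatic drive
        dv_s = (-g_leak * (v_s - e_leak) + g_couple * (v_d - v_s) + i_stim) / c_soma
        dv_d = (-g_leak * (v_d - e_leak) + g_couple * (v_s - v_d)) / c_dend
        v_s += dt * dv_s
        v_d += dt * dv_d
        if step == reset_step:
            v_s = -80.0            # instantaneous post-spike reset (AHP stand-in)
        if step > reset_step:
            peak = max(peak, v_s)  # ADP-like rebound = post-spike somatic maximum
    return peak

adp_small = adp_peak(c_dend=1.0)
adp_large = adp_peak(c_dend=4.0)
print(adp_small > -70.0, adp_large > adp_small)
```

The rebound overshoots rest in both cases, and its amplitude changes with dendritic capacitance alone, which is one way to see why dendritic size and passive properties matter even before active currents are added.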
- Research Article
- 10.1007/s10827-025-00916-3
- Dec 1, 2025
- Journal of computational neuroscience
- Aleksandra Rybalko + 1 more
The paper addresses the problem of parameter estimation (or identification) in dynamical networks composed of an arbitrary number of FitzHugh-Nagumo neuron models with diffusive couplings between each other. It is assumed that only the membrane potential of each model is measured, while the other state variable and all derivatives remain unmeasured. Additionally, constant measurement errors in the membrane potential due to sensor imprecision are considered. To solve this problem, the original FitzHugh-Nagumo network is first transformed into a linear regression model, where the regressors are obtained by applying a filter-differentiator to specific combinations of the measured variables. Second, the speed-gradient method is applied to this linear model, leading to the design of an identification algorithm for the FitzHugh-Nagumo neural network. Sufficient conditions for the asymptotic convergence of the parameter estimates to their true values are derived for the proposed algorithm. Parameter estimation for several networks is demonstrated through computer simulation, and the results confirm that the sufficient conditions are satisfied in the numerical experiments conducted. Furthermore, the algorithm's capabilities for adjusting the identification accuracy and time are investigated. The proposed approach has potential applications in nervous system modeling, particularly in the context of human brain modeling. For instance, EEG signals could serve as the measured variables of the network, enabling the integration of mathematical neural models with empirical data collected by neurophysiologists.
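The filter-differentiator idea, turning a differential equation into a linear regression without numerically differentiating the measurement, can be sketched on a scalar toy system. This is a stand-in illustration: the plant dv/dt = -theta*v + u, the filter cutoff, and the use of batch least squares in place of the speed-gradient algorithm are all assumptions:

```python
import numpy as np

dt, t_end, lam = 1e-3, 20.0, 10.0
theta_true = 2.0                      # parameter to recover (assumed value)
v = vf = uf = 0.0
regs, targs = [], []
for k in range(int(t_end / dt)):
    u = np.sin(k * dt)                # known excitation input
    # Passing v and u through a first-order low-pass filter lets us use
    # lam*(v - vf) as a filtered derivative, with no raw differentiation.
    y = lam * (v - vf)
    regs.append(-vf)                  # regression model: y - uf = theta * (-vf)
    targs.append(y - uf)
    vf += dt * lam * (v - vf)
    uf += dt * lam * (u - uf)
    v += dt * (-theta_true * v + u)   # "measured" plant, simulated here
A = np.array(regs).reshape(-1, 1)
theta_hat = np.linalg.lstsq(A, np.array(targs), rcond=None)[0][0]
print(abs(theta_hat - theta_true) < 0.1)
```

The filtered identity is exact in continuous time with zero initial conditions, so the estimate converges to the true parameter up to discretization error; the paper's algorithm does the analogous construction for the full FitzHugh-Nagumo network with an online speed-gradient update.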
- Research Article
- 10.1007/s10827-025-00912-7
- Dec 1, 2025
- Journal of computational neuroscience
- Xuelin Huang + 3 more
Transcranial alternating current stimulation (tACS) enables non-invasive modulation of brain activity, holding promise for cognitive research and clinical applications. However, it remains unclear how the spiking activity of cortical neurons is modulated by specific electric field (E-field) distributions. Here, we use a multi-scale computational framework that integrates an anatomically accurate head model with morphologically realistic neuron models to simulate the responses of layer 5 pyramidal cells (L5 PCs) to the E-fields generated by conventional M1-SO tACS. Neural entrainment is quantified by calculating the phase-locking value (PLV) and preferred phase (PPh). We find that the tACS-induced E-field distributions across the L5 surface of interest (SOI) are heterogeneous, resulting in diverse neural entrainment of L5 PCs owing to their sensitivity to the direction and intensity of the E-fields. Both PLV and PPh follow a smooth cosine dependency on the E-field polar angle, with minimal sensitivity to the azimuthal angle. PLV exhibits a positive linear dependence on the E-field intensity. However, PPh either increases or decreases logarithmically with E-field intensity, depending on the E-field direction. Correlation analysis reveals that neural entrainment can be largely explained by the normal component of the E-field or by somatic polarization, especially for E-fields directed outward relative to the cortical surface. Moreover, cell morphology plays a crucial role in shaping the diverse neural entrainment to tACS. Although the uniform E-field extracted at the soma provides a good approximation for modeling tACS at the cellular level, the non-uniform E-field distribution should be considered for investigating more accurate cellular mechanisms of tACS.
These findings highlight the crucial roles of heterogeneous E-field distributions, cell morphology, and E-field non-uniformity in modulating neuronal spiking activity by tACS in realistic neuroanatomy, deepening our understanding of the cellular mechanism underlying tACS. Our work bridges macroscopic brain stimulation with microscopic neural activity, which benefits the development of brain models and derived clinical applications relying on model-driven brain stimulation with tACS-induced weak E-fields.
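The entrainment metrics used above have standard circular-statistics definitions: the PLV is the length of the mean unit phasor of spike phases relative to the stimulation cycle, and the PPh is its angle. A minimal sketch (the 10 Hz frequency, jitter level, and spike trains are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
f = 10.0  # stimulation frequency in Hz (hypothetical tACS setting)

def plv_and_pph(spike_times):
    """Phase-locking value (PLV) and preferred phase (PPh) of spike times
    relative to a sinusoidal field of frequency f."""
    phases = (2.0 * np.pi * f * spike_times) % (2.0 * np.pi)
    z = np.exp(1j * phases).mean()    # resultant vector of unit phasors
    return np.abs(z), np.angle(z)

# Spikes jittered around one field phase lock strongly; uniform spikes do not.
locked = np.arange(100) / f + rng.normal(0.0, 0.002, 100)
uniform = rng.uniform(0.0, 10.0, 100)
plv_locked, pph_locked = plv_and_pph(locked)
plv_uniform, _ = plv_and_pph(uniform)
print(plv_locked > 0.9, plv_uniform < 0.5)
```

A PLV near 1 indicates tight locking to one phase of the field, while spikes spread uniformly over the cycle give a PLV near 1/sqrt(N); the paper maps how these quantities vary with E-field direction and intensity across the cortical surface.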
- Research Article
- 10.1007/s10827-025-00905-6
- Aug 5, 2025
- Journal of computational neuroscience
- Tangsen Huang + 2 more
- Research Article
- 10.1007/s10827-025-00910-9
- Jul 29, 2025
- Journal of computational neuroscience
- Xuewen Shen + 2 more
Understanding the mechanism of accumulating evidence over time in deliberate decision-making is crucial for both humans and animals. While numerous models have been proposed over the past few decades to characterize the temporal weighting of evidence, the dynamical principles governing the underlying neural circuits remain elusive. In this study, we propose a solvable rank-1 neural circuit model to address this problem. We first derive an analytical expression for the integration kernel, a key quantity describing how sensory evidence at different time points is weighted in the final decision. Based on this expression, we illustrate how the dynamics introduced in the auxiliary space (namely, a subspace orthogonal to the decision variable) modulate the flow field of the decision variable through a gain modulation mechanism, producing a variety of integration kernel types, including not only monotonic ones (recency and primacy) but also non-monotonic ones (convex and concave). Furthermore, we quantitatively validate that integration kernel shapes can be understood from dynamical landscapes and that non-monotonic temporal weighting reflects topological transitions in the landscape. Additionally, we show that training networks with non-optimal weighting leads to convergence toward optimal weighting. Finally, we demonstrate that rank-1 connectivity induces symmetric competition that generates a pitchfork bifurcation. In summary, we present a solvable neural circuit model that unifies diverse types of temporal weighting, providing an intriguing link between non-monotonic integration kernel structure and topological transitions of the dynamical landscape.
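For intuition about monotonic kernels only: when the decision-variable dynamics reduce to a linear one-dimensional equation ds/dt = (g - 1)s + evidence(t), a unit evidence pulse at time t contributes exp((g - 1)(T - t)) to the decision at time T, so the kernel's monotonicity flips with the effective recurrent gain g. This is a one-dimensional caricature with assumed gains, not the paper's full rank-1 model (which also produces non-monotonic kernels via auxiliary-space dynamics):

```python
import numpy as np

def integration_kernel(g, t_pulse, t_final=1.0):
    """Weight of a unit evidence pulse at t_pulse on the decision at t_final,
    for the linear 1-D reduction ds/dt = (g - 1) * s + evidence(t)."""
    return np.exp((g - 1.0) * (t_final - t_pulse))

times = np.linspace(0.0, 1.0, 5)
recency = integration_kernel(0.2, times)   # g < 1: leaky, late evidence wins
primacy = integration_kernel(1.8, times)   # g > 1: unstable, early evidence wins
print(np.all(np.diff(recency) > 0), np.all(np.diff(primacy) < 0))
```

With g < 1 the integrator leaks and later evidence is weighted more (recency); with g > 1 early evidence is amplified (primacy). Convex and concave kernels require the time-varying gain modulation from the auxiliary subspace described in the abstract.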
- Research Article
- 10.1007/s10827-025-00906-5
- Jul 1, 2025
- Journal of computational neuroscience
- Ibeachu P Chinagorom + 1 more
Computational Neuroscience (CN) is an interdisciplinary field that combines neuroscience, mathematics, artificial intelligence, theoretical models, and experimental data to understand how the brain works. It unravels the intricacies of the nervous system, contributing significantly to cognitive science, neuroengineering, and machine learning. Despite CN's importance in artificial intelligence and medical research, the field remains underrepresented in Africa's academic landscape. This paper explores the current state of CN in Africa, the challenges hindering its integration, the emerging opportunities, and evidence-based strategies for curriculum implementation. Capacity building, interdisciplinary collaboration, open science, theoretical neuroscience, development of local capacity, and leveraging international partnerships are emphasized.
- Research Article
- 10.1007/s10827-025-00904-7
- Apr 23, 2025
- Journal of computational neuroscience
- Claudio Di Geronimo + 2 more
We present a mean field model for a spiking neural network of excitatory and inhibitory neurons with fast GABA and nonlinear slow GABA inhibitory conductance-based synapses. This mean field model can predict the spontaneous and evoked response of the network to external stimulation in asynchronous irregular regimes. The model displays theta oscillations for sufficiently strong GABA conductance. Optogenetic activation of interneurons and an increase of GABA conductance had opposite effects on the emergence of gamma oscillations in the model. In agreement with direct numerical simulations of neural networks and with experimental data, the mean field model predicts that an increase of GABA conductance reduces gamma oscillations. Furthermore, the slow dynamics of GABA synapses regulates the appearance and duration of transient gamma oscillations, namely gamma bursts, in the mean field model. Finally, we show that nonlinear GABA synapses play a major role in stabilizing the network against the emergence of epileptic seizures.
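In even the simplest linearized excitatory-inhibitory rate sketch, the synaptic weights set the frequency of the emergent oscillation, which can be checked against the eigenvalues of the Jacobian. The weights and time constant below are illustrative choices tuned to land in the gamma band, unrelated to the paper's conductance-based mean field:

```python
import numpy as np

# Linearized E-I rate model: dE/dt = ((w_ee-1)E - w_ei*I)/tau,
#                            dI/dt = (w_ie*E - I)/tau  (all values assumed)
tau, w_ee, w_ei, w_ie = 10.0, 2.0, 4.0, 4.0   # tau in ms
dt, t_end = 0.01, 500.0
e, i = 1.0, 0.0
crossings = []
for step in range(int(t_end / dt)):
    de = ((w_ee - 1.0) * e - w_ei * i) / tau
    di = (w_ie * e - i) / tau
    e, i, prev_e = e + dt * de, i + dt * di, e
    if prev_e < 0.0 <= e:                      # upward zero crossing of E rate
        crossings.append(step * dt)
freq_hz = 1000.0 / np.mean(np.diff(crossings))
print(round(freq_hz))   # near the analytic sqrt(det(J))/tau/(2*pi) prediction
```

With these weights the Jacobian has purely imaginary eigenvalues at sqrt(15)/tau rad/ms, i.e. roughly 62 Hz, and the measured zero-crossing frequency matches; the paper's nonlinear mean field additionally captures how GABA conductance gates, damps, and terminates such gamma episodes.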
- Research Article
- 10.1007/s10827-025-00902-9
- Apr 17, 2025
- Journal of computational neuroscience
- Anna Jing + 3 more
Synaptic and neural properties can change during periods of auditory deprivation. These changes may disrupt the computations that neurons perform. In the brainstem of chickens, auditory deprivation can lead to changes in the size and biophysics of the axon initial segment (AIS) of neurons in the sound source localization circuit, a phenomenon known as AIS plasticity. Individuals who use cochlear implants (CIs) experience periods of hearing loss, so we ask whether AIS plasticity in neurons of the medial superior olive (MSO), a key stage of sound location processing, would impact time difference sensitivity when hearing through cochlear implants. The biophysical changes that we implement in our model of AIS plasticity include enlargement of the AIS and replacement of the low-threshold potassium conductance with the more slowly activated M-type potassium conductance. AIS plasticity has been observed to have a homeostatic effect with respect to excitability. In our model, AIS plasticity has the additional effect of converting MSO neurons from a phasic firing type to a tonic firing type. Phasic firing is known to have greater temporal sensitivity to coincident inputs. Consistent with this, we find that AIS plasticity degrades time difference sensitivity in the auditory-deprived MSO neuron model across a range of stimulus parameters. Our study illustrates a possible mechanism of cellular plasticity in a non-peripheral stage of neural processing that could impose barriers to sound source localization by bilateral cochlear implant users.
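The link between phasic firing and temporal sensitivity can be caricatured with a toy coincidence detector: a slope-sensitive (phasic-like) readout tolerates a much narrower range of input time differences than an amplitude-sensitive (tonic-like) integrator. The alpha-function EPSP shape and the 1.5x thresholds are illustrative assumptions, not the paper's biophysical MSO model:

```python
import numpy as np

tau = 0.5                                    # EPSP time constant, ms (assumed)
t = np.arange(0.0, 5.0, 0.001)               # time grid, ms

def epsp(t0):
    """Alpha-function EPSP starting at t0, peak amplitude 1 at t0 + tau."""
    s = np.clip(t - t0, 0.0, None)
    return (s / tau) * np.exp(1.0 - s / tau)

def responds(delta, use_slope):
    """Does a detector fire for two inputs separated by delta ms?
    It must exceed 1.5x its response to a lone input."""
    summed = epsp(0.0) + epsp(delta)
    sig = np.gradient(summed, t) if use_slope else summed
    single = np.gradient(epsp(0.0), t) if use_slope else epsp(0.0)
    return sig.max() >= 1.5 * single.max()

deltas = np.arange(0.0, 2.0, 0.05)
width_tonic = deltas[[responds(d, False) for d in deltas]].max()
width_phasic = deltas[[responds(d, True) for d in deltas]].max()
print(width_phasic < width_tonic)            # phasic-like tuning is sharper
```

The rising slope of an EPSP is much briefer than the EPSP itself, so a slope-sensitive detector only fires for near-coincident inputs, which is the intuition behind the abstract's finding that the phasic-to-tonic conversion degrades time difference sensitivity.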