  • Research Article
  • 10.1162/neco_a_01762
Rapid Memory Encoding in a Spiking Hippocampus Circuit Model.
  • Jun 17, 2025
  • Neural computation
  • Jiashuo Wang + 4 more

Memory is a complex process in the brain that involves the encoding, consolidation, and retrieval of previously experienced stimuli. The brain is capable of rapidly forming memories of sensory input. However, applying the memory system to real-world data poses challenges in practical implementation. This article demonstrates that by integrating a sparse spike pattern encoding scheme, a population tempotron, and various spike-timing-dependent plasticity (STDP) learning rules, supported by bounded weights and biological mechanisms, it is possible to rapidly form stable neural assemblies of external sensory inputs in a spiking neural circuit model inspired by the hippocampal structure. The model employs a neural ensemble module and competitive learning strategies that mimic the pattern separation mechanism of the hippocampal dentate gyrus (DG) area to achieve nonoverlapping sparse coding. It also uses a population tempotron and NMDA (N-methyl-D-aspartate)-mediated STDP to construct associative and episodic memories, analogous to the CA3 and CA1 regions. These memories are represented by strongly connected neural assemblies formed within just a few trials. Overall, this model offers a robust computational framework for rapid memory formation throughout the brain-wide memory process.
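The bounded-weight STDP rules the abstract mentions can be illustrated with a minimal sketch. This is not the authors' implementation; the amplitudes, time constants, and hard bounds below are illustrative assumptions for a standard pair-based STDP update:

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP with hard weight bounds.

    dt = t_post - t_pre (ms): positive dt potentiates, negative dt depresses.
    """
    if dt >= 0:
        dw = a_plus * np.exp(-dt / tau_plus)    # pre before post -> LTP
    else:
        dw = -a_minus * np.exp(dt / tau_minus)  # post before pre -> LTD
    return float(np.clip(w + dw, w_min, w_max))
```

Hard clipping is the simplest way to keep weights bounded; soft (multiplicative) bounds are a common alternative.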

  • Research Article
  • Cited by: 1
  • 10.1162/neco_a_01764
A Survey on Artificial Neural Networks in Human-Robot Interaction.
  • Jun 17, 2025
  • Neural computation
  • Aleksandra Świetlicka

Artificial neural networks (ANNs) have shown great potential in enhancing human-robot interaction (HRI). ANNs are computational models inspired by the structure and function of biological neural networks in the brain, which can learn from examples and generalize to new situations. ANNs can be used to enable robots to interact with humans in a more natural and intuitive way by allowing them to recognize human gestures and expressions, understand natural language, and adapt to the environment. ANNs can also be used to improve robot autonomy, allowing robots to learn from their interactions with humans and to make more informed decisions. However, there are also challenges to using ANNs in HRI, including the need for large amounts of training data, issues with explainability, and the potential for bias. This review explores the current state of research on ANNs in HRI, highlighting both the opportunities and challenges of this approach and discussing potential directions for future research. The AI contribution involves applying ANNs to various aspects of HRI, while the application in engineering involves using ANNs to develop more interactive and intuitive robotic systems.

  • Research Article
  • 10.1162/neco_a_01760
Decision Threshold Learning in the Basal Ganglia for Multiple Alternatives.
  • Jun 17, 2025
  • Neural computation
  • Thom Griffith + 2 more

In recent years, researchers have integrated the historically separate reinforcement learning (RL) and evidence-accumulation-to-bound approaches to decision modeling. A particular outcome of these efforts has been the RL-DDM, a model that combines value learning through reinforcement with a diffusion decision model (DDM). While the RL-DDM is a conceptually elegant extension of the original DDM, it faces a similar problem to the DDM in that it does not scale well to decisions with more than two options. Furthermore, in its current form, the RL-DDM lacks flexibility when it comes to adapting to rapid, context-cued changes in the reward environment. The question of how best to extend combined RL and DDM models so they can handle multiple choices remains open. Moreover, it is currently unclear how these algorithmic solutions should map onto neurophysiological processes in the brain, particularly in relation to so-called go/no-go-type models of decision making in the basal ganglia. Here, we propose a solution that addresses these issues by combining a previously proposed decision model based on the multichoice sequential probability ratio test (MSPRT) with a dual-pathway model of decision threshold learning in the basal ganglia region of the brain. Our model learns decision thresholds to optimize the trade-off between time cost and the cost of errors, and so efficiently allocates the amount of time for decision deliberation. In addition, the model is context dependent and hence flexible to changes in the speed-accuracy trade-off (SAT) imposed by the environment. Furthermore, the model reproduces the magnitude effect, a phenomenon seen experimentally in value-based decisions, and is agnostic to the type of evidence, so it can be used on perceptual decisions, value-based decisions, and other types of modeled evidence.
The broader significance of the model is that it contributes to the active research area of how learning systems interact by linking the previously separate models of RL-DDM to dopaminergic models of motivation and risk taking in the basal ganglia, as well as scaling to multiple alternatives.
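The MSPRT component the abstract builds on can be sketched in a few lines: accumulate per-hypothesis log-likelihoods and stop as soon as one posterior probability crosses a threshold. This is the generic textbook MSPRT, not the authors' basal ganglia implementation; the threshold value and likelihood model are illustrative assumptions:

```python
import numpy as np

def msprt(log_lik_stream, threshold=0.95):
    """Multihypothesis SPRT: accumulate per-hypothesis log-likelihoods and
    stop when one posterior probability crosses the decision threshold.

    log_lik_stream: iterable of length-K arrays (per-sample log-likelihoods).
    Returns (chosen_hypothesis, n_samples_used), or (None, n) if undecided.
    """
    L, n = None, 0
    for n, ll in enumerate(log_lik_stream, start=1):
        ll = np.asarray(ll, dtype=float)
        L = ll if L is None else L + ll
        posterior = np.exp(L - np.logaddexp.reduce(L))  # softmax over hypotheses
        k = int(np.argmax(posterior))
        if posterior[k] >= threshold:
            return k, n
    return None, n
```

In the paper's framing, the learned decision thresholds would set this stopping criterion; here it is a fixed constant.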

  • Open Access Icon
  • Research Article
  • 10.1162/neco_a_01763
Excitation-Inhibition Balance Controls Synchronization in a Simple Model of Coupled Phase Oscillators.
  • Jun 17, 2025
  • Neural computation
  • Satoshi Kuroki + 1 more

Collective neuronal activity in the brain synchronizes during rest and desynchronizes during active behaviors, influencing cognitive processes such as memory consolidation, knowledge abstraction, and creative thinking. These states involve significant modulation of inhibition, which alters the excitation-inhibition (EI) balance of synaptic inputs. However, the influence of the EI balance on collective neuronal oscillation remains only partially understood. In this study, we introduce the EI-Kuramoto model, a modified version of the Kuramoto model in which oscillators are categorized into excitatory and inhibitory groups with four distinct interaction types: excitatory-excitatory, excitatory-inhibitory, inhibitory-excitatory, and inhibitory-inhibitory. Numerical simulations identify three dynamic states (synchronized, bistable, and desynchronized) that can be controlled by adjusting the strength of the four interaction types. Theoretical analysis further demonstrates that the balance among these interactions plays a critical role in determining the dynamic states. This study provides valuable insights into the role of EI balance in synchronizing coupled oscillators and neurons.
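A minimal numerical sketch of the kind of two-population Kuramoto setup described above, assuming Euler integration; the block coupling strengths, population sizes, and frequency distribution are illustrative choices, not parameters from the paper:

```python
import numpy as np

def ei_kuramoto(n_e=40, n_i=10, k_ee=2.0, k_ei=-1.0, k_ie=2.0, k_ii=-1.0,
                dt=0.01, steps=2000, seed=0):
    """Euler simulation of a two-population (E/I) Kuramoto model.

    k_xy scales the influence of population y on population x.
    Returns the final Kuramoto order parameter r in [0, 1].
    """
    rng = np.random.default_rng(seed)
    n = n_e + n_i
    theta = rng.uniform(0, 2 * np.pi, n)          # initial phases
    omega = rng.normal(0.0, 0.5, n)               # natural frequencies
    # Block coupling matrix: K[i, j] scales the drive from oscillator j to i.
    K = np.empty((n, n))
    K[:n_e, :n_e] = k_ee; K[:n_e, n_e:] = k_ei
    K[n_e:, :n_e] = k_ie; K[n_e:, n_e:] = k_ii
    for _ in range(steps):
        diff = theta[None, :] - theta[:, None]    # theta_j - theta_i
        theta += dt * (omega + (K * np.sin(diff)).sum(axis=1) / n)
    return float(np.abs(np.exp(1j * theta).mean()))
```

With purely excitatory coupling the order parameter approaches 1; removing all coupling leaves the phases unlocked and r small, mirroring the synchronized and desynchronized regimes.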

  • Research Article
  • 10.1162/neco_a_01755
Memory States From Almost Nothing: Representing and Computing in a Nonassociative Algebra.
  • May 14, 2025
  • Neural computation
  • Stefan Reimann

This letter presents a nonassociative algebraic framework for the representation and computation of information items in high-dimensional space. This framework is consistent with the principles of spatial computing and with the empirical findings in cognitive science about memory. Computations are performed through a process of multiplication-like binding and nonassociative interference-like bundling. Models that rely on associative bundling typically lose order information, which necessitates the use of auxiliary order structures, such as position markers, to represent sequential information that is important for cognitive tasks. In contrast, the nonassociative bundling proposed allows the construction of sparse representations of arbitrarily long sequences that maintain their temporal structure across arbitrary lengths. In this operation, noise is a constituent element of the representation of order information rather than a means of obscuring it. The nonassociative nature of the proposed framework results in the representation of a single sequence by two distinct states. The L-state, generated through left-associative bundling, continuously updates and emphasizes a recency effect, while the R-state, formed through right-associative bundling, encodes finite sequences or chunks, capturing a primacy effect. The construction of these states may be associated with activity in the prefrontal cortex in relation to short-term memory and hippocampal encoding in long-term memory, respectively. The accuracy of retrieval is contingent on a decision-making process that is based on the mutual information between the memory states and the cue. The model is able to replicate the serial position curve, which reflects the empirical recency and primacy effects observed in cognitive experiments.

  • Research Article
  • Cited by: 1
  • 10.1162/neco_a_01754
Neural Code Translation With LIF Neuron Microcircuits.
  • May 14, 2025
  • Neural computation
  • Ville Karlsson + 1 more

Spiking neural networks (SNNs) provide an energy-efficient alternative to traditional artificial neural networks, leveraging diverse neural encoding schemes such as rate, time-to-first-spike (TTFS), and population-based binary codes. Each encoding method offers distinct advantages: TTFS enables rapid and precise transmission with minimal energy use, rate encoding provides robust signal representation, and binary population encoding aligns well with digital hardware implementations. This letter introduces a set of neural microcircuits based on leaky integrate-and-fire neurons that enable translation between these encoding schemes. We propose two applications showcasing the utility of these microcircuits. First, we demonstrate a number comparison operation that significantly reduces spike transmission by switching from rate to TTFS encoding. Second, we present a high-bandwidth neural transmitter capable of encoding and transmitting binary population-encoded data through a single axon and reconstructing it at the target site. Additionally, we conduct a detailed analysis of these microcircuits, providing quantitative metrics to assess their efficiency in terms of neuron count, synaptic complexity, spike overhead, and runtime. Our findings highlight the potential of LIF neuron microcircuits in computational neuroscience and neuromorphic computing, offering a pathway to more interpretable and efficient SNN designs.
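The contrast between rate and TTFS codes described above can be sketched as follows; the window length, maximum rate, and linear TTFS mapping are illustrative assumptions, not the paper's microcircuit scheme:

```python
import numpy as np

def rate_encode(x, t_window=100, rate_max=100.0, dt=1.0, seed=0):
    """Encode x in [0, 1] as a Poisson spike train; returns spike times (ms)."""
    rng = np.random.default_rng(seed)
    p = x * rate_max * dt / 1000.0        # spike probability per time step
    steps = int(t_window / dt)
    return np.nonzero(rng.random(steps) < p)[0] * dt

def ttfs_encode(x, t_window=100):
    """Encode x in [0, 1] as a single first-spike time: larger x fires earlier."""
    return np.array([(1.0 - x) * t_window])
```

A TTFS code conveys the value with a single precisely timed spike, while a rate code needs many spikes over the window, which is exactly the spike-count saving exploited by the rate-to-TTFS translation microcircuit.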

  • Research Article
  • 10.1162/neco_a_01756
Low-Rank, High-Order Tensor Completion via t-Product-Induced Tucker (tTucker) Decomposition.
  • May 14, 2025
  • Neural computation
  • Yaodong Li + 4 more

Recently, tensor singular value decomposition (t-SVD)-based methods were proposed to solve the low-rank tensor completion (LRTC) problem and have achieved unprecedented success on image and video inpainting tasks. However, the t-SVD is limited to processing third-order tensors. When faced with higher-order tensors, it reshapes them into third-order tensors, destroying interdimensional correlations. To address this limitation, this letter introduces a t-product-induced Tucker decomposition (tTucker) model that replaces the mode product in the Tucker decomposition with the t-product, jointly extending the ideas of t-SVD and high-order SVD. This letter defines the rank of the tTucker decomposition and presents an LRTC model that minimizes the induced Schatten-p norm. An efficient alternating direction method of multipliers (ADMM) algorithm is developed to optimize the proposed LRTC model, and its effectiveness is demonstrated through experiments on both synthetic and real data sets, showcasing excellent performance.
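The t-product that the tTucker model builds on can be computed efficiently in the Fourier domain. A minimal sketch for third-order tensors (the paper's contribution generalizes beyond this, so this shows only the standard building block):

```python
import numpy as np

def t_product(A, B):
    """t-product of third-order tensors via the FFT along the third mode.

    A: (n1, n2, n3), B: (n2, n4, n3) -> C: (n1, n4, n3). Equivalent to a
    circular convolution over the third dimension, computed as independent
    frontal-slice matrix products in the Fourier domain.
    """
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)   # per-slice matmuls over k
    return np.real(np.fft.ifft(Cf, axis=2))
```

The identity element of the t-product is the tensor whose first frontal slice is the identity matrix and whose remaining slices are zero, which gives a quick sanity check.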

  • Research Article
  • 10.1162/neco_a_01757
Dynamics of Continuous Attractor Neural Networks With Spike Frequency Adaptation.
  • May 14, 2025
  • Neural computation
  • Yujun Li + 2 more

Attractor neural network theory posits that neural information is stored as stationary states of a dynamical system formed by a large number of interconnected neurons. The attractor property empowers a neural system to encode information robustly, but it also makes rapid updating of network states difficult, which can impair information update and search in the brain. To overcome this difficulty, one solution is to include adaptation in the attractor network dynamics, whereby the adaptation serves as a slow negative-feedback mechanism that destabilizes what would otherwise be permanently stable states. In this way, the neural system can, on one hand, represent information reliably using attractor states and, on the other hand, perform computations whenever rapid state updating is required. Previous studies have shown that continuous attractor neural networks with adaptation (A-CANNs) exhibit rich dynamical behaviors accounting for various brain functions. In this review, we present a comprehensive view of the rich and diverse dynamics of A-CANNs. Moreover, we provide a unified mathematical framework for understanding these different dynamical behaviors and briefly discuss their biological implications.

  • Open Access Icon
  • Research Article
  • Cited by: 2
  • 10.1162/neco_a_01758
Dynamics and Bifurcation Structure of a Mean-Field Model of Adaptive Exponential Integrate-and-Fire Networks.
  • May 14, 2025
  • Neural computation
  • Lionel Kusch + 3 more

The study of brain activity spans diverse scales and levels of description and requires the development of computational models alongside experimental investigations to explore integration across scales. The high dimensionality of spiking networks presents challenges for understanding their dynamics. To tackle this, a mean-field formulation offers a potential approach for dimensionality reduction while retaining essential elements. Here, we focus on a previously developed mean-field model of adaptive exponential integrate-and-fire (AdEx) networks used in various research works. We observe qualitative similarities in the bifurcation structure but quantitative differences in mean firing rates between the mean-field model and AdEx spiking network simulations. Although the mean-field model does not accurately predict phase shifts during transients and oscillatory input, it generally captures the qualitative dynamics of the spiking network's response to both constant and varying inputs. Finally, we offer an overview of the dynamical properties of the AdEx mean-field model (AdExMF) to assist future users in interpreting their simulation results.

  • Open Access Icon
  • Research Article
  • Cited by: 3
  • 10.1162/neco_a_01752
Elucidating the Theoretical Underpinnings of Surrogate Gradient Learning in Spiking Neural Networks.
  • Apr 17, 2025
  • Neural computation
  • Julia Gygax + 1 more

Training spiking neural networks to approximate universal functions is essential for studying information processing in the brain and for neuromorphic computing. Yet the binary nature of spikes poses a challenge for direct gradient-based training. Surrogate gradients have been empirically successful in circumventing this problem, but their theoretical foundation remains elusive. Here, we investigate the relation of surrogate gradients to two theoretically well-founded approaches. On the one hand, we consider smoothed probabilistic models, which, due to the lack of support for automatic differentiation, are impractical for training multilayer spiking neural networks but provide derivatives equivalent to surrogate gradients for single neurons. On the other hand, we investigate stochastic automatic differentiation, which is compatible with discrete randomness but has not yet been used to train spiking neural networks. We find that the latter gives surrogate gradients a theoretical basis in stochastic spiking neural networks, where the surrogate derivative matches the derivative of the neuronal escape noise function. This finding supports the effectiveness of surrogate gradients in practice and suggests their suitability for stochastic spiking neural networks. However, surrogate gradients are generally not gradients of a surrogate loss despite their relation to stochastic automatic differentiation. Nevertheless, we empirically confirm the effectiveness of surrogate gradients in stochastic multilayer spiking neural networks and discuss their relation to deterministic networks as a special case. Our work gives theoretical support to surrogate gradients and the choice of a suitable surrogate derivative in stochastic spiking neural networks.
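The core idea of a surrogate gradient (a binary spike in the forward pass paired with a smooth stand-in derivative in the backward pass) can be sketched as follows; the sigmoid-derivative surrogate and its steepness beta are one common, illustrative choice among the escape-noise-related derivatives the paper analyzes:

```python
import numpy as np

def spike_forward(v, v_th=1.0):
    """Forward pass: binary spike when the membrane potential crosses threshold."""
    return np.asarray(v >= v_th, dtype=float)

def spike_surrogate_grad(v, v_th=1.0, beta=5.0):
    """Backward pass: replace the Heaviside's zero/undefined derivative with
    the derivative of a steep sigmoid centered at threshold."""
    s = 1.0 / (1.0 + np.exp(-beta * (v - v_th)))
    return beta * s * (1.0 - s)
```

In a stochastic network with a sigmoidal escape-noise function, this surrogate derivative is exactly the derivative of the neuron's firing probability, which is the correspondence the paper uses to ground surrogate gradients theoretically.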