- Research Article
- 10.1162/neco.a.1505
- Mar 17, 2026
- Neural Computation
- Pascal J Sager + 4 more
We introduce the cooperative network architecture (CNA), a model that represents sensory signals using structured, recurrently connected networks of neurons, termed "nets." Nets are dynamically assembled from overlapping net fragments, which are learned based on statistical regularities in sensory input. This architecture offers robustness to noise and deformation, as well as generalization to out-of-distribution data, addressing challenges in current vision systems from a novel perspective. We demonstrate that net fragments can be learned without supervision and flexibly recombined to encode novel patterns, enabling figure completion and resilience to noise. Our findings establish CNA as a promising paradigm for developing neural representations that integrate local feature processing with global structure formation, providing a foundation for future research on invariant object recognition.
- Research Article
- 10.1162/neco.a.1502
- Mar 17, 2026
- Neural Computation
- Dmitri Rachkovskij + 4 more
This article introduces a family of multiclass linear perceptron classifiers with a multiplicative margin mechanism (MMPerc), as an alternative to standard margin-free and additive margin perceptrons. The multiplicative formulation enforces classification confidence by requiring the true class score to exceed that of competing classes by a specified fraction of itself rather than by a fixed additive threshold. This avoids dependence on score magnitudes arising from varied norms of data and class weight vectors. We propose several architectural and algorithmic variants of MMPerc, derive associated loss functions and mistake bounds for both linearly separable and nonseparable data, and analyze key design considerations, including bias, margin threshold selection, and training modes. Extensive experiments on synthetic and real data sets show that MMPerc classifiers typically outperform the standard perceptron, as well as classic baselines such as support vector machines and ridge classifiers. Owing to their simplicity, minimalistic design, and computational efficiency, MMPerc classifiers are promising candidates for conventional machine learning tasks, linear evaluation of deep neural networks, integration with hyperdimensional computing and vector symbolic architecture representations, and deployment in resource-constrained applications.
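The multiplicative margin test can be written down in a few lines. The sketch below is one plausible reading of the rule, not the authors' implementation: the true-class score must exceed the best rival score by a fraction `theta` of itself, and failures trigger a standard multiclass perceptron update (the data, `theta`, and the exact form of the margin test are illustrative assumptions).

```python
import numpy as np

def train_mmperc(X, y, n_classes, theta=0.1, epochs=20, seed=0):
    """Multiclass perceptron with a multiplicative margin (a hypothetical
    sketch of the MMPerc idea; the margin test is an assumed reading)."""
    rng = np.random.default_rng(seed)
    W = np.zeros((n_classes, X.shape[1]))
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            s = W @ X[i]
            c = int(y[i])
            s_masked = s.copy()
            s_masked[c] = -np.inf                 # exclude the true class
            rival = int(np.argmax(s_masked))      # strongest competing class
            # multiplicative margin: the true score must beat the rival
            # by a fraction theta of itself, not by a fixed constant
            if s[c] <= s_masked[rival] + theta * abs(s[c]):
                W[c] += X[i]                      # pull true class toward x
                W[rival] -= X[i]                  # push the rival away from x
    return W
```

Because the required margin scales with the score itself, the test is insensitive to the overall magnitude of the weight and data vectors, which is the property the abstract highlights.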
- Research Article
- 10.1162/neco.a.1503
- Mar 17, 2026
- Neural Computation
- Takashi Kanamaru + 1 more
FORCE learning is a method for training recurrent neural networks (RNNs) to generate various types of complex dynamics, and it is closely related to reservoir computing (RC). RC uses an RNN called a reservoir whose synaptic weights are randomly generated and fixed during learning; FORCE learning, by contrast, also trains the synaptic weights inside the reservoir network. Although FORCE learning is an effective tool for machine learning, the possibility of its realization in the brain is rarely discussed. Here, to examine this possibility, FORCE learning is applied to an excitatory-inhibitory (E-I) network that models the cerebral cortex. A multimodule network composed of excitatory and inhibitory neurons is defined, and a readout is placed outside it, as in a conventional reservoir. The output of this network is computed at the readout as a linear combination of the filtered average firing rates of the excitatory neurons in the modules. Feedback connections of random strength, which return the output to the excitatory neurons in the modules, are also added. This network typically shows transitive chaotic synchronization, in which the set of synchronized modules is rearranged chaotically and intermittently. Under such conditions, our E-I network is trained with FORCE learning to generate, for simplicity, sinusoidal periodic signals. When the E-I activity is adjusted, the efficiency of FORCE learning is maximized at an optimal E-I balance near the edge of chaos. These results imply that the cooperation of excitatory and inhibitory neurons is required for FORCE learning to work effectively in the brain, although conventional reservoir networks do not distinguish these two kinds of neurons.
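For readers unfamiliar with FORCE learning, its core mechanism — recursive least squares (RLS) applied online to a readout with output feedback — can be sketched as follows. This is the classic rate-based formulation rather than the paper's E-I spiking-module network; the network size, gain, and sinusoidal target are illustrative assumptions.

```python
import numpy as np

def force_train_readout(T=2000, N=200, dt=0.1, alpha=1.0, seed=1):
    """Minimal FORCE sketch: RLS trains the readout of a random rate
    reservoir, with output feedback, to track a sinusoid."""
    rng = np.random.default_rng(seed)
    g = 1.5                                         # gain in the chaotic regime
    J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # fixed recurrent weights
    wf = rng.uniform(-1, 1, N)                      # random feedback weights
    w = np.zeros(N)                                 # readout weights (trained)
    P = np.eye(N) / alpha                           # RLS inverse correlation
    x = 0.5 * rng.standard_normal(N)
    errs = []
    for t in range(T):
        f = np.sin(2 * np.pi * t * dt / 10.0)       # target signal
        r = np.tanh(x)
        z = w @ r                                   # readout output
        x += dt * (-x + J @ r + wf * z)             # rate dynamics + feedback
        Pr = P @ r                                  # RLS update of the readout
        k = Pr / (1.0 + r @ Pr)
        P -= np.outer(k, Pr)
        w -= (z - f) * k
        errs.append(abs(z - f))
    return np.mean(errs[:200]), np.mean(errs[-200:])
```

The paper's contribution is to replace this generic reservoir with a cortical E-I network and to ask where in E-I parameter space such training works best.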
- Research Article
- 10.1162/neco.a.1504
- Mar 17, 2026
- Neural Computation
- Matteo Saponati + 1 more
Anticipating future events is a key computational task for neuronal networks. Experimental evidence suggests that reliable temporal sequences in neural activity play a functional role in the association and anticipation of events in time. However, how neurons can differentiate and anticipate multiple spike sequences remains largely unknown. We implement a learning rule based on predictive processing, where neurons exclusively fire for the initial, unpredictable inputs in a spiking sequence, leading to an efficient representation with reduced postsynaptic firing. Combining this mechanism with inhibitory feedback leads to sparse firing in the network, enabling neurons to selectively anticipate different sequences in the input. We demonstrate that intermediate levels of inhibition are optimal to decorrelate neuronal activity and to enable the prediction of future inputs. Notably, each sequence is independently encoded in the sparse, anticipatory firing of the network. Overall, our results demonstrate that the interplay of self-supervised predictive learning rules and inhibitory feedback enables fast and efficient classification of different input sequences.
- Research Article
- 10.1162/neco.a.1507
- Mar 17, 2026
- Neural Computation
- Richard W Prager + 1 more
This article explores how simple reinforcement learning algorithms might be implemented by the anatomy of the cerebellum. In doing this, we highlight which anatomical and physiological details are most important for assessing algorithmic fit, and we discuss which algorithm components are easiest to accommodate in a neural system. We describe hypothetical cerebellar implementations of four reinforcement learning algorithms and discuss the anatomical plausibility of the various components required. We show how one of the algorithms can learn to generate short sequences of actions without continuous information on the resulting changes to the environment. We finish with simulations that illustrate how the algorithms learn to balance an inverted pendulum, commonly known as the cart-pole problem. We highlight two physiological features, reward signals and the combination of information across time, that indicate that some sort of reinforcement learning adaptation may be taking place. We also describe why a commonly used algorithmic feature, the eligibility trace, is particularly problematic to implement in known neural anatomy.
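Algorithmically, the eligibility trace singled out here is just a decaying memory of recently visited states that lets a delayed reward signal assign credit backward in time. A minimal tabular TD(λ) sketch on the classic five-state random walk (an illustrative environment, not the paper's cart-pole setup) shows the trace at work:

```python
import numpy as np

def td_lambda_random_walk(episodes=5000, alpha=0.05, lam=0.8, gamma=1.0, seed=0):
    """Tabular TD(lambda) with an accumulating eligibility trace on the
    5-state random walk: start in the middle, step left/right at random,
    reward 1 only on exiting to the right. True values are (s+1)/6."""
    rng = np.random.default_rng(seed)
    V = np.zeros(5)
    for _ in range(episodes):
        e = np.zeros(5)                          # eligibility trace
        s = 2                                    # start in the middle
        while True:
            s2 = s + (1 if rng.random() < 0.5 else -1)
            r = 1.0 if s2 == 5 else 0.0
            v_next = 0.0 if s2 in (-1, 5) else V[s2]
            delta = r + gamma * v_next - V[s]    # TD error (reward-like signal)
            e[s] += 1.0                          # mark the just-visited state
            V += alpha * delta * e               # credit all recent states
            e *= gamma * lam                     # decay the trace
            if s2 in (-1, 5):
                break
            s = s2
    return V
```

The trace `e` is exactly the component the article argues is hard to map onto known cerebellar anatomy: every state (or synapse) must keep a quantity that decays on its own timescale and gates a later, globally broadcast error.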
- Research Article
- 10.1162/neco.a.1506
- Mar 5, 2026
- Neural Computation
- Peter Neri
Sensory operators are classically modeled using small circuits involving canonical computations, such as energy extraction and gain control. Notwithstanding their utility, circuit models do not provide a unified framework encompassing the variety of effects observed experimentally. We develop a novel, alternative framework that recasts sensory operators in the language of intrinsic geometry. We start from a plausible representation of perceptual processes that is akin to measuring distances over a sensory manifold. We show that this representation is sufficiently expressive to capture a wide range of empirical effects associated with elementary sensory computations. The resulting geometrical framework offers a new perspective on state-of-the-art empirical descriptors of sensory behavior, such as first-order and second-order perceptual kernels. For example, it relates these descriptors to notions of flatness and curvature in perceptual space.
- Research Article
- 10.1162/neco.a.1501
- Mar 5, 2026
- Neural Computation
- Xiang Zhang + 4 more
Quantifying similarity between population spike patterns is essential for understanding how neural dynamics encode information. Traditional approaches, which combine kernel smoothing, principal component analysis, and canonical correlation analysis (CCA), have limitations: smoothing kernel bandwidths are often empirically chosen, CCA maximizes alignment between patterns without considering the variance explained within patterns, and baseline correlations from stochastic spiking are rarely corrected. We introduce ReBaCCA-ss (relevance-balanced continuum correlation analysis with smoothing and surrogating), a novel framework that addresses these challenges through three innovations: (1) balancing alignment and variance explanation via continuum canonical correlation, (2) correcting for noise using surrogate spike trains, and (3) selecting the optimal kernel bandwidth by maximizing the difference between true and surrogate correlations. ReBaCCA-ss is validated on both simulated data and hippocampal recordings from rats performing a delayed nonmatch-to-sample task. It reliably identifies spatiotemporal similarities between spike patterns. Combined with multidimensional scaling, ReBaCCA-ss reveals structured neural representations across trials, events, sessions, and animals, offering a powerful tool for neural population analysis.
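The third innovation — choosing the bandwidth that maximizes the gap between true and surrogate correlations — can be illustrated in simplified form. The sketch below uses a single pair of spike trains, Gaussian smoothing, plain Pearson correlation, and bin-shuffled surrogates; the full method's continuum CCA and its specific surrogate construction are not reproduced here.

```python
import numpy as np

def select_bandwidth(a, b, bandwidths, n_surr=20, seed=0):
    """Pick the smoothing bandwidth maximizing the gap between the true
    correlation of two binned spike trains and their shuffled-surrogate
    correlation (a simplified stand-in for ReBaCCA-ss's criterion)."""
    rng = np.random.default_rng(seed)

    def smooth(x, bw):
        t = np.arange(-3 * bw, 3 * bw + 1)
        k = np.exp(-t**2 / (2.0 * bw**2))        # Gaussian kernel
        return np.convolve(x, k / k.sum(), mode="same")

    best_bw, best_gap = None, -np.inf
    for bw in bandwidths:
        true_r = np.corrcoef(smooth(a, bw), smooth(b, bw))[0, 1]
        surr_r = np.mean([
            np.corrcoef(smooth(a, bw), smooth(rng.permutation(b), bw))[0, 1]
            for _ in range(n_surr)
        ])
        gap = true_r - surr_r                    # correlation beyond chance
        if gap > best_gap:
            best_bw, best_gap = bw, gap
    return best_bw, best_gap
```

Too narrow a kernel misses jittered coincidences; too wide a kernel inflates the surrogate correlation as well, so the true-minus-surrogate gap peaks at an intermediate bandwidth.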
- Research Article
- 10.1162/neco.a.1492
- Feb 2, 2026
- Neural Computation
- Juliana Londono Alvarez + 2 more
Neural circuits in the brain perform a variety of essential functions, including input classification, pattern completion, and the generation of rhythms and oscillations that support functions such as breathing and locomotion. There is also substantial evidence that the brain encodes memories and processes information via sequences of neural activity. Traditionally, rhythmic activity and pattern generation have been modeled using coupled oscillators, whereas input classification and pattern completion have been modeled using attractor neural networks. Here, we present a theoretical framework that demonstrates how attractor-based networks can also generate diverse rhythmic patterns, such as those of central pattern generator circuits (CPGs). Additionally, we propose a mechanism for transitioning between patterns. Specifically, we construct a network that can step through a sequence of five different quadruped gaits. It is composed of two dynamically distinct modules: a "counter" network that can count the number of external inputs it receives via a sequence of fixed points and a locomotion network that encodes five different quadruped gaits as limit cycles. A sequence of locomotive gaits is obtained by connecting the counter network with the locomotion network. Specifically, we introduce a new architecture for layering networks that produces fusion attractors, binding pairs of attractors from individual layers. All of this is accomplished within a unified framework of attractor-based models using threshold-linear networks.
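The building blocks described here are threshold-linear networks, whose dynamics take the form dx/dt = -x + [Wx + b]_+. A minimal simulation of a combinatorial TLN (CTLN) on a directed 3-cycle — a standard example that produces a limit cycle, and a toy stand-in for the gait-encoding limit cycles above — might look like this (the parameter values are common CTLN defaults, not taken from the paper):

```python
import numpy as np

def simulate_ctln_cycle(T=4000, dt=0.01, eps=0.25, delta=0.5, theta=1.0):
    """Euler simulation of a 3-neuron CTLN on the directed cycle
    0 -> 1 -> 2 -> 0, which produces a limit cycle in which the peak of
    activity rotates around the cycle."""
    n = 3
    W = np.full((n, n), -1.0 - delta)            # default: strong inhibition
    np.fill_diagonal(W, 0.0)
    for j in range(n):                           # an edge j -> i weakens the
        W[(j + 1) % n, j] = -1.0 + eps           # inhibition onto i
    x = np.array([0.2, 0.0, 0.0])
    traj = np.empty((T, n))
    for t in range(T):
        # threshold-linear dynamics: dx/dt = -x + [Wx + theta]_+
        x = x + dt * (-x + np.maximum(0.0, W @ x + theta))
        traj[t] = x
    return traj
```

In the paper's construction, attractors like this one encode individual gaits, and a separate fixed-point "counter" module selects which limit cycle is expressed.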
- Research Article
- 10.1162/neco.a.1483
- Feb 2, 2026
- Neural Computation
- Faris B Rustom + 3 more
Object detection and recognition are fundamental functions that play a significant role in the success of species. Because the appearance of an object exhibits large variability, the brain has to group these different stimuli under the same object identity, a process of generalization. Does the process of generalization follow some general principles, or is it an ad hoc bag of tricks? The universal law of generalization (ULoG) provides evidence that generalization follows similar properties across a variety of species and tasks. Here, we tested the hypothesis derived from ULoG that the internal representations underlying generalization reflect the natural properties of object detection and recognition in our environment rather than the specifics of the system solving these problems. Neural networks with universal-approximation capability have been successful in many object detection and recognition tasks; however, how these networks reach their decisions remains opaque. To provide a strong test for ecological validity, we used natural camouflage, which is nature's test bed for object detection and recognition. We trained a deep neural network with natural images of "clear" and "camouflaged" animals and examined the emerging internal representations. We extended ULoG to a realistic learning regime, with multiple consequential stimuli, and developed two methods to determine category prototypes. Our results show that with a proper choice of category prototypes, the generalization functions are monotone decreasing, similar to the generalization functions of biological systems. Critically, we show that camouflaged inputs are not represented randomly but rather systematically appear at the tail of the monotone decreasing functions. 
Our results support the hypothesis that the internal representations underlying generalization in object detection and recognition are shaped mainly by the properties of the ecological environment, even though different biological and artificial systems may generate these internal representations through drastically different learning and adaptation processes. Furthermore, the extended version of ULoG provides a tool to analyze how the system organizes its internal representations during learning as well as how it makes its decisions.
- Research Article
- 10.1162/neco.a.1489
- Feb 2, 2026
- Neural Computation
- Ben Tsuda + 4 more
Neuromodulators are critical controllers of neural states, with dysfunctions linked to various neuropsychiatric disorders. Although many biological aspects of neuromodulation have been studied, the computational principles underlying how neuromodulation of distributed neural populations controls brain states remain unclear. In contrast to external contextual inputs, neuromodulation can act as a single scalar signal that is broadcast to a vast population of neurons. We model the modulation of synaptic weight in a recurrent neural network model and show that neuromodulators can dramatically alter the function of a network, even when highly simplified. We find that under structural constraints like those in brains, this provides a fundamental mechanism that can increase the computational capability and flexibility of a neural network. Diffuse synaptic weight modulation enables storage of multiple memories using a common set of synapses that are able to generate diverse, even diametrically opposed, behaviors. Our findings help explain how neuromodulators unlock specific behaviors by creating task-specific hyperchannels in neural activity space and motivate more flexible, compact, and capable machine learning architectures.
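The core idea — that a single scalar broadcast to all synapses can qualitatively change what a fixed network does — is easy to demonstrate in a toy rate network. Below, the same random weight matrix is scaled by a "neuromodulatory" factor g; small g yields quiescence, large g yields sustained activity. The network size, g values, and dynamics are illustrative assumptions, not the paper's model.

```python
import numpy as np

def run_modulated_rnn(g, T=500, N=100, dt=0.1, seed=3):
    """A scalar 'neuromodulatory' factor g multiplies every recurrent
    weight of a fixed random rate network; returns the final per-neuron
    RMS activity after T Euler steps."""
    rng = np.random.default_rng(seed)
    J = rng.standard_normal((N, N)) / np.sqrt(N)    # fixed synapses
    x = rng.standard_normal(N)
    for _ in range(T):
        x = x + dt * (-x + g * (J @ np.tanh(x)))    # g rescales all weights
    return np.linalg.norm(x) / np.sqrt(N)
```

With g below 1 the linearized dynamics are contracting and activity dies out; above 1 the same synapses support self-sustained dynamics, so the scalar alone switches the network between qualitatively different regimes.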