  • New
  • Research Article
  • 10.1088/2634-4386/ae4a47
The more the merrier: running multiple neuromorphic components on-chip for robotic control
  • Apr 8, 2026
  • Neuromorphic Computing and Engineering
  • Evan Eames + 11 more

  • New
  • Open Access
  • Research Article
  • 10.1088/2634-4386/ae5128
Line-based event preprocessing: towards low-energy neuromorphic computer vision
  • Apr 7, 2026
  • Neuromorphic Computing and Engineering
  • Amélie Gruel + 3 more

Abstract: Neuromorphic vision has made significant progress in recent years, thanks to the natural match between spiking neural networks and event data in terms of biological inspiration, energy savings, latency and memory use for dynamic visual data processing. However, optimising its energy requirements remains a challenge within the community, especially for embedded applications. One solution may reside in preprocessing events to reduce the data quantity, thus lowering the energy cost on neuromorphic hardware, which is proportional to the number of synaptic operations. To this end, we extend an end-to-end neuromorphic line detection mechanism to introduce line-based event data preprocessing. Our results on three benchmark event-based datasets demonstrate that preprocessing leads to an advantageous trade-off between energy consumption and classification performance. Depending on the line-based preprocessing strategy and the complexity of the classification task, we show that classification accuracy can be maintained or even increased while the theoretical energy consumption is significantly reduced. Our approach systematically leads to a significant improvement in neuromorphic classification efficiency, laying the groundwork for a more frugal neuromorphic computer vision based on event preprocessing.
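The energy argument above (cost proportional to the number of synaptic operations) can be made concrete with a back-of-the-envelope sketch. The snippet below is illustrative only: the per-operation energy, fan-out, and the stand-in "line filter" are hypothetical placeholders, not values or methods from the paper.

```python
# A minimal sketch of the energy argument: on neuromorphic hardware the
# (theoretical) energy cost scales with the number of synaptic operations
# (SOPs), so any preprocessing that removes events reduces energy roughly
# in proportion. All numbers and names here are illustrative.
import numpy as np

E_PER_SOP = 77e-12  # J per synaptic operation; hypothetical hardware constant


def estimated_energy(n_events: int, fan_out: int) -> float:
    """Theoretical energy: each input event triggers `fan_out` SOPs."""
    return n_events * fan_out * E_PER_SOP


rng = np.random.default_rng(0)
raw_events = rng.integers(0, 128, size=(100_000, 2))  # (x, y) event addresses

# Stand-in for line-based preprocessing: keep only events that fall on a
# small set of detected lines (here, fake "lines" = a handful of rows).
line_rows = {3, 40, 77, 101}
kept = raw_events[np.isin(raw_events[:, 1], list(line_rows))]

for name, ev in [("raw", raw_events), ("line-filtered", kept)]:
    print(f"{name}: {len(ev)} events, "
          f"~{estimated_energy(len(ev), fan_out=100) * 1e6:.1f} µJ")
```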

  • New
  • Open Access
  • Research Article
  • 10.1088/2634-4386/ae573b
Bruno: Backpropagation Running Undersampled for Novel device Optimization
  • Mar 25, 2026
  • Neuromorphic Computing and Engineering
  • Luca Fehlings + 5 more

Abstract: Recent efforts to improve the efficiency of neuromorphic and machine learning systems have centred on the development of specialised hardware for neural networks. These systems typically feature architectures that go beyond the von Neumann model employed in general-purpose hardware such as GPUs, offering potential efficiency and performance gains. However, neural networks developed for specialised hardware must take its specific characteristics into account. This requires novel training algorithms and accurate hardware models, since such hardware cannot be abstracted as a general-purpose computing platform. In this work, we present a bottom-up approach to training neural networks for hardware-based spiking neurons and synapses, built using ferroelectric capacitors (FeCAPs) and resistive random-access memories (RRAMs), respectively. Unlike the common approach of designing hardware to fit abstract neuron or synapse models, we start with compact models of the physical devices to model the computational primitives. Based on these models, we develop a training algorithm, BRUNO, that can reliably train the networks even under hardware limitations such as stochasticity or low bit precision. We analyse BRUNO and compare it with backpropagation through time on different spatio-temporal datasets: first, a music prediction dataset, where a network of ferroelectric leaky integrate-and-fire (FeLIF) neurons predicts at each time step the next musical note to be played; second, classification of Braille letters using a network of quantised RRAM synapses and FeLIF neurons, whose performance is then compared with that of networks composed of LIF neurons. Experimental results show the potential advantages of BRUNO in reducing the time and memory required to detect spatio-temporal patterns with quantised synapses.
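The abstract names the ingredients any hardware-aware trainer has to reconcile: non-differentiable spikes, quantised synaptic weights, and device non-idealities. The sketch below illustrates the generic recipe (straight-through quantisation plus a surrogate gradient through a single LIF threshold); it is not the BRUNO algorithm itself, and all constants (`bits`, `beta`, the learning rate) are hypothetical.

```python
# Generic sketch of training "through" quantised, hardware-like synapses:
# forward pass uses RRAM-style quantised weights, backward pass uses a
# smooth surrogate for the spike nonlinearity. NOT the BRUNO algorithm;
# all constants are hypothetical.
import numpy as np

def quantise(w, bits=3, w_max=1.0):
    """Fake RRAM write: clip and snap weights to 2**bits - 1 levels."""
    levels = 2 ** bits - 1
    return np.round(np.clip(w, -w_max, w_max) / w_max * levels) / levels * w_max

def surrogate_grad(v, v_th=1.0, beta=10.0):
    """Smooth pseudo-derivative of the (non-differentiable) spike function."""
    return 1.0 / (1.0 + beta * np.abs(v - v_th)) ** 2

rng = np.random.default_rng(1)
w = rng.normal(0, 0.3, size=10)      # analogue "shadow" weights
x = rng.random(10)                   # one input spike pattern
target = 1.0                         # we want the neuron to spike

for step in range(50):
    wq = quantise(w)                 # forward pass sees quantised weights
    v = wq @ x                       # membrane potential (single step)
    spike = float(v >= 1.0)
    # straight-through estimator: gradient flows through the surrogate
    grad = (spike - target) * surrogate_grad(v) * x
    w -= 0.5 * grad                  # update the analogue shadow weights

print("final quantised weights:", quantise(w))
```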

  • Open Access
  • Research Article
  • 10.1088/2634-4386/ae5088
More than MACs: exploring the role of neuromorphic engineering in the age of LLMs
  • Mar 11, 2026
  • Neuromorphic Computing and Engineering
  • Wilkie Olin-Ammentorp

Abstract: The introduction of large language models has significantly expanded global demand for computing; addressing this growing demand requires novel approaches that introduce new capabilities while addressing extant needs. Although inspiration from biological systems served as the foundation on which modern artificial intelligence (AI) was developed, many modern advances have been made without clear parallels to biological computing. As a result, the ability of techniques inspired by 'natural intelligence' (NI) to inflect modern AI systems may be questioned. However, by analyzing the remaining disparities between AI and NI, we argue that further biological inspiration can contribute towards expanding the capabilities of artificial systems, enabling them to succeed in real-world environments and adapt to niche applications. To elucidate which NI mechanisms can contribute toward this goal, we review and compare elements of biological and artificial computing systems, emphasizing areas of NI that have not yet been effectively captured by AI. We then suggest areas of opportunity for NI-inspired mechanisms that can inflect AI hardware and software.

  • Open Access
  • Research Article
  • 10.1088/2634-4386/ae46d4
A scalable hybrid training approach for recurrent spiking neural networks
  • Mar 1, 2026
  • Neuromorphic Computing and Engineering
  • Maximilian Baronig + 3 more

Abstract: Recurrent spiking neural networks (RSNNs) can be implemented very efficiently in neuromorphic systems. Nevertheless, training of these models with powerful gradient-based learning algorithms is mostly performed on standard digital hardware using backpropagation through time (BPTT). However, BPTT has substantial limitations: it does not permit online training, and its memory consumption scales linearly with the number of computation steps. In contrast, learning methods based on forward propagation of gradients operate online, with memory consumption independent of the number of time steps. These methods enable SNNs to learn from continuous, infinite-length input sequences, and approximate forward propagation algorithms have been developed that can be implemented on neuromorphic hardware. Yet slow execution speed on conventional hardware, as well as inferior performance, has hindered their widespread adoption. In this work, we introduce HYbrid PRopagation (HYPR), which combines the efficiency of parallelization with approximate online forward learning. Our algorithm yields high-throughput online learning through parallelization, paired with constant, i.e. sequence-length-independent, memory demands. HYPR enables parallelization of the parameter update computation over subsequences for RSNNs consisting of almost arbitrary non-linear spiking neuron models. We apply HYPR to networks of spiking neurons with oscillatory subthreshold dynamics and find that this type of neuron model is particularly well trainable by HYPR, resulting in an unprecedentedly low task performance gap between approximate forward gradient learning and BPTT.
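The memory contrast drawn above (BPTT stores the whole state trajectory; forward methods carry a fixed-size trace) can be seen in a few lines. The sketch below trains a single leaky integrator online with an eligibility trace, so memory stays O(1) in sequence length; it illustrates the generic forward-gradient idea HYPR builds on, not HYPR's parallelised subsequence update, and all constants are made up.

```python
# Online forward-gradient learning for one leaky integrator: the
# eligibility trace carries dv/dw forward in time, so no trajectory
# needs to be stored (unlike BPTT). Illustrative constants only.
import numpy as np

alpha, lr, T = 0.9, 0.01, 1000
rng = np.random.default_rng(2)
x = rng.random(T)
# Target: a leaky filter of the input, so the ideal weight is 0.5.
y_target = np.convolve(x, alpha ** np.arange(20), mode="full")[:T] * 0.5

w, v, trace, sq_err = 0.0, 0.0, 0.0, 0.0
for t in range(T):                     # online loop: O(1) memory in T
    v = alpha * v + w * x[t]           # leaky-integrator state
    trace = alpha * trace + x[t]       # eligibility trace: dv/dw, run forward
    err = v - y_target[t]
    w -= lr * err * trace              # update from local, online quantities
    sq_err += err ** 2

print(f"learned w={w:.3f} (ideal 0.5), mean squared error={sq_err / T:.4f}")
```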

  • Open Access
  • Discussion
  • 10.1088/2634-4386/ae4d7f
Toward a multiscale theoretical framework for organic memristive materials
  • Mar 1, 2026
  • Neuromorphic Computing and Engineering
  • Salvador Cardona-Serra

  • Open Access
  • Research Article
  • 10.1088/2634-4386/ae4cc5
Symbol detection in a MIMO wireless communication system using a FeFET-coupled CMOS ring oscillator array
  • Mar 1, 2026
  • Neuromorphic Computing and Engineering
  • Harsh Kumar Jadia + 13 more

Abstract: Symbol decoding in multiple-input multiple-output (MIMO) wireless communication systems requires fast, energy-efficient computing hardware deployable at the edge. The brute-force, exact maximum likelihood (ML) decoder, solved on conventional classical digital hardware, has exponential time complexity; approximate classical solvers implemented on the same hardware have at best polynomial time complexity. In this article, we design an alternative ring-oscillator-based coupled oscillator array (also known as an oscillatory neural network (ONN)) to act as an oscillator Ising machine (OIM) and heuristically solve the ML-based MIMO detection problem. Complementary metal oxide semiconductor (CMOS) technology is used to design the ring oscillators, and ferroelectric field-effect transistor (FeFET) technology is chosen as the non-volatile memory (NVM) coupling element (X) between the oscillators in this CMOS + X OIM design. To this end, we experimentally report a wide linear range of conductance variation (1 µS to 60 µS) under programming voltage pulses in an HfO₂-based FeFET device fabricated at the 28 nm high-k/metal-gate (HKMG) CMOS technology node. We incorporate this conductance modulation characteristic into SPICE simulations of ring oscillators connected in an all-to-all fashion through a crossbar array of these FeFET devices, and show that this range of conductance variation is sufficient to obtain the best OIM performance, making the FeFET a suitable NVM device for this application. Our SPICE simulations show no significant performance drop for symbol detection up to MIMO array sizes of 90 transmitting and 90 receiving antennas. Combined with an analytical treatment using the Kuramoto model of coupled oscillators, our simulations predict that this classical analog OIM, if implemented experimentally, will offer logarithmic scaling of computation time with MIMO size, a substantial speed improvement over exact and approximate classical solvers run on conventional digital hardware.
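The computing principle — phases of coupled oscillators settling into a binary configuration that minimises an Ising energy — can be simulated directly from the Kuramoto model mentioned above. The toy below uses a random symmetric coupling matrix `J` rather than an actual ML-to-Ising mapping for MIMO, and a second-harmonic injection term to binarise the phases; it sketches the OIM idea, not the FeFET-coupled CMOS circuit.

```python
# A toy oscillator Ising machine in the Kuramoto picture: phases evolve
# under coupling J plus a second-harmonic injection term that pushes each
# phase to 0 or pi, i.e. to a binary Ising spin. Random problem instance;
# illustrative only.
import numpy as np

rng = np.random.default_rng(3)
n = 8
J = rng.choice([-1.0, 1.0], size=(n, n))
J = np.triu(J, 1)
J = J + J.T                                       # symmetric, zero diagonal

theta = rng.uniform(0, 2 * np.pi, n)
dt, K, S = 0.05, 1.0, 2.0
for _ in range(2000):
    # Kuramoto coupling + sub-harmonic injection locking (binarisation)
    dtheta = -K * (J * np.sin(theta[:, None] - theta[None, :])).sum(1) \
             - S * np.sin(2 * theta)
    theta = (theta + dt * dtheta) % (2 * np.pi)

spins = np.sign(np.cos(theta))                    # read out binary phases
energy = -0.5 * spins @ J @ spins                 # Ising energy of the state
print("spins:", spins.astype(int), " Ising energy:", energy)
```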

  • Open Access
  • Research Article
  • 10.1088/2634-4386/ae4535
A flexible framework for structural plasticity in GPU-accelerated sparse spiking neural networks
  • Mar 1, 2026
  • Neuromorphic Computing and Engineering
  • James C Knight + 2 more

  • Research Article
  • 10.1088/2634-4386/ae5380
Dynamical systems foundations for neuromorphic intelligence
  • Mar 1, 2026
  • Neuromorphic Computing and Engineering
  • Marcel Van Gerven

  • Open Access
  • Research Article
  • 10.1088/2634-4386/ae4f1e
Hyperdimensional decoding of spiking neural networks
  • Mar 1, 2026
  • Neuromorphic Computing and Engineering
  • Cedrick Kinavuidi + 2 more

Abstract: This work presents a novel spiking neural network (SNN) decoding method that combines SNNs with hyperdimensional computing (HDC). The method is designed to achieve high accuracy, high noise robustness, low inference latency and low energy consumption. Compared to analogous architectures decoded with existing approaches, the SNN-HDC model generally attains better classification accuracy, lower inference latency, lower spike counts and lower estimated energy consumption on multiple test cases from the literature. It achieved spike-count reductions of 1.74× to 3.36× on the DvsGesture dataset and 1.36× to 2.70× on the SL-Animals-DVS dataset, and estimated energy consumption reductions of 1.24× to 3.67× and 1.38× to 2.27× on the same datasets, respectively. The proposed decoding method also enables detection of classes unseen during training: on the DvsGesture dataset, the SNN-HDC model can detect 100% of samples from an unseen/untrained class. These findings suggest the proposed decoding method is a compelling alternative to both rate and latency decoding.
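As a rough illustration of what an HDC readout of SNN outputs can look like: fixed random hypervectors per output neuron, spike-count-weighted bundling per trial, class prototypes from bundled training encodings, and a similarity threshold to flag unseen classes. The encoding, dimensionality `D`, and threshold below are hypothetical stand-ins; the paper's actual scheme may differ.

```python
# Generic HDC decoding of spike counts: each output neuron gets a fixed
# random bipolar hypervector, a trial is encoded as the sign of the
# spike-count-weighted bundle, and classification is by cosine similarity
# to class prototypes. Illustrative only, with fake "SNN outputs".
import numpy as np

D, n_neurons, n_classes = 4096, 32, 4
rng = np.random.default_rng(4)
neuron_hv = rng.choice([-1, 1], size=(n_neurons, D))   # fixed codebook

def encode(spike_counts):
    """Bundle neuron hypervectors weighted by spike counts, then binarise."""
    return np.sign(spike_counts @ neuron_hv)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Fake "SNN outputs": each class has a characteristic spike-count profile.
profiles = rng.random((n_classes, n_neurons)) * 20
train = {c: [encode(rng.poisson(profiles[c])) for _ in range(50)]
         for c in range(n_classes)}
prototypes = {c: np.sign(np.sum(train[c], axis=0)) for c in range(n_classes)}

test = encode(rng.poisson(profiles[2]))                # a trial from class 2
sims = {c: cosine(test, p) for c, p in prototypes.items()}
best = max(sims, key=sims.get)
THRESH = 0.1  # hypothetical rejection threshold for unseen classes
print("predicted:", best if sims[best] > THRESH else "unseen class", sims)
```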