R Discovery

Related Topics

  • Networks Of Spiking Neurons

Articles published on Spiking Neural Networks

8,387 search results, sorted by recency
  • New
  • Research Article
  • 10.1016/j.neunet.2025.108350
Towards efficient and accurate spiking neural networks via adaptive bit allocation.
  • Apr 1, 2026
  • Neural networks : the official journal of the International Neural Network Society
  • Yao Xingting + 6 more

  • New
  • Research Article
  • 10.1016/j.neunet.2025.108371
Predictive coding with spiking neural networks: A survey.
  • Apr 1, 2026
  • Neural networks : the official journal of the International Neural Network Society
  • Antony W N'Dri + 6 more

  • New
  • Research Article
  • Cited by 1
  • 10.1016/j.neunet.2025.108343
Proactive and privacy-Preserving defense for DNS over HTTPS via federated AI attestation (PAFA-DoH).
  • Apr 1, 2026
  • Neural networks : the official journal of the International Neural Network Society
  • Basharat Ali + 1 more

  • New
  • Research Article
  • 10.1016/j.knosys.2026.115541
HASNN: Hierarchical attention spiking neural network for dynamic graph representation learning
  • Apr 1, 2026
  • Knowledge-Based Systems
  • Yanglan Gan + 4 more

  • New
  • Research Article
  • 10.1016/j.csi.2025.104126
Deep convolutional spiking neural network and block chain based intrusion detection framework for enhancing privacy and security in cloud computing environment
  • Apr 1, 2026
  • Computer Standards & Interfaces
  • B Muthusenthil + 1 more

  • Research Article
  • 10.1109/tnnls.2026.3671461
Spatiotemporal Decoupled Learning for Spiking Neural Networks.
  • Mar 13, 2026
  • IEEE transactions on neural networks and learning systems
  • Chenxiang Ma + 3 more

Spiking neural networks (SNNs) have gained significant attention for their potential to enable energy-efficient artificial intelligence (AI). However, effective and efficient training of SNNs remains an unresolved challenge. While backpropagation through time (BPTT) achieves high accuracy, it incurs substantial memory overhead. In contrast, biologically plausible local learning methods are more memory-efficient but struggle to match the accuracy of BPTT. To bridge this gap, we propose spatiotemporal decoupled learning (STDL), a novel training framework that decouples the spatial and temporal dependencies to achieve both high accuracy and training efficiency for SNNs. Specifically, to achieve spatial decoupling, STDL partitions the network into smaller subnetworks, each of which is trained independently using an auxiliary network. To address the decreased synergy among subnetworks resulting from spatial decoupling, STDL constructs each subnetwork's auxiliary network by selecting the largest subset of layers from its subsequent network layers under a memory constraint. Furthermore, STDL decouples dependencies across time steps to enable efficient online learning. Extensive evaluations on seven static and event-based vision datasets demonstrate that STDL consistently outperforms local learning methods and achieves comparable accuracy to the BPTT method with considerably reduced GPU memory cost. Notably, STDL uses 4× less GPU memory than BPTT on the ImageNet dataset. Therefore, this work opens up a promising avenue for memory-efficient SNN training. Code is available at https://github.com/ChenxiangMA/STDL.

  • Research Article
  • 10.1038/s41598-026-43529-1
A spiking neural network inspired by neuroscience and psychology for Western mode- and key-conditioned music learning and composition.
  • Mar 10, 2026
  • Scientific reports
  • Qian Liang + 2 more

Musical mode is a fundamental element of tonal music, structuring pitch organization and shaping tonal relationships. Existing artificial intelligence approaches to symbolic music generation often rely on rigid alignment strategies and simplified tonal representations, limiting their ability to capture the diversity of musical modes, in contrast to the complex perceptual and learning mechanisms observed in human listeners. In this paper, we propose a brain-inspired spiking neural network that integrates biologically grounded mechanisms with symbolic music theory to represent and learn musical modes and keys. The model comprises multiple interacting subsystems inspired by the functional organization of relevant brain regions, and incorporates neural circuit evolution and spike-timing-dependent plasticity to support mode- and key-conditioned music learning and generation. Experimental results show that the synaptic connectivity patterns emerging in the proposed network exhibit strong alignment with the Krumhansl-Schmuckler key profiles, a well-established model of tonal perception in music psychology. Additionally, quantitative evaluations show that the generated musical pieces preserve tonal characteristics while maintaining melodic diversity. By integrating insights from neuroscience, music psychology, and music theory within a spiking neural network framework, this work provides an interpretable and biologically inspired approach to symbolic music learning and generation.
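
The spike-timing-dependent plasticity mechanism the abstract relies on can be sketched in a few lines. This is the textbook exponential STDP window, not the paper's specific circuit-evolution scheme, and the constants `a_plus`, `a_minus`, and `tau` are illustrative:

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Classic exponential STDP weight update.

    delta_t = t_post - t_pre (ms). A presynaptic spike shortly before a
    postsynaptic one (delta_t > 0) potentiates the synapse; the reverse
    order depresses it. Both effects decay exponentially with |delta_t|.
    """
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(
        delta_t > 0,
        a_plus * np.exp(-delta_t / tau),    # pre-before-post: potentiation
        -a_minus * np.exp(delta_t / tau),   # post-before-pre: depression
    )
```

Under this rule, repeatedly presenting pitch sequences causes synapses between consistently co-active, correctly ordered note neurons to strengthen, which is how key-profile-like connectivity can emerge from exposure alone.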

  • Research Article
  • 10.1016/j.neunet.2026.108809
Fast agreement-driven device-calibrated local learning paradigms for spiking neural networks.
  • Mar 10, 2026
  • Neural networks : the official journal of the International Neural Network Society
  • Saptarshi Bej + 5 more

  • Research Article
  • 10.1371/journal.pone.0341052
Research on electromagnetic compatibility analysis of automation equipment based on generative adversarial networks and pulse sparse convolution
  • Mar 10, 2026
  • PLOS One
  • Wenrui Ding + 1 more

Electromagnetic interference (EMI) analysis in high-speed industrial systems is increasingly challenged by multi-gigahertz sampling rates, complex transient behaviors, and stringent real-time constraints. To address these challenges, this paper proposes a pulse-aware generative and analysis framework based on a generative adversarial network (GAN) combined with pulse sparse convolution using leaky integrate-and-fire (LIF) spiking neurons. A multi-scale discriminator and gradient penalty stabilization are employed to improve waveform generation fidelity, achieving a Fréchet distance (FID) of 0.72 and a global difference metric (GDM) of 0.18 ± 0.03 on an industrial-grade electromagnetic compatibility (EMC) dataset. The proposed framework is further applied to crosstalk prediction, where it reduces pulse-width and phase prediction errors by more than 40% compared with classical numerical solvers such as finite-difference time-domain (FDTD), finite element method (FEM), and method of moments (MoM), and consistently outperforms representative learning-based EMC models. To enable real-time deployment, the pulse sparse convolution architecture is implemented on a field-programmable gate array (FPGA) platform using fixed-point arithmetic, achieving deterministic inference at 5 GS/s with a measured power consumption of 0.71 W. Extensive experiments on traction systems, industrial robots, CNC drives, photovoltaic inverters, and UAV (unmanned aerial vehicle) electronics demonstrate that the proposed approach provides accurate, stable, and energy-efficient EMI analysis suitable for practical industrial EMC applications.
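
The LIF dynamics underlying the pulse sparse convolution can be simulated in a few lines. This is the standard discrete-time LIF model with illustrative constants, not the paper's fixed-point FPGA implementation:

```python
import numpy as np

def lif_simulate(current, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Discrete-time leaky integrate-and-fire neuron.

    Membrane update: v += dt/tau * (-v + I). Whenever v crosses v_th the
    neuron emits a spike (1) and the potential is reset to v_reset.
    """
    v = 0.0
    spikes = np.zeros(len(current), dtype=int)
    for t, i_t in enumerate(current):
        v += dt / tau * (-v + i_t)
        if v >= v_th:
            spikes[t] = 1
            v = v_reset
    return spikes
```

A constant supra-threshold current (say 1.5) yields a regular spike train, while a sub-threshold current (0.8 here, below `v_th`) produces no spikes at all; that sparsity is what makes spike-driven convolution cheap on hardware.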

  • Research Article
  • 10.1088/2632-2153/ae4a85
Anomaly detection with spiking neural networks for LHC physics
  • Mar 9, 2026
  • Machine Learning: Science and Technology
  • Barry M Dillon + 2 more

Anomaly detection offers a promising strategy for discovering new physics at the Large Hadron Collider (LHC). This paper investigates autoencoders (AEs) built using neuromorphic spiking neural networks (SNNs) for this purpose. One key application is at the trigger level, where anomaly detection tools could capture signals that would otherwise be discarded by conventional selection cuts. These systems must operate under strict latency and computational constraints. SNNs are inherently well-suited for low-latency, low-memory, real-time inference, particularly on field-programmable gate arrays. Further gains are expected with the rapid progress in dedicated neuromorphic hardware development. Using the CMS ADC2021 dataset, we design and evaluate a simple SNN AE architecture. Our results show that the SNN AEs are competitive with conventional AEs for LHC anomaly detection across all signal models.
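
The autoencoder anomaly-detection principle is simple: train a bottlenecked reconstructor on background events and flag events that reconstruct poorly. Below is a minimal linear (PCA) stand-in for the paper's spiking autoencoder, on synthetic data rather than the CMS ADC2021 dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Background" events live near a 2-D subspace of a 10-D feature space;
# "signal" events are drawn isotropically and so reconstruct poorly.
basis = rng.normal(size=(10, 2))
background = rng.normal(size=(5000, 2)) @ basis.T + 0.1 * rng.normal(size=(5000, 10))
signal = 3.0 * rng.normal(size=(200, 10))

# A linear autoencoder is equivalent to PCA: encode to the top-2 components.
mean = background.mean(axis=0)
_, _, vt = np.linalg.svd(background - mean, full_matrices=False)
components = vt[:2]                      # (2, 10) shared encoder/decoder weights

def anomaly_score(x):
    """Reconstruction error of the linear (PCA) autoencoder."""
    z = (x - mean) @ components.T        # encode
    x_hat = z @ components + mean        # decode
    return ((x - x_hat) ** 2).sum(axis=1)

bg_scores = anomaly_score(background)
sig_scores = anomaly_score(signal)
```

Thresholding `anomaly_score` then plays the role of a trigger cut: signal-like events score far above typical background, without the detector ever having seen the signal model during training.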

  • Research Article
  • 10.1007/s11220-026-00744-4
Integrating Neuroscientific Priors into Spiking Neural Networks: ECSNN-SEG for Robust Brain ECS Segmentation from Low-SNR Cryo-electron Microscopy Data
  • Mar 9, 2026
  • Sensing and Imaging
  • Chao Zhang + 8 more

  • Research Article
  • 10.1038/s41598-026-42970-6
PS-SNN: pattern separation learning for expandable spiking neural networks in class-incremental learning.
  • Mar 9, 2026
  • Scientific reports
  • Ke Hu + 3 more

Biological brains mitigate interference by orthogonalizing neural representations of similar memories, thereby preserving stability across tasks in continual learning. However, most existing continual learning approaches for spiking neural networks (SNNs) adopt randomly initialized classifier heads at each step and optimize them with imbalanced data, which often induces representation drift and undermines model stability. In this work, we revisit the role of the classifier head in the continual learning paradigm and propose a pattern separation learning strategy for expandable SNNs in class-incremental learning (CIL). Specifically, we predefine fixed and mutually orthogonal class centers for each class to replace the conventional learnable classifiers, providing stable optimization targets that prevent feature space conflicts and reduce interference between tasks. Combined with dynamically expandable structures that emulate neurogenesis to enhance plasticity, our approach effectively mitigates catastrophic forgetting while maintaining adaptability to novel tasks. Experimental results show that our PS-SNN achieves an average incremental accuracy of 76.42% on the CIFAR100-B0 benchmark over 10 incremental steps. PS-SNN not only surpasses state-of-the-art SNN-based continual learning algorithms but also matches the performance of DNN-based methods, highlighting the potential of integrating biologically inspired pattern separation into neuromorphic computing systems.
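
The core idea of predefined, mutually orthogonal class centers can be sketched directly. The QR-based construction and nearest-center rule below are an illustrative stand-in for the paper's exact recipe:

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, dim = 10, 64

# Fixed, mutually orthogonal class centers built once via QR decomposition.
# They replace a learnable classifier head, so the optimization targets
# never drift as new tasks arrive.
q, _ = np.linalg.qr(rng.normal(size=(dim, n_classes)))
centers = q.T                       # (n_classes, dim), rows orthonormal

def classify(features):
    """Assign each feature vector to its highest-similarity class center."""
    return (features @ centers.T).argmax(axis=1)

# Pairwise orthogonality: the Gram matrix of the centers is the identity.
gram = centers @ centers.T
```

Because the centers are orthogonal, pulling features of one class toward its center cannot push them toward another class's center, which is the interference-reduction property the abstract attributes to pattern separation.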

  • Research Article
  • 10.1016/j.neuropharm.2026.110911
A novel rat model harboring two BDNF gene mutations exhibiting autism-like behaviors and cognitive impairments.
  • Mar 5, 2026
  • Neuropharmacology
  • Zeping Xue + 6 more

  • Research Article
  • 10.1016/j.neunet.2025.108190
Spatially-enhanced Spiking neural network for efficient point cloud analysis.
  • Mar 1, 2026
  • Neural networks : the official journal of the International Neural Network Society
  • Yijie Lu + 5 more

  • Research Article
  • 10.1016/j.neunet.2025.108253
Efficient speech command recognition leveraging spiking neural networks and progressive time-scaled curriculum distillation.
  • Mar 1, 2026
  • Neural networks : the official journal of the International Neural Network Society
  • Jiaqi Wang + 9 more

  • Research Article
  • 10.1016/j.neunet.2025.108210
LAMSNN: Learnable adaptive modulation for artifact suppression in spiking underwater image enhancement networks.
  • Mar 1, 2026
  • Neural networks : the official journal of the International Neural Network Society
  • Jinxin Shao + 2 more

  • Research Article
  • 10.1016/j.eswa.2025.129977
Spiking Depth: Depth estimation from sparse events with spiking neural networks
  • Mar 1, 2026
  • Expert Systems with Applications
  • Dongze Liu + 4 more

  • Research Article
  • Cited by 3
  • 10.1109/tnnls.2025.3615971
IML-Spikeformer: Input-Aware Multilevel Spiking Transformer for Speech Processing.
  • Mar 1, 2026
  • IEEE transactions on neural networks and learning systems
  • Zeyang Song + 4 more

Spiking neural networks (SNNs), inspired by biological neural mechanisms, represent a promising neuromorphic computing paradigm that offers energy-efficient alternatives to traditional artificial neural networks (ANNs). Despite proven effectiveness, SNN architectures have struggled to achieve competitive performance on large-scale speech processing tasks. Two key challenges hinder progress: 1) the high computational overhead during training caused by multitimestep spike firing and 2) the absence of large-scale SNN architectures tailored to speech processing tasks. To overcome these issues, we introduce the input-aware multilevel spikeformer (IML-Spikeformer), a spiking transformer architecture specifically designed for large-scale speech processing. Central to our design is the input-aware multilevel spike (IMLS) mechanism, which simulates multitimestep spike firing within a single timestep using an adaptive, input-aware thresholding scheme. IML-Spikeformer further integrates a reparameterized spiking self-attention (RepSSA) module with a hierarchical decay mask (HDM), forming the HD-RepSSA module. This module enhances the precision of attention maps and enables modeling of multiscale temporal dependencies in speech signals. Experiments demonstrate that IML-Spikeformer achieves word error rates (WERs) of 6.0% on AiShell-1 and 3.4% on Librispeech-960, comparable to conventional ANN transformers while reducing theoretical inference energy consumption by 4.64× and 4.32×, respectively. IML-Spikeformer marks an advance of scalable SNN architectures for large-scale speech processing in both task performance and energy efficiency. Our source code and model checkpoints are publicly available at github.com/Pooookeman/IML-Spikeformer.
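
The abstract does not spell out the IMLS thresholding scheme, so the sketch below only illustrates the general idea of emitting graded (multilevel) spikes in one timestep via an input-aware threshold; the threshold choice here is hypothetical:

```python
import numpy as np

def multilevel_spike(v, n_levels=4):
    """Emit 0..n_levels graded spikes per neuron in a single timestep.

    The threshold is input-aware: it is derived from the batch's own
    membrane-potential range (a hypothetical choice; the paper's adaptive
    scheme differs). One graded emission replaces n_levels rounds of
    binary spike firing, cutting the training-time timestep count.
    """
    v = np.asarray(v, dtype=float)
    theta = np.abs(v).max() / n_levels   # input-aware threshold
    if theta == 0.0:
        return np.zeros_like(v)
    return np.clip(np.floor(np.maximum(v, 0.0) / theta), 0, n_levels)
```

For potentials `[-0.5, 0.1, 0.5, 1.0]` and four levels, the threshold becomes 0.25, so the neurons emit 0, 0, 2, and 4 spikes respectively within the single timestep.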

  • Research Article
  • 10.1016/j.neunet.2025.108239
BioMotion-SNN: Spiking neural network modeling for visual motion processing.
  • Mar 1, 2026
  • Neural networks : the official journal of the International Neural Network Society
  • Ying Liu + 5 more

  • Research Article
  • 10.1088/2634-4386/ae46d4
A scalable hybrid training approach for recurrent spiking neural networks
  • Mar 1, 2026
  • Neuromorphic Computing and Engineering
  • Maximilian Baronig + 3 more

Recurrent spiking neural networks (RSNNs) can be implemented very efficiently in neuromorphic systems. Nevertheless, training of these models with powerful gradient-based learning algorithms is mostly performed on standard digital hardware using backpropagation through time (BPTT). However, BPTT has substantial limitations. It does not permit online training, and its memory consumption scales linearly with the number of computation steps. In contrast, learning methods using forward propagation of gradients operate in an online manner with a memory consumption independent of the number of time steps. These methods enable SNNs to learn from continuous, infinite-length input sequences. In addition, approximate forward propagation algorithms have been developed that can be implemented on neuromorphic hardware. Yet, slow execution speed on conventional hardware as well as inferior performance have hindered their widespread application. In this work, we introduce HYbrid PRopagation (HYPR), which combines the efficiency of parallelization with approximate online forward learning. Our algorithm yields high-throughput online learning through parallelization, paired with constant, i.e., sequence-length-independent, memory demands. HYPR enables parallelization of parameter update computation over subsequences for RSNNs consisting of almost arbitrary non-linear spiking neuron models. We apply HYPR to networks of spiking neurons with oscillatory subthreshold dynamics. We find that this type of neuron model is particularly well trainable by HYPR, resulting in an unprecedentedly low task performance gap between approximate forward gradient learning and BPTT.
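
Forward propagation of gradients carries a running sensitivity (an eligibility trace) alongside the state, so memory stays constant in sequence length. The scalar toy below is exact real-time recurrent learning for a leaky integrator, not HYPR itself, but it shows the mechanism BPTT's unrolled graph is traded away for:

```python
def online_forward_grad(x, y, alpha=0.5):
    """Online gradient of L = sum_t (v_t - y_t)^2 w.r.t. alpha,
    for the leaky integrator v_t = alpha * v_{t-1} + x_t.

    The eligibility trace e_t = dv_t/d(alpha) = v_{t-1} + alpha * e_{t-1}
    is propagated forward together with the state, so nothing needs to be
    stored per timestep and memory is independent of sequence length.
    """
    v, e, grad = 0.0, 0.0, 0.0
    for x_t, y_t in zip(x, y):
        e = v + alpha * e          # propagate the sensitivity forward in time
        v = alpha * v + x_t        # advance the state
        grad += 2.0 * (v - y_t) * e  # accumulate the loss gradient online
    return grad
```

For this scalar case the forward gradient matches a finite-difference check of the loss exactly; in high-dimensional RSNNs, exact forward propagation becomes too expensive, which is why HYPR resorts to approximate forward gradients plus parallelization over subsequences.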


Copyright 2026 Cactus Communications. All rights reserved.