An Intrinsically Knowledge-Transferring Developmental Spiking Neural Network for Tactile Classification.

Abstract

Gradient descent, computed through backpropagation (BP), has been widely used to train spiking neural networks (SNNs). However, the approach has several limitations: it requires manual intervention to tune the network architecture, is prone to catastrophic forgetting of previously learned information when exposed to data containing new information, and is computationally demanding. To address these issues, we propose brain-mimetic developmental spiking neural networks (BDNNs), which emulate the postnatal development of biological neural circuits. We evaluated BDNNs using a neuromorphic tactile system with the task of classifying objects through grasping. Our findings show that BDNNs grow dynamically in response to input data by incrementally recruiting hidden neurons, leading to steadily increasing classification accuracy without the need for manual architecture tuning. The growth process adapts autonomously to the complexity of incoming data. BDNNs also exhibit strong knowledge transfer, effectively leveraging previously learned knowledge about grasped objects to incrementally learn about new ones. Furthermore, in comparative experiments using the same dataset and hardware, BDNNs achieved classification performance comparable to the standard BP-based method and its variants while learning one to three orders of magnitude faster, and they outperform existing continual learning algorithms in both performance and speed. These results highlight BDNNs as a promising approach for continual learning and real-time edge computing applications. The source code of our work is publicly available at https://github.com/1jiaqixing/BDNNversion1.
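The abstract's central mechanism — growing the hidden layer by recruiting a neuron for a novel input and refining an existing one for a familiar input — can be illustrated with a toy non-spiking sketch. All names here (`GrowingClassifier`, `observe`, the cosine-similarity recruitment rule) are hypothetical illustrations, not the BDNN algorithm itself, which operates on spiking neurons:

```python
import math

class GrowingClassifier:
    """Toy incremental network: recruit a new hidden unit when no existing
    unit matches the input well enough, otherwise nudge the best match."""

    def __init__(self, threshold=0.9):
        self.threshold = threshold   # similarity needed to reuse a unit
        self.centers = []            # hidden-unit weight vectors
        self.labels = []             # class label attached to each unit

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def observe(self, x, label, lr=0.1):
        """One-shot update: grow the hidden layer only when needed."""
        if self.centers:
            sims = [self._cosine(x, c) for c in self.centers]
            best = max(range(len(sims)), key=sims.__getitem__)
            if sims[best] >= self.threshold and self.labels[best] == label:
                # familiar pattern: move the matching unit toward the input
                self.centers[best] = [c + lr * (xi - c)
                                      for c, xi in zip(self.centers[best], x)]
                return
        # novel pattern: recruit a fresh hidden unit for it
        self.centers.append(list(x))
        self.labels.append(label)

    def predict(self, x):
        sims = [self._cosine(x, c) for c in self.centers]
        best = max(range(len(sims)), key=sims.__getitem__)
        return self.labels[best]
```

Because the network only grows when an input fails to match any existing unit, its size tracks the complexity of the data stream, and old units are never overwritten by new classes — the two properties the abstract claims for BDNNs.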

Similar Papers
  • Book Chapter
  • Cited by 9
  • 10.1007/978-3-319-44781-0_43
Sound Recognition System Using Spiking and MLP Neural Networks
  • Jan 1, 2016
  • Elena Cerezuela-Escudero + 5 more

In this paper, we explore the capabilities of a sound classification system that combines a Neuromorphic Auditory System for feature extraction with an artificial neural network for classification. Two neural network models have been used: a Multilayer Perceptron and a Spiking Neural Network. To compare their accuracies, both networks were developed and trained to recognize pure tones in the presence of white noise. The spiking neural network was implemented on an FPGA device. The neuromorphic auditory system used in this work produces a representation analogous to the spike outputs of the biological cochlea. Both systems are able to distinguish the different sounds even in the presence of white noise. The recognition system based on the spiking neural network has better accuracy, above 91%, even when the sound contains white noise of the same power.

  • Research Article
  • Cited by 26
  • 10.1109/tcad.2022.3179246
The Implementation and Optimization of Neuromorphic Hardware for Supporting Spiking Neural Networks With MLP and CNN Topologies
  • Feb 1, 2023
  • IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
  • Wujian Ye + 2 more

Spiking neural networks (SNNs) have attracted extensive attention in large-scale image processing tasks. To obtain higher computing efficiency, the development of hardware architectures suitable for SNN computing has become a hot research topic. However, existing spiking-neuron hardware still has high computational complexity and does not perform well enough on complicated datasets, and current neuromorphic systems cannot support SNNs with different convolutional topologies, resulting in low system efficiency. To address these problems, an optimized leaky integrate-and-fire (LIF) neuron called EPC-LIF and a neuromorphic hardware acceleration system (ELIF-NHAS) are designed and implemented on a field-programmable gate array (Xilinx Kintex-7). First, the classical LIF neuron is redesigned using the extended prediction correction (EPC) optimization method, which reduces computational complexity and hardware resources, reaching a maximum frequency of 439.95 MHz. The ELIF-NHAS is constructed and optimized with parallel and pipeline techniques for effectively running SNNs, working at a maximum frequency of 135.6 MHz. Then, a genetic algorithm is applied to adjust the membrane threshold of neurons to further improve SNN accuracy. Furthermore, the ELIF-NHAS can support SNNs with multilayer perceptron and convolutional neural network topologies (called SCNNs), including traditional, depthwise-separable, and residual convolutions. Multilayer SCNNs achieve 99.10%, 90.29%, and 82.15% accuracy on the MNIST, Fashion-MNIST, and SVHN datasets, respectively, with a speed of 1.21 ms/image and energy consumption of 1.19 mJ/image. Compared with existing systems, the ELIF-NHAS is more suitable for SNN deployment and inference, with higher speed and lower consumption.

  • Research Article
  • 10.1088/2634-4386/ada08b
Continual learning with Hebbian plasticity in sparse and predictive coding networks: a survey and perspective
  • Dec 1, 2024
  • Neuromorphic Computing and Engineering
  • Ali Safa

Recently, bio-inspired learning techniques such as Hebbian learning and its closely related spike-timing-dependent plasticity (STDP) variant have drawn significant attention for the design of compute-efficient AI systems that can continuously learn online at the edge. A key differentiating factor of this emerging class of neuromorphic continual learning (CL) systems is that learning must be carried out using a data stream received in its natural order, as opposed to conventional gradient-based offline training, where a static training dataset is assumed available a priori and randomly shuffled to make the training set independent and identically distributed (i.i.d.). In contrast, the emerging class of neuromorphic CL systems covered in this survey must learn to integrate new information on the fly in a non-i.i.d. manner, which makes these systems subject to catastrophic forgetting. To build the next generation of neuromorphic AI systems that can continuously learn at the edge, a growing number of research groups are studying the use of sparse and predictive coding (PC)-based Hebbian neural network architectures and the related spiking neural networks (SNNs) equipped with STDP learning. However, since this research field is still emerging, there is a need for a holistic view of the different approaches proposed in the literature so far. To this end, this survey covers a number of recent works in the field of neuromorphic CL based on state-of-the-art sparse and PC technology, provides background theory to help interested researchers quickly learn the key concepts, and discusses important future research questions in light of the works covered. It is hoped that this survey will contribute to future research in the field of neuromorphic CL.
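The pair-based STDP rule this survey centers on can be sketched in a few lines: the weight change depends exponentially on the time difference between a presynaptic and a postsynaptic spike. The learning rates and time constant below are illustrative defaults, not values from any particular paper:

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic spike, depress otherwise. Spike times in ms.

    Returns the updated weight, clipped to [w_min, w_max].
    """
    dt = t_post - t_pre
    if dt > 0:                       # pre before post -> strengthen synapse
        w += a_plus * math.exp(-dt / tau)
    elif dt < 0:                     # post before pre -> weaken synapse
        w -= a_minus * math.exp(dt / tau)
    return min(max(w, w_min), w_max)
```

The rule is temporally local — it needs only the two spike times and the current weight — which is exactly what makes it attractive for the on-line, edge-deployed CL systems the survey discusses.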

  • Conference Article
  • Cited by 8
  • 10.1109/ijcnn55064.2022.9892774
Targeted Data Poisoning Attacks Against Continual Learning Neural Networks
  • Jul 18, 2022
  • Huayu Li + 1 more

Continual (incremental) learning approaches are designed to address catastrophic forgetting in neural networks by training on batches or streaming data over time. In many real-world scenarios, the environments that generate streaming data are exposed to untrusted sources, which can supply data poisoned by an adversary who manipulates and injects malicious samples into the training data. Such untrusted data sources and malicious samples expose vulnerabilities of neural networks that can lead to serious consequences in applications requiring reliable performance. However, recent works on continual learning have focused only on adversary-agnostic scenarios, without considering the possibility of data poisoning attacks. Further, recent work has demonstrated that continual learning approaches are vulnerable to backdoor attacks under a relaxed constraint on manipulating data. In this paper, we focus on a more general and practical poisoning setting that artificially forces catastrophic forgetting through clean-label data poisoning attacks. We propose a task-targeted data poisoning attack that forces the neural network to forget previously learned knowledge while the attack samples remain stealthy. The approach is benchmarked against three state-of-the-art continual learning algorithms in both domain- and task-incremental learning scenarios. The experiments demonstrate that accuracy on the targeted tasks drops significantly when the poisoned dataset is used in continual task learning.

  • Research Article
  • Cited by 19
  • 10.1109/tnnls.2021.3131356
HybridSNN: Combining Bio-Machine Strengths by Boosting Adaptive Spiking Neural Networks.
  • Sep 1, 2023
  • IEEE Transactions on Neural Networks and Learning Systems
  • Jiangrong Shen + 3 more

Spiking neural networks (SNNs), inspired by the neuronal networks of the brain, provide biologically relevant and low-power models for information processing. Existing studies either mimic the learning mechanisms of brain neural networks as closely as possible, for example the temporally local learning rule of spike-timing-dependent plasticity (STDP), or apply the gradient descent rule to optimize a multilayer SNN with a fixed structure. However, the learning rule used in the former is local, and how the real brain might perform global-scale credit assignment is still unclear; as a result, such shallow SNNs are robust, but deep SNNs are difficult to train globally and do not work as well. For the latter, the nondifferentiability of discrete spike trains leads to inaccurate gradient computation and difficulty in training effective deep SNNs. Hence, a hybrid solution that combines shallow SNNs with an appropriate machine learning (ML) technique not requiring gradient computation is attractive, providing both energy-saving and high-performance advantages. In this article, we propose HybridSNN, a deep and strong SNN composed of multiple simple SNNs, in which data-driven greedy optimization is used to build powerful classifiers while avoiding the derivative problem of gradient descent. During training, the output features (spikes) of selected weak classifiers are fed back to the pool for subsequent weak-SNN training and selection. This guarantees that HybridSNN not only represents a linear combination of simple SNNs, as the regular AdaBoost algorithm generates, but also contains neuron connection information, thus closely resembling the neural networks of a brain. HybridSNN combines the low power consumption of weak units with overall data-driven optimizing strength. The network structure in HybridSNN is learned from training samples, which is more flexible and effective than existing fixed multilayer SNNs. Moreover, the topological tree of HybridSNN resembles the neural system of the brain, where pyramidal neurons receive thousands of synaptic input signals through their dendrites. Experimental results show that the proposed HybridSNN is highly competitive among state-of-the-art SNNs.

  • Conference Article
  • Cited by 3
  • 10.1109/cicc57935.2023.10121315
A 22nm 0.43pJ/SOP Sparsity-Aware In-Memory Neuromorphic Computing System with Hybrid Spiking and Artificial Neural Network and Configurable Topology
  • Apr 1, 2023
  • Ying Liu + 11 more

Spiking neural networks (SNNs) dynamically process complex spatiotemporal information as asynchronous and highly sparse spikes with high energy efficiency (EE). However, training algorithms for nondifferentiable, discrete SNNs are still immature, leading to relatively low accuracy [1]. For instance, abnormal ECG detection is realized with an SNN in [2] at 0.53 pJ/SOP EE, but the accuracy is only 90.5%. In [3], on-chip learning of a recurrent SNN for one-word keyword spotting (KWS) achieved only 90.7% accuracy. In contrast, artificial neural networks (ANNs) can reach excellent accuracy through gradient-based backpropagation (BP) training but require substantial energy consumption due to their intensive computations and memory accesses. A unified ANN-SNN architecture was proposed in [4] for high accuracy, but it sacrifices EE due to massive data movement and the lack of sparsity utilization in the SNN.
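For readers unfamiliar with the LIF neuron that EPC-LIF and similar hardware designs optimize, a minimal discrete-time software model looks like this. The parameter values and the `simulate_lif` name are illustrative only; the hardware implementations discussed above differ substantially:

```python
def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, v_reset=0.0,
                 tau_m=10.0, dt=1.0):
    """Discrete-time leaky integrate-and-fire (LIF) neuron.

    Each step the membrane potential leaks toward v_rest and integrates
    the input current; crossing v_thresh emits a spike and resets the
    potential. Returns the list of time steps at which spikes occurred.
    """
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Forward-Euler step of dv/dt = (v_rest - v) / tau_m + i_in
        v += dt * ((v_rest - v) / tau_m + i_in)
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes
```

The update is one multiply-accumulate plus a comparison per time step, which is why so much hardware effort (as in the paper above) goes into reducing exactly this loop's cost and exploiting the sparsity of the resulting spike train.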

  • Conference Article
  • Cited by 3
  • 10.1109/icassp49357.2023.10095984
Is Multi-Task Learning an Upper Bound for Continual Learning?
  • Jun 4, 2023
  • Zihao Wu + 3 more

Continual learning and multi-task learning are commonly used machine learning techniques for learning from multiple tasks. However, existing literature assumes multi-task learning as a reasonable performance upper bound for various continual learning algorithms, without rigorous justification. Additionally, in a multi-task setting, a small subset of tasks may behave as adversarial tasks, negatively impacting overall learning performance. On the other hand, continual learning approaches can avoid the negative impact of adversarial tasks and maintain performance on the remaining tasks, resulting in better performance than multi-task learning. This paper introduces a novel continual self-supervised learning approach, where each task involves learning an invariant representation for a specific class of data augmentations. We demonstrate that this approach results in naturally contradicting tasks and that, in this setting, continual learning often outperforms multi-task learning on benchmark datasets, including MNIST, CIFAR-10, and CIFAR-100.

  • Research Article
  • Cited by 15
  • 10.1073/pnas.2218173120
Brain-inspired neural circuit evolution for spiking neural networks
  • Sep 20, 2023
  • Proceedings of the National Academy of Sciences of the United States of America
  • Guobin Shen + 3 more

In biological neural systems, different neurons are capable of self-organizing to form different neural circuits for achieving a variety of cognitive functions. However, the current design paradigm of spiking neural networks is based on structures derived from deep learning. Such structures are dominated by feedforward connections without taking into account different types of neurons, which significantly prevent spiking neural networks from realizing their potential on complex tasks. It remains an open challenge to apply the rich dynamical properties of biological neural circuits to model the structure of current spiking neural networks. This paper provides a more biologically plausible evolutionary space by combining feedforward and feedback connections with excitatory and inhibitory neurons. We exploit the local spiking behavior of neurons to adaptively evolve neural circuits such as forward excitation, forward inhibition, feedback inhibition, and lateral inhibition by the local law of spike-timing-dependent plasticity and update the synaptic weights in combination with the global error signals. By using the evolved neural circuits, we construct spiking neural networks for image classification and reinforcement learning tasks. Using the brain-inspired Neural circuit Evolution strategy (NeuEvo) with rich neural circuit types, the evolved spiking neural network greatly enhances capability on perception and reinforcement learning tasks. NeuEvo achieves state-of-the-art performance on CIFAR10, DVS-CIFAR10, DVS-Gesture, and N-Caltech101 datasets and achieves advanced performance on ImageNet. Combined with on-policy and off-policy deep reinforcement learning algorithms, it achieves comparable performance with artificial neural networks. The evolved spiking neural circuits lay the foundation for the evolution of complex networks with functions.

  • Research Article
  • Cited by 10
  • 10.1038/s41467-024-51110-5
High-performance deep spiking neural networks with 0.3 spikes per neuron
  • Aug 9, 2024
  • Nature Communications
  • Ana Stanojevic + 5 more

Communication by rare, binary spikes is a key factor for the energy efficiency of biological brains. However, it is harder to train biologically inspired spiking neural networks than artificial neural networks. This is puzzling given that theoretical results provide exact mapping algorithms from artificial to spiking neural networks with time-to-first-spike coding. In this paper we analyze in theory and simulation the learning dynamics of time-to-first-spike networks and identify a specific instance of the vanishing-or-exploding gradient problem. While two choices of spiking neural network mappings solve this problem at initialization, only the one with a constant slope of the neuron membrane potential at threshold guarantees the equivalence of the training trajectory between spiking and artificial neural networks with rectified linear units. For specific image classification architectures comprising feed-forward dense or convolutional layers, we demonstrate that deep spiking neural network models can be effectively trained from scratch on MNIST and Fashion-MNIST datasets, or fine-tuned on large-scale datasets, such as CIFAR10, CIFAR100 and PLACES365, to achieve the exact same performance as that of artificial neural networks, surpassing previous spiking neural networks. Our approach accomplishes high-performance classification with less than 0.3 spikes per neuron, lending itself to an energy-efficient implementation. We also show that fine-tuning spiking neural networks with our robust gradient descent algorithm enables their optimization for hardware implementations with low latency and resilience to noise and quantization.

  • Research Article
  • Cited by 6
  • 10.1364/prj.507178
On-chip spiking neural networks based on add-drop ring microresonators and electrically reconfigurable phase-change material photonic switches
  • Apr 1, 2024
  • Photonics Research
  • Qiang Zhang + 7 more

We propose and numerically demonstrate a photonic computing primitive designed for integrated spiking neural networks (SNNs) based on add-drop ring microresonators (ADRMRs) and electrically reconfigurable phase-change material (PCM) photonic switches. In this neuromorphic system, the passive silicon-based ADRMR, equipped with a power-tunable auxiliary light, effectively demonstrates nonlinearity-induced dual neural dynamics encompassing spiking response and synaptic plasticity that can generate single-wavelength optical neural spikes with synaptic weight. By cascading these ADRMRs with different resonant wavelengths, weighted multiple-wavelength spikes can be feasibly output from the ADRMR-based hardware arrays when external wavelength-addressable optical pulses are injected; subsequently, the cumulative power of these weighted output spikes is utilized to ascertain the activation status of the reconfigurable PCM photonic switches. Moreover, the reconfigurable mechanism driving the interconversion of the PCMs between the resonant-bonded crystalline states and the covalent-bonded amorphous states is achieved through precise thermal modulation. Drawing from the thermal properties, an innovative thermodynamic leaky integrate-and-firing (TLIF) neuron system is proposed. With the TLIF neuron system as the fundamental unit, a fully connected SNN is constructed to complete a classic deep learning task: the recognition of handwritten digit patterns. The simulation results reveal that the exemplary SNN can effectively recognize 10 numbers directly in the optical domain by employing the surrogate gradient algorithm. The theoretical verification of our architecture paves a whole new path for integrated photonic SNNs, with the potential to advance the field of neuromorphic photonic systems and enable more efficient spiking information processing.

  • Research Article
  • Cited by 1
  • 10.1162/neco_a_01702
Trainable Reference Spikes Improve Temporal Information Processing of SNNs With Supervised Learning.
  • Sep 17, 2024
  • Neural computation
  • Zeyuan Wang + 1 more

Spiking neural networks (SNNs) are the next-generation neural networks composed of biologically plausible neurons that communicate through trains of spikes. By modifying the plastic parameters of SNNs, including weights and time delays, SNNs can be trained to perform various AI tasks, although in general not at the same level of performance as typical artificial neural networks (ANNs). One possible solution to improve the performance of SNNs is to consider plastic parameters other than just weights and time delays drawn from the inherent complexity of the neural system of the brain, which may help SNNs improve their information processing ability and achieve brainlike functions. Here, we propose reference spikes as a new type of plastic parameters in a supervised learning scheme in SNNs. A neuron receives reference spikes through synapses providing reference information independent of input to help during learning, whose number of spikes and timings are trainable by error backpropagation. Theoretically, reference spikes improve the temporal information processing of SNNs by modulating the integration of incoming spikes at a detailed level. Through comparative computational experiments, we demonstrate using supervised learning that reference spikes improve the memory capacity of SNNs to map input spike patterns to target output spike patterns and increase classification accuracy on the MNIST, Fashion-MNIST, and SHD data sets, where both input and target output are temporally encoded. Our results demonstrate that applying reference spikes improves the performance of SNNs by enhancing their temporal information processing ability.

  • Research Article
  • 10.1609/aaai.v39i28.35309
When to Learn and When to Stop: Quitting at the Optimal Time (Student Abstract)
  • Apr 11, 2025
  • Proceedings of the AAAI Conference on Artificial Intelligence
  • Diana Vins + 2 more

Artificial neural networks (ANNs) struggle with continual learning, sacrificing performance on previously learned tasks to acquire new task knowledge. Here we propose a new approach that mitigates catastrophic forgetting during continual task learning. Typically, a new task is trained until it reaches maximal performance, causing complete catastrophic forgetting of the previous tasks. In our approach, termed Optimal Stopping (OS), training on each new task continues only while the mean validation accuracy across all tasks (current and previous) increases. The stopping criterion creates an explicit balance: lower performance on new tasks is accepted in exchange for preserving knowledge of previous tasks, resulting in higher overall network performance. Overall performance is further improved when OS is combined with Sleep Replay Consolidation (SRC), wherein the network is converted to a spiking neural network (SNN) and undergoes unsupervised learning modulated by Hebbian plasticity. During SRC, the network spontaneously replays activation patterns from previous tasks, helping to maintain and restore prior task performance. This combined approach offers a promising avenue for enhancing the robustness and longevity of learned representations in continual learning models, achieving over twice the mean accuracy of baseline continual learning while maintaining stable performance across tasks.
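The Optimal Stopping criterion described above reduces to a simple loop. Here `train_one_epoch` and `mean_val_accuracy` are hypothetical callbacks standing in for the actual training and evaluation code; OS itself is just the early-exit condition:

```python
def train_with_optimal_stopping(train_one_epoch, mean_val_accuracy,
                                max_epochs=100):
    """Sketch of the Optimal Stopping (OS) criterion: keep training the
    new task only while the mean validation accuracy across ALL tasks
    (previous and current) keeps improving.

    Returns the number of completed epochs at which training stopped.
    """
    best = mean_val_accuracy()           # accuracy before any new-task epoch
    for epoch in range(max_epochs):
        train_one_epoch()
        acc = mean_val_accuracy()
        if acc <= best:                  # overall performance stopped rising
            return epoch                 # quit: preserve prior-task knowledge
        best = acc
    return max_epochs
```

Note the criterion monitors the mean over all tasks, not the new task alone — that is what lets it trade a little new-task accuracy for retained performance on earlier tasks.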

  • Research Article
  • Cited by 565
  • 10.3389/fnins.2018.00774
Deep Learning With Spiking Neurons: Opportunities and Challenges.
  • Oct 25, 2018
  • Frontiers in Neuroscience
  • Michael Pfeiffer + 1 more

Spiking neural networks (SNNs) are inspired by information processing in biology, where sparse and asynchronous binary signals are communicated and processed in a massively parallel fashion. SNNs on neuromorphic hardware exhibit favorable properties such as low power consumption, fast inference, and event-driven information processing. This makes them interesting candidates for the efficient implementation of deep neural networks, the method of choice for many machine learning tasks. In this review, we address the opportunities that deep spiking networks offer and investigate in detail the challenges associated with training SNNs in a way that makes them competitive with conventional deep learning, but simultaneously allows for efficient mapping to hardware. A wide range of training methods for SNNs is presented, ranging from the conversion of conventional deep networks into SNNs, constrained training before conversion, spiking variants of backpropagation, and biologically motivated variants of STDP. The goal of our review is to define a categorization of SNN training methods, and summarize their advantages and drawbacks. We further discuss relationships between SNNs and binary networks, which are becoming popular for efficient digital hardware implementation. Neuromorphic hardware platforms have great potential to enable deep spiking networks in real-world applications. We compare the suitability of various neuromorphic systems that have been developed over the past years, and investigate potential use cases. Neuromorphic approaches and conventional machine learning should not be considered simply two solutions to the same classes of problems, instead it is possible to identify and exploit their task-specific advantages. Deep SNNs offer great opportunities to work with new types of event-based sensors, exploit temporal codes and local on-chip learning, and we have so far just scratched the surface of realizing these advantages in practical applications.

  • Research Article
  • Cited by 1
  • 10.1016/j.neunet.2024.107037
Similarity-based context aware continual learning for spiking neural networks.
  • Apr 1, 2025
  • Neural networks : the official journal of the International Neural Network Society
  • Bing Han + 5 more


  • Research Article
  • Cited by 7
  • 10.1016/j.array.2023.100323
Advancements in spiking neural network communication and synchronization techniques for event-driven neuromorphic systems
  • Oct 5, 2023
  • Array
  • Mahyar Shahsavari + 4 more

Neuromorphic event-driven systems emulate the computational mechanisms of the brain through the utilization of spiking neural networks (SNN). Neuromorphic systems serve two primary application domains: simulating neural information processing in neuroscience and acting as accelerators for cognitive computing in engineering applications. A distinguishing characteristic of neuromorphic systems is their asynchronous or event-driven nature, but even event-driven systems require some synchronous time management of the neuron populations to guarantee sufficient time for the proper delivery of spiking messages. In this study, we assess three distinct algorithms proposed for adding a synchronization capability to asynchronous event-driven compute systems. We run these algorithms on POETS (Partially Ordered Event-Triggered Systems), a custom-built FPGA-based hardware platform, as a neuromorphic architecture. This study presents the simulation speed of SNNs of various sizes. We explore essential aspects of event-driven neuromorphic system design that contribute to efficient computation and communication. These aspects include varying degrees of connectivity, routing methods, mapping techniques onto hardware components, and firing rates. The hardware mapping and simulation of up to eight million neurons, where each neuron is connected to up to one thousand other neurons, are presented in this work using 3072 reconfigurable processing cores, each of which has 16 hardware threads. Using the best synchronization and communication methods, our architecture design demonstrates 20-fold and 16-fold speedups over the Brian simulator and one 48-chip SpiNNaker node, respectively. We conclude with a brief comparison between our platform and existing large-scale neuromorphic systems in terms of synchronization, routing, and communication methods, to guide the development of future event-driven neuromorphic systems.
