Abstract

This paper presents a spiking neural network (SNN) model of leaky integrate-and-fire (LIF) neurons for sound recognition, which provides a way to simulate processing in the brain. Neural coding and learning, through which external stimuli are processed and different patterns are recognized, are essential components of an SNN model. Based on features extracted from the time-frequency representation of sound, we present a time-frequency encoding method that retains adequate information from the original sound and generates spikes from the extracted features. The generated spikes are then used to train the SNN model with a biologically plausible supervised synaptic learning rule, enabling it to perform various classification tasks efficiently. By testing the encoding and learning methods on the RWCP database, experiments demonstrate that the proposed SNN model achieves robust performance for sound recognition across a variety of noise conditions.
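As background for the neuron model named in the abstract, the following is a minimal sketch of leaky integrate-and-fire (LIF) dynamics. It is illustrative only: the function name simulate_lif and all parameter values (tau_m, v_thresh, v_reset, dt, r_m) are assumptions, not the paper's actual neuron configuration or encoding pipeline.

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau_m=20.0, v_rest=0.0,
                 v_reset=0.0, v_thresh=1.0, r_m=1.0):
    """Integrate an input current trace and return output spike times.

    Hypothetical parameters: tau_m is the membrane time constant,
    r_m the membrane resistance, v_thresh the firing threshold.
    """
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Leaky integration: the membrane potential decays toward rest
        # and is driven by the input current.
        dv = (-(v - v_rest) + r_m * i_in) / tau_m
        v += dv * dt
        if v >= v_thresh:          # threshold crossing -> emit a spike
            spike_times.append(step * dt)
            v = v_reset            # reset the membrane potential
    return spike_times

# Usage example: a constant suprathreshold current yields a regular spike train.
if __name__ == "__main__":
    current = np.full(200, 1.5)    # hypothetical input, 200 time steps
    print(simulate_lif(current))
```

In the paper's pipeline, the input currents would instead be derived from spikes produced by the time-frequency encoding of the sound signal.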
