Abstract

Using artificial evolution, we obtained very small spiking neural networks (SNNs), with one or two interneurons and one output neuron, that recognize a simple temporal pattern in a continuous input stream. The patterns the networks evolved to recognize consisted of three different signals; the task was therefore equivalent to searching a stream (sequence) of three symbols (say, ABBCACBC...) for a specific subsequence (ABC). The fitness function rewarded spiking after the occurrence of the correct pattern (subsequence) and penalized spikes elsewhere. We found that the networks did not go below two interneurons when evolved to solve this task with a brief interval of silence between signals. Surprisingly, however, with a longer interval of silence between signals the task could be accomplished with just one interneuron. We then analyzed how the spiking networks work by mapping the states of the network onto the states of finite state machines (FSMs), a general model of computation on time series. Our long-term goal is to understand the mechanisms by which neural networks accomplish computational tasks in a way that is robust to noise and damage.
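The FSM view of the task described above can be sketched directly. The following minimal example (an illustrative assumption, not taken from the paper; the state encoding and transition table are my own) implements a three-state machine that "spikes" exactly when the contiguous subsequence ABC has just appeared in a stream of the symbols A, B, C:

```python
def make_abc_detector():
    """Finite state machine detecting the subsequence ABC in a symbol stream.

    The state encodes the longest suffix of the input so far that is a
    prefix of "ABC": 0 = none, 1 = suffix "A", 2 = suffix "AB".
    This encoding is an illustrative assumption, not the paper's notation.
    """
    transitions = {
        (0, "A"): 1, (0, "B"): 0, (0, "C"): 0,
        (1, "A"): 1, (1, "B"): 2, (1, "C"): 0,
        (2, "A"): 1, (2, "B"): 0, (2, "C"): 0,
    }
    state = 0

    def step(symbol):
        nonlocal state
        # Completing "ABC" from state 2 triggers the output "spike".
        if state == 2 and symbol == "C":
            state = 0
            return True
        state = transitions[(state, symbol)]
        return False

    return step

step = make_abc_detector()
spikes = [step(s) for s in "ABBCACBCABC"]
# The detector fires only once, on the final C of the trailing ABC.
```

The analysis in the paper runs in the opposite direction: rather than writing such a machine by hand, the evolved network's dynamics are mapped onto FSM states like these.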
