Abstract

Recognizing and tracking the direction of moving stimuli is crucial to the control of much animal behaviour. In this study, we examine whether a bio-inspired model of synaptic plasticity implemented in a robotic agent can support the discrimination of motion direction of real-world stimuli. Starting with a well-established model of short-term synaptic plasticity (STP), we develop a microcircuit motif of spiking neurons capable of exhibiting preferential and nonpreferential responses to changes in the direction of an orientation stimulus in motion. While the robotic agent processes sensory inputs, the STP mechanism introduces direction-dependent changes in the synaptic connections of the microcircuit, resulting in a population of units that exhibit a typical cortical response property observed in primary visual cortex (V1), namely, direction selectivity. Visually evoked responses from the model are then compared to those observed in multielectrode recordings from V1 in anesthetized macaque monkeys while sinusoidal gratings are displayed on a screen. Overall, the model highlights the role of STP as a complementary mechanism in explaining direction selectivity and applies these insights in a physical robot as a method for validating this key response characteristic observed in experimental data from V1.

Highlights

  • A seemingly effortless task for humans, recognizing and tracking the direction of visual objects is based on an incredible complexity of brain areas involved in visual processing and attention, as well as learning and memory

  • We propose a model of motion discrimination using a ubiquitous mechanism in neuronal circuits, namely, short-term plasticity (STP), whereby the strength of synaptic connections varies from milliseconds to seconds as a result of recent activity [7, 8]. These rapid changes in synaptic strength vary over time from one spike to the next due to short-term facilitation (STF) and short-term depression (STD) [9]

  • It is important to note that the work presented here does not provide a complete biophysical interpretation of the underlying neural computations observed in the brain. There are a variety of computational models in the literature that reside at different levels of description, with various levels of biological detail
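The spike-to-spike interplay of facilitation and depression described in the highlights is commonly captured by the Tsodyks-Markram formulation of STP, in which a facilitation variable u and a resource variable x jointly set synaptic efficacy at each presynaptic spike. The sketch below illustrates that dynamic; the parameter values (U, tau_f, tau_d) and the function name are illustrative assumptions, not taken from this paper's model.

```python
import math

def stp_efficacies(spike_times, U=0.2, tau_f=0.6, tau_d=0.2):
    """Relative synaptic efficacy (u * x) at each presynaptic spike.

    u: utilisation of synaptic resources (facilitation variable)
    x: fraction of available resources (depression variable)
    Parameter values are illustrative, not from the paper.
    """
    u, x = U, 1.0
    last_t = None
    efficacies = []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            # Between spikes: u relaxes back to U (time constant tau_f),
            # x recovers toward 1 (time constant tau_d).
            u = U + (u - U) * math.exp(-dt / tau_f)
            x = 1.0 + (x - 1.0) * math.exp(-dt / tau_d)
        # At the spike: facilitation increments u, and transmission
        # consumes a fraction u of the available resources x.
        u = u + U * (1.0 - u)
        efficacies.append(u * x)
        x = x * (1.0 - u)
        last_t = t
    return efficacies

# A rapid spike train depletes resources faster than they recover
# (net depression), while widely spaced spikes do not.
fast = stp_efficacies([0.00, 0.01, 0.02, 0.03])  # 100 Hz train
slow = stp_efficacies([0.0, 1.0, 2.0, 3.0])      # 1 Hz train
```

With these illustrative constants, the fast train shows declining efficacy across spikes while the slow train does not, which is the kind of timing-dependent asymmetry a direction-selective microcircuit can exploit.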


Introduction

A seemingly effortless task for humans, recognizing and tracking the direction of visual objects is based on an incredible complexity of brain areas involved in visual processing and attention, as well as learning and memory. In order to elucidate the circuit mechanisms underlying visual perception, mathematical models have been formulated with strong support from electrophysiological data [1]. Due to their usefulness and their predictive ability in driving new neuroscientific discoveries, brain-inspired ANNs have the potential to be implemented in robotic agents in order to further assess their ecological validity [2]. Given that mechanistic models cannot yet capture the full complexity of the nature of perceptual phenomena, the implementation of well-established models from neuroscience into the domain of artificial intelligence opens new avenues for understanding biological networks exposed to real-world stimuli [3]. Previous approaches in modelling the perceptual phenomena of motion have shown successful attempts in incorporating natural visual inputs in networks of spiking neurons [4,5,6]
