Abstract

Some fundamental visual features are known to be fully extracted before the signal reaches the cerebral cortex. We focus on direction-selective ganglion cells (DSGCs), which sit at the terminal end of the retinal pathway, at the very front of the visual system. By exploiting a layered pathway composed of several relevant cell types in the early retina, DSGCs can extract multiple motion directions occurring in the visual field. However, despite extensive research spanning individual cells to circuit structures, no definitive account of the specific details of the underlying mechanism has been reached. In this paper, building on key conclusions from neuroscience research, we propose a complete quantitative model of the retinal motion direction selection pathway and explain how global motion direction information is conveyed from DSGCs to the cortex using a simple spiking neural mechanism; we refer to the resulting model as the artificial visual system (AVS). We conduct extensive testing on one million two-dimensional, eight-direction binary object motion instances spanning 10 object sizes and random object shapes. We also evaluate AVS's noise resistance and generalization performance by introducing random static and dynamic noise. Furthermore, to thoroughly validate AVS's efficiency, we compare its performance with two widely used deep learning models (LeNet-5 and EfficientNetB0) on all tests. The experimental results show that, owing to its highly biomimetic design, AVS achieves outstanding performance in motion direction detection. AVS also offers biomimetic computing advantages in terms of hardware implementation, learning difficulty, and parameter count.
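The abstract does not give implementation details of AVS, so purely as an illustration of the task it describes (eight-direction binary object motion, with a global decision aggregated from many local detectors), the following is a minimal shift-and-compare sketch in Python. All function names and the matching rule are assumptions made for illustration; they are not the paper's actual DSGC or AVS model.

```python
# Illustrative sketch only: a toy eight-direction local motion detector plus a
# global vote, loosely inspired by the DSGC-to-cortex idea in the abstract.
# The shift-and-compare rule below is an assumption, not the paper's AVS model.
import numpy as np

# The eight canonical directions as (row, col) offsets.
DIRECTIONS = {
    "E":  (0, 1),  "NE": (-1, 1), "N": (-1, 0), "NW": (-1, -1),
    "W":  (0, -1), "SW": (1, -1), "S": (1, 0),  "SE": (1, 1),
}

def shift(frame: np.ndarray, dr: int, dc: int) -> np.ndarray:
    """Shift a binary frame by (dr, dc), padding with zeros."""
    out = np.zeros_like(frame)
    h, w = frame.shape
    rs, re = max(dr, 0), h + min(dr, 0)
    cs, ce = max(dc, 0), w + min(dc, 0)
    out[rs:re, cs:ce] = frame[rs - dr:re - dr, cs - dc:ce - dc]
    return out

def local_direction_votes(prev: np.ndarray, curr: np.ndarray) -> dict:
    """Score each direction by how well the previous frame, shifted that way,
    overlaps the current frame (a crude stand-in for local direction cells)."""
    return {name: int(np.sum(shift(prev, dr, dc) & curr))
            for name, (dr, dc) in DIRECTIONS.items()}

def global_direction(prev: np.ndarray, curr: np.ndarray) -> str:
    """Pick the direction with the most local support (majority-style vote)."""
    votes = local_direction_votes(prev, curr)
    return max(votes, key=votes.get)

if __name__ == "__main__":
    # A small binary object moving one pixel eastward between two frames.
    prev = np.zeros((16, 16), dtype=np.uint8)
    prev[6:9, 6:9] = 1
    curr = shift(prev, 0, 1)
    print(global_direction(prev, curr))  # expected: "E"
```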
