Abstract

Biological visual systems contain large numbers of motion-detection neurons, some of which respond preferentially to specific visual regions. Nevertheless, little work has examined how such neurons can be used to develop neural network models for omnidirectional collision detection. Here, an artificial fly-visual-brain neural network with presynaptic and postsynaptic subnetworks is developed, for the first time, to detect visual-motion changes in panoramic scenes. The presynaptic subnetwork, modeled on the preferential response characteristics of five fly visual neurons, responds to all moving objects in the panoramic field; the postsynaptic subnetwork, based on the properties of the angle- and height-detection neurons in the fly brain, pools the excitatory intensities of the visual neurons and outputs the real-time activity of the main object closest to the panoramic camera. A computational model is then constructed on top of this artificial visual brain neural network and three functional neurons to perform omnidirectional collision detection. Theoretical analysis verifies that the model's computational complexity depends mainly on the input image resolution. Three experimental conclusions can be drawn: (i) the motion characteristics of the main object in the panoramic environment are clearly exhibited in the neural network; (ii) the collision-detection model not only outperforms the compared models but also successfully performs omnidirectional collision detection; and (iii) it takes about 0.24 s to process the visual information of each frame at a resolution of 120 × 80.
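The two-stage pipeline the abstract outlines (a presynaptic stage that responds to all motion in the panoramic field, followed by a postsynaptic stage that pools excitation and reports the angular position and height of the nearest object) can be illustrated with a minimal sketch. This is not the authors' model: the five direction channels, the gradient-based motion approximation, and all names (presynaptic_excitation, postsynaptic_readout) are assumptions made purely for illustration.

import numpy as np

def presynaptic_excitation(prev_frame, frame):
    """Presynaptic stage (sketch): respond to all motion in the panoramic
    frame. A temporal difference is split into five hypothetical channels
    standing in for the five fly visual neurons; the directional channels
    are crude elementary-motion-detector stand-ins, not the paper's model."""
    diff = frame.astype(float) - prev_frame.astype(float)
    gx = np.zeros_like(diff); gx[:, 1:] = np.diff(diff, axis=1)
    gy = np.zeros_like(diff); gy[1:, :] = np.diff(diff, axis=0)
    return {
        "right":  np.maximum(gx, 0.0),
        "left":   np.maximum(-gx, 0.0),
        "down":   np.maximum(gy, 0.0),
        "up":     np.maximum(-gy, 0.0),
        "expand": np.abs(diff),  # assumed looming/approach channel
    }

def postsynaptic_readout(channels):
    """Postsynaptic stage (sketch): pool the excitatory intensities of the
    channels and report the azimuth and height of the strongest (taken here
    as the nearest) object, mimicking an angle- and height-detection readout."""
    total = sum(channels.values())
    y, x = np.unravel_index(np.argmax(total), total.shape)
    h, w = total.shape
    azimuth = 360.0 * x / w   # panoramic column mapped to an angle in degrees
    height = y / h            # normalized image height
    return azimuth, height, total[y, x]

# Synthetic usage: a patch brightens between two random panoramic frames,
# and the readout localizes it by azimuth and height.
rng = np.random.default_rng(0)
f0 = rng.random((80, 120))
f1 = f0.copy(); f1[30:40, 50:60] += 0.5
az, ht, activity = postsynaptic_readout(presynaptic_excitation(f0, f1))

Since both stages visit each pixel a constant number of times, the sketch's cost grows linearly with the pixel count, consistent with the abstract's claim that complexity depends mainly on the input image resolution.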
