Abstract
Like most visually guided animals, the crab Neohelice granulata relies predominantly on visual information to escape from predators, track prey, and select mates. It therefore needs specialized neurons to process visual information and determine the spatial location of looming objects. In this crab, the Monostratified Lobula Giant type 1 (MLG1) neurons have been found to manifest looming sensitivity with finely tuned capabilities for encoding spatial location information. The MLG1 neuronal ensemble can not only perceive the location of a looming stimulus but is also thought to continuously influence the direction of movement, for example when escaping from a threatening, looming target in relation to its position. Such specific characteristics make the MLG1s unique compared to typical looming-detection neurons in invertebrates, which cannot localize looming stimuli spatially. Modeling the MLG1 ensemble is not only critical for elucidating the mechanisms underlying the functionality of such neural circuits, but also important for developing new autonomous, efficient, directionally reactive collision-avoidance systems for robots and vehicles. However, little computational modeling has been done to implement looming spatial localization analogous to the specific functionality of the MLG1 ensemble. To bridge this gap, we propose a model of the MLG1s and their pre-synaptic visual neural network to detect the spatial location of looming objects. The model consists of 16 homogeneous sectors arranged in a circular field, inspired by the natural arrangement of the 16 MLG1 receptive fields, to encode and convey spatial information concerning looming objects with dynamically expanding edges in different locations of the visual field. Responses of the proposed model to systematic real-world visual stimuli match many of the biological characteristics of MLG1 neurons.
Systematic experiments demonstrate that our proposed MLG1 model works effectively and robustly to perceive and localize looming information, making it a promising candidate for enabling intelligent machines to interact with dynamic environments free of collision. This study also sheds light on a new neuromorphic visual-sensor strategy that can extract looming objects together with their locational information in a quick and reliable manner.
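The circular 16-sector arrangement described above can be illustrated with a minimal sketch: each sector subtends 22.5 degrees around the visual-field centre, analogous to one MLG1 receptive field, and the activation pattern across sectors encodes the bearing of an expanding edge. The function names, the point-based edge representation, and the uniform angular partition are illustrative assumptions, not the authors' actual implementation.

```python
import math

NUM_SECTORS = 16  # one sector per modeled MLG1 neuron (per the paper's layout)


def sector_of(x, y, cx, cy):
    """Map an image point to one of 16 angular sectors around the
    visual-field centre (cx, cy). Sector 0 starts at angle 0 (rightward),
    and indices increase counter-clockwise. Hypothetical sketch only."""
    angle = math.atan2(y - cy, x - cx) % (2.0 * math.pi)
    return int(angle / (2.0 * math.pi / NUM_SECTORS))


def sector_activations(edge_points, cx, cy):
    """Accumulate per-sector counts for a set of expanding-edge points.
    The most active sector then gives a coarse bearing of the looming
    object, loosely mimicking how the MLG1 ensemble encodes location."""
    counts = [0] * NUM_SECTORS
    for x, y in edge_points:
        counts[sector_of(x, y, cx, cy)] += 1
    return counts
```

For example, edge points concentrated to the upper-left of the centre would drive sectors around index 5-6, signaling a looming object in that region of the visual field.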
Highlights
How to improve collision detection and avoidance remains a critical challenge for self-navigating robots, vehicles, and unmanned aerial vehicles (UAVs)
We have developed a computational model of Monostratified Lobula Giant type 1 (MLG1) neurons and their pre-synaptic network to simulate the spatial-localization functionality of the crab's looming-sensitive neurons
The most obvious difference from other invertebrate looming detectors is that the locust has only one LGMD1 neuron covering the entire view of the eye, whereas the crab Neohelice granulata has 16 MLG1 neurons
Summary
How to improve collision detection and avoidance remains a critical challenge for self-navigating robots, vehicles, and unmanned aerial vehicles (UAVs). Evasion strategies might be improved if mobile machines could obtain and react to the spatial positions of continually approaching objects. Current schemes, such as radar, infrared, and laser ranging, or combinations of these, are acceptable but far from perfect in terms of reliability, system complexity, or energy consumption. Vision-based sensors are more ubiquitous and often come with compact hardware, but current vision-based sensors are still not sufficiently reliable to detect imminent collisions under many conditions. A new type of compact and energy-efficient vision sensor is required for collision detection in future robots and autonomous vehicles.