Abstract

Robotic collision detection is a complex task that requires both real-time data acquisition and the extraction of relevant features from a captured image. To accomplish this task, the algorithms used must process the captured data quickly and make decisions in real time. Real-time collision detection in dynamic scenarios is difficult when the algorithms rely on conventional computer-vision techniques, since these are computationally complex and, consequently, time-consuming, especially on small robotic devices with limited computational resources. Neurorobotic models, on the other hand, may provide a foundation for the development of more effective and autonomous robots, based on an improved understanding of the biological basis of adaptive behaviour. In particular, our approach is inspired by simple neural systems, which require only a small amount of neural hardware to perform complex behaviours and, consequently, make it easier to understand the mechanisms behind those behaviours. For this reason, flying insects are particularly attractive sources of inspiration: their behaviours are complex and efficient, yet they rely on a comparatively small neural system. The Lobula Giant Movement Detector (LGMD) is a wide-field visual neuron located in the locust optic lobe. It responds selectively to looming objects and can trigger avoidance reactions when a rapidly approaching object is detected. Based on the relatively simple encoding strategy of the LGMD neuron, different bio-inspired neural networks for collision avoidance have been developed. In the work presented in this chapter, we propose a new LGMD model that builds on two previous models and improves on them by incorporating additional features. To accomplish this goal, we proceed as follows: (1) we critically analyse different LGMD models proposed in the literature; (2) we highlight the convergence or divergence of the results obtained with each model; (3) we merge the advantages and disadvantages of each model into a new one. To assess the real-time properties of the proposed model, it was applied to a real robot. The results obtained show the capability and robustness of the LGMD model in preventing collisions in complex visual scenarios.
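To make the "relatively simple encoding strategy" mentioned above concrete, the sketch below illustrates the general structure shared by LGMD-inspired collision-detection networks: an excitation layer driven by luminance change between consecutive frames, a delayed lateral-inhibition layer, a summation layer, and a sigmoid membrane potential compared against a spiking threshold. This is a minimal illustrative sketch, not the model proposed in the chapter; the layer names, inhibition kernel, weighting factor, and threshold value are assumptions chosen for clarity.

```python
# Minimal sketch of an LGMD-style looming detector (illustrative assumptions only).
import numpy as np
from scipy.ndimage import convolve

def lgmd_step(frame, prev_frame, prev_excitation, spike_threshold=0.88):
    """Process one pair of grey-scale frames.

    Returns (membrane_potential, spike_flag, excitation) so the caller can feed
    the excitation back in as `prev_excitation` on the next step.
    """
    # P layer: luminance change between consecutive frames (excitation).
    p = np.abs(frame.astype(float) - prev_frame.astype(float))

    # I layer: lateral inhibition, modelled here as the previous frame's
    # excitation spread over neighbouring cells (one-frame delay).
    kernel = np.array([[0.125, 0.25, 0.125],
                       [0.25,  0.0,  0.25],
                       [0.125, 0.25, 0.125]])
    inhibition = convolve(prev_excitation, kernel, mode='constant')

    # S layer: excitation minus weighted inhibition, rectified at zero.
    s = np.maximum(p - 2.0 * inhibition, 0.0)

    # Membrane potential: summed activity normalised by the number of cells,
    # then passed through a sigmoid so the output lies in (0.5, 1).
    k = s.sum() / s.size
    membrane_potential = 1.0 / (1.0 + np.exp(-k))

    # A spike (imminent collision) is signalled when the potential exceeds threshold.
    return membrane_potential, membrane_potential > spike_threshold, p
```

In a robot control loop this function would be called once per camera frame, and a run of consecutive spikes would trigger the avoidance manoeuvre; the proposed model in the chapter refines this basic pipeline rather than replacing it.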
