Abstract

Obstacle detection, avoidance, and path finding for autonomous vehicles require precise information about the vehicle's environment for accurate navigation and decision making. As such, vision and depth-perception sensors have become an increasingly integral part of autonomous vehicle research and development. The advancements made in vision sensors such as radars, Light Detection and Ranging (LIDAR) sensors, and compact high-resolution cameras are encouraging; however, individual sensors can be prone to error or misinformation due to environmental factors such as scene illumination, object reflectivity, and object transparency. Sensor fusion, the combination of multiple sensors perceiving similar or related information over a network, is applied to provide more robust and complete system information and to minimize the overall perceived error of the system. The primary objective of this work is to implement a smart and robust sensor fusion system that uses a 2D LIDAR and a stereo depth camera to capture depth and color information of an environment. The depth points generated by the LIDAR are fused with the depth map generated by the stereo camera by a fuzzy system that performs smart fusion and corrects gaps in the stereo camera's depth information. The results show that the output of the proposed fuzzy fusion algorithm provides higher depth confidence than either individual sensor can provide on its own.
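
As a rough illustration of the fusion step described above, the sketch below fuses a stereo depth estimate and a LIDAR depth reading through a small fuzzy rule base. The abstract does not specify the paper's membership functions, rule base, or confidence inputs, so the triangular memberships, the LOW/HIGH rules, and the stereo_conf/lidar_conf inputs here are all illustrative assumptions, not the authors' method.

# Illustrative sketch only: membership shapes, rules, and confidence
# inputs are assumptions for demonstration; the paper's actual fuzzy
# system is not described in the abstract.
import numpy as np

def tri(x, a, b, c):
    # Triangular membership function with feet at a and c, peak at b.
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def fuzzy_fuse(stereo_depth, stereo_conf, lidar_depth, lidar_conf):
    # stereo_conf / lidar_conf are normalized confidences in [0, 1],
    # e.g. derived from disparity texture for the stereo pair and
    # return intensity for the LIDAR (hypothetical inputs).
    # Fuzzify each confidence into LOW / HIGH memberships.
    s_low, s_high = tri(stereo_conf, -0.5, 0.0, 0.7), tri(stereo_conf, 0.3, 1.0, 1.5)
    l_low, l_high = tri(lidar_conf, -0.5, 0.0, 0.7), tri(lidar_conf, 0.3, 1.0, 1.5)

    # Assumed rule base: trust the sensor whose confidence is HIGH;
    # when both are HIGH (or both LOW), blend the two readings equally.
    w_lidar = l_high * s_low + 0.5 * l_high * s_high + 0.5 * l_low * s_low
    w_stereo = s_high * l_low + 0.5 * l_high * s_high + 0.5 * l_low * s_low

    # Defuzzify with a weighted average of the two crisp depths.
    total = w_lidar + w_stereo
    fused = (w_lidar * lidar_depth + w_stereo * stereo_depth) / np.maximum(total, 1e-9)
    return np.where(total > 0, fused, stereo_depth)

if __name__ == "__main__":
    # A pixel where the stereo map has a gap (low confidence) but the
    # LIDAR return is reliable: the fused depth leans on the LIDAR.
    print(fuzzy_fuse(stereo_depth=0.0, stereo_conf=0.1,
                     lidar_depth=4.2, lidar_conf=0.9))  # ~4.2 m

The weighted-average defuzzification here merely stands in for whatever defuzzification the authors use; the point is only that low stereo confidence shifts the fused depth toward the LIDAR reading, which is how gaps in the stereo depth map get corrected.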
