Abstract

2D laser range finders are widely used in mobile robot navigation, but their use is limited to simple environments containing objects of regular geometry and shape. Stereo vision, in contrast, provides 3D structural data of complex objects. In this paper, measurements from a stereo vision camera system and a 2D laser range finder are fused to dynamically plan paths and navigate a mobile robot in cluttered and complex environments. A robust estimator detects obstacles and the ground plane in a 3D world model in front of the robot, based on disparity information from the stereo vision system. From this 3D world model, a 2D cost map is generated; a separate 2D cost map is generated from the 2D laser range finder. A grid-based occupancy map approach is then used to fuse the complementary information provided by the 2D laser range finder and the stereo vision system. Since the two sensors may detect different parts of an object, two different fusion strategies are addressed here. The final occupancy grid map is used simultaneously for obstacle avoidance and path planning. Experimental results obtained from a Point Grey Bumblebee stereo camera and a SICK LD-OEM laser range finder mounted on a PackBot robot demonstrate the effectiveness of the proposed laser and stereo vision fusion strategy for mobile robot navigation.
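
The abstract does not spell out the two fusion rules it refers to. The sketch below is a minimal illustration, not the paper's method, of two common ways to fuse per-cell occupancy probabilities from two 2D grids: an independent-evidence Bayesian update in log-odds space, and a conservative per-cell maximum. The function names (fuse_log_odds, fuse_max) and the toy maps are hypothetical.

```python
import numpy as np

def fuse_log_odds(p_laser, p_stereo, p_prior=0.5):
    """Bayesian fusion: treat the two sensors as independent evidence
    about the same cell and combine them in log-odds space."""
    def log_odds(p):
        return np.log(p / (1.0 - p))
    l = log_odds(p_laser) + log_odds(p_stereo) - log_odds(p_prior)
    # Convert back from log-odds to probability (logistic sigmoid).
    return 1.0 - 1.0 / (1.0 + np.exp(l))

def fuse_max(p_laser, p_stereo):
    """Conservative fusion: a cell is as occupied as the more
    pessimistic sensor claims. This is one plausible choice when the
    sensors see different parts of the same obstacle (e.g. the laser
    hits a table's legs while stereo sees its top)."""
    return np.maximum(p_laser, p_stereo)

# Hypothetical 2x2 cost maps with occupancy probabilities in [0, 1].
laser_map  = np.array([[0.1, 0.9],
                       [0.5, 0.2]])
stereo_map = np.array([[0.8, 0.9],
                       [0.5, 0.1]])

print(fuse_log_odds(laser_map, stereo_map))
print(fuse_max(laser_map, stereo_map))
```

The Bayesian rule reinforces cells where both sensors agree and can lower cost where one sensor reports free space, while the maximum rule never discards an obstacle reported by either sensor; which behavior is preferable depends on how much the sensors' fields of view overlap.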
