Abstract

This article addresses 2D indoor environment mapping using a sensor data fusion algorithm. A measurement subsystem mounted on a mobile robot gathers data from its surroundings, and an algorithm is proposed and evaluated to fuse data acquired by the following low-cost sensors: a visual sensor (a wireless webcam with a laser pointer) and three range-finder sensors (two infrared units and a sonar transducer). The main steps of the proposed solution are: (a) at each known robot pose, an occupancy grid (OG) probabilistic map is generated for each type of sensor; (b) the three OG maps are merged using competitive fusion, and the RANSAC algorithm is employed to extract line segments; (c) the line segments are further processed using competitive and/or complementary fusion, resulting in a feature-based map for that pose. These steps are repeated until all robot poses have been processed. The feature-based maps from all poses are then merged using competitive fusion, and a final precise OG map of the environment is generated. The proposed algorithm was evaluated using data gathered by the SLAMVITA robot sensors in two small test environments. The experimental results show that the algorithm was able to build precise maps of the test environments, with an error below 1.5%.
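To make steps (a) and (b) concrete, the sketch below illustrates one plausible realization: cell-wise competitive fusion of per-sensor occupancy grids via log-odds combination, followed by RANSAC extraction of a dominant line segment from the occupied cells. The abstract does not specify the fusion rule, thresholds, or cell size, so the fusion rule, all function names, and all parameter values here are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch, assuming: log-odds competitive fusion, a 0.7 occupancy
# threshold, 10 cm grid cells, and a 5 cm RANSAC inlier tolerance.
# None of these choices are stated in the paper's abstract.
import numpy as np

def fuse_grids_log_odds(grids):
    """Competitively fuse occupancy-probability grids cell by cell.

    Each grid holds P(occupied) in [0, 1]. Summing log-odds is one
    standard (assumed) rule that lets confident sensors dominate.
    """
    eps = 1e-6
    log_odds = sum(np.log((g + eps) / (1.0 - g + eps)) for g in grids)
    return 1.0 / (1.0 + np.exp(-log_odds))  # back to probability

def ransac_line(points, n_iters=200, inlier_tol=0.05, rng=None):
    """Fit one dominant line to 2D points with RANSAC.

    Returns (point_on_line, unit_direction, inlier_mask).
    """
    rng = rng or np.random.default_rng(0)
    best_pt = best_dir = best_mask = None
    best_count = 0
    for _ in range(n_iters):
        # Sample two distinct points as the candidate line.
        i, j = rng.choice(len(points), size=2, replace=False)
        d = points[j] - points[i]
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        normal = np.array([-d[1], d[0]])  # perpendicular to the line
        # Perpendicular distance of every point to the candidate line.
        dist = np.abs((points - points[i]) @ normal)
        mask = dist < inlier_tol
        if mask.sum() > best_count:
            best_count, best_mask = int(mask.sum()), mask
            best_pt, best_dir = points[i], d
    return best_pt, best_dir, best_mask

# Usage: fuse three per-sensor grids, threshold to occupied cells,
# convert cell indices to metres (assumed 10 cm cells), extract a line.
g_cam, g_ir, g_sonar = (np.random.rand(50, 50) for _ in range(3))
fused = fuse_grids_log_odds([g_cam, g_ir, g_sonar])
occupied = np.argwhere(fused > 0.7).astype(float) * 0.1
pt, direction, inliers = ransac_line(occupied)
```

In a full pipeline, the extracted segments would then be fused across sensors and poses (steps (c) onward); log-odds fusion is used here only because it is a common Bayesian choice for occupancy grids, not because the paper confirms it.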


