Abstract

SLAM (Simultaneous Localization and Mapping) technology has been widely used to collect location and environment information for indoor mobile robots. SLAM systems typically rely on a single LiDAR (Light Detection and Ranging) sensor, which makes them vulnerable to complex terrain and limits their ability to distinguish between objects. A possible way to overcome this problem is data fusion of LiDAR and depth-camera measurements. This paper presents a novel data fusion technique that combines LiDAR data with 3D point cloud data to estimate the locations of surrounding objects. In the proposed technique, object location data are extracted in real time from 3D point cloud images using region-based segmentation. The effectiveness of the proposed algorithm is demonstrated through a set of experiments based on ROS (Robot Operating System).
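As a rough illustration of the kind of region-based segmentation the abstract refers to, the sketch below groups 3D points into regions by Euclidean region growing (points within a distance threshold of any member of a region are absorbed into it). This is a minimal, hypothetical example with a synthetic point cloud and an assumed distance-threshold criterion; the paper's actual segmentation method and parameters may differ.

```python
import numpy as np
from collections import deque

def region_segment(points, radius=0.5):
    """Label 3D points by region growing: any point within `radius`
    of a region member is added to that region (BFS expansion)."""
    n = len(points)
    labels = np.full(n, -1, dtype=int)  # -1 means "not yet assigned"
    region = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        labels[seed] = region
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            # unlabeled points within `radius` of point i join the region
            d = np.linalg.norm(points - points[i], axis=1)
            for j in np.where((d <= radius) & (labels == -1))[0]:
                labels[j] = region
                queue.append(j)
        region += 1
    return labels

# Synthetic cloud: two well-separated clusters of points
pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.2, 0.1, 0.0],
                [5.0, 5.0, 0.0], [5.1, 5.0, 0.0]])
labels = region_segment(pts, radius=0.5)
print(labels)  # → [0 0 0 1 1]
```

Each resulting region can then be treated as one candidate object whose centroid gives an estimated object location, which is the role segmentation plays in the fusion pipeline described above.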
