Abstract

The task of 3D mapping indoor environments in Search and Rescue missions can be very useful in providing detailed spatial information to human teams. This can be accomplished using field robots equipped with sensors capable of obtaining depth and color data, such as those provided by the Kinect sensor. Several methods have been proposed in the literature to address the problem of automatic 3D reconstruction from depth data. Most methods rely on the minimization of the matching error among individual depth frames. However, ambiguity in sensor data often leads to erroneous matches (due to local minima), which are hard to cope with in a purely automatic approach. This paper targets 3D reconstruction from RGB-D data and proposes a semi-automatic approach, denoted Interactive Mapping, which involves a human operator in the process of detecting and correcting erroneous matches. Instead of allowing the operator complete freedom to correct the matching on a frame-by-frame basis, the proposed method constrains human intervention to the degrees of freedom with the most uncertainty. The user is able to translate and rotate individual RGB-D point clouds, aided by a force-field-like reaction to the movement of each point cloud. A dataset was acquired using a Kinect mounted on the tracked wheel robot RAPOSA-NG, developed for Search and Rescue missions. Some preliminary results are presented, illustrating the advantages of the method.
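The core mechanism described above, restricting operator corrections to the least-certain degrees of freedom while a force-field-like reaction resists motion along well-constrained ones, can be illustrated with a short sketch. This is not the paper's implementation: it is a minimal illustration assuming the 6x6 information matrix (an approximate Hessian) of the pairwise matching cost is available from the automatic registration step, and all names below are hypothetical.

```python
import numpy as np

def constrained_correction(delta, info, k=2):
    """Project a 6-DoF user correction (3 translation + 3 rotation)
    onto the k most uncertain directions, i.e. the eigenvectors of
    the information matrix with the smallest eigenvalues."""
    w, V = np.linalg.eigh(info)   # eigenvalues in ascending order
    U = V[:, :k]                  # least-constrained directions
    return U @ (U.T @ delta)      # orthogonal projection of the input

def reaction_force(delta, info, stiffness=1.0):
    """Force-field-like resistance: under a quadratic model of the
    matching cost, the restoring force is -H @ delta, so motion along
    well-constrained DoF is strongly opposed while uncertain DoF
    remain easy for the operator to adjust."""
    return -stiffness * (info @ delta)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 6))
    info = A @ A.T + 1e-3 * np.eye(6)   # stand-in SPD information matrix
    user_delta = rng.standard_normal(6) # raw operator adjustment
    print(constrained_correction(user_delta, info))
    print(reaction_force(user_delta, info))
```

Under this quadratic approximation, the operator effectively feels "free" only along directions in which the automatic matching is ambiguous, which matches the constrained-intervention idea stated in the abstract.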
