Abstract

Alignment of sensor data, typically acquired from cameras, laser range scanners, or sonar sensors, is the basis for all robot mapping tasks. Recent advances in laser range devices have made laser range alignment a focus of robot mapping research. In contrast to cameras, laser range scanners offer relatively precise depth information, yet the feature density is relatively sparse. Since alignment algorithms are based on feature correspondence, a lack of features naturally causes problems. One way to approach this problem is to cover the area with a high number of scans, such that subsequent scans have only a low relative displacement. This guarantees sufficient scan overlap and reliable detection of feature correspondences. Though this approach is feasible for many mapping applications, it cannot be assumed for an important field of robotics, namely Urban Search and Rescue Robotics (rescue robots), and especially the setting of multi-robot mapping. In multi-robot mapping, a number of robots scan the environment independently, without reliable knowledge of their relative positions. Additional sensors, like GPS, cannot be assumed due to the nature of the environment. Non-autonomous rescue robots were deployed, for example, after the 9/11 attack to assist in the search for victims in the collapsed towers. In such an environment, GPS is not available because of the massive concrete walls surrounding the robots. The task of multi-robot mapping in rescue environments imposes especially challenging constraints:
• no precise or reliable odometry can be assumed, which means in particular that the robots' relative poses are unknown
• due to the nature of catastrophe scenarios, no distinct landmarks are given
• the overlap between pairs of the robots' scans is minimal
Figure 1 shows 12 out of 60 single scans from multiple robots, taken in a disaster test area at NIST, Gaithersburg, MD. Even for humans it is hard to detect overlapping features.
Our approach to the alignment of such a data set is to first give a rough estimate of the robots' poses, called the pre-alignment, and then to improve the resulting map. This article deals with the second step, the improvement; see figure 2. It introduces a new process, called 'Force Field Simulation' (FFS), which is tailored to align maps under the aforementioned constraints. FFS is motivated by the simulation of the dynamics of rigid bodies in gravitational fields, but replaces the laws of physics with constraints derived from human perception. It is an approach of the family of gradient descent methods.
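The core idea behind such a force-field alignment — scans exert attracting forces on one another, and each scan is iteratively translated and rotated along the resulting force gradient — can be illustrated with a toy sketch. The following Python code is illustrative only: it assumes a simplified formulation (one moving scan against a fixed anchor scan, Gaussian-weighted soft correspondences instead of perceptually derived constraints) and is not the paper's actual FFS algorithm; the function name and parameters are invented for this example.

```python
import numpy as np

def ffs_step(scan, anchor, sigma=1.0, step=0.1):
    """One toy gradient-descent step of force-field-style alignment.

    Each point of the moving `scan` (n x 2) is pulled toward the fixed
    `anchor` scan (m x 2). The pull weight decays with a Gaussian of
    width `sigma`, so distant (likely non-corresponding) points have
    little influence. The mean force translates the scan; the net
    torque about its centroid rotates it.
    """
    # pairwise displacements anchor_j - scan_i, shape (n, m, 2)
    diff = anchor[None, :, :] - scan[:, None, :]
    dist2 = np.sum(diff ** 2, axis=2)
    w = np.exp(-dist2 / (2.0 * sigma ** 2))            # Gaussian weights

    # per-point force: weighted mean displacement (soft correspondence)
    forces = (w[:, :, None] * diff).sum(axis=1)
    forces /= (w.sum(axis=1)[:, None] + 1e-12)

    # rotation: z-component of the total torque about the scan centroid
    centroid = scan.mean(axis=0)
    r = scan - centroid
    torque = np.sum(r[:, 0] * forces[:, 1] - r[:, 1] * forces[:, 0])
    theta = step * torque / len(scan)

    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    rotated = (scan - centroid) @ R.T + centroid

    # translation: small step along the mean force
    return rotated + step * forces.mean(axis=0)
```

Iterating this step drives the moving scan toward the anchor as long as the initial displacement is within the attraction range set by `sigma` — which mirrors the paper's premise that a pre-alignment is needed before the improvement step can succeed.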
