Abstract

Recent advances in 3D measurement technology, especially 3D laser scanners and RGB-D sensors like the Microsoft Kinect, have made 3D point clouds readily accessible on mobile robots. Together with efficient SLAM algorithms, it is now possible to generate 3D point clouds of large environments such as whole buildings or even cities at high speed and low cost. The problem is that these point clouds are usually not a suitable representation for classic robotic tasks like localization, let alone more sophisticated problems like scene interpretation. This thesis presents methods to create polygonal environment representations that can be used for semantic mapping and object recognition.
