Abstract
Feature extraction is a central step of processing Light Detection and Ranging (LIDAR) data. Existing detectors tend to exploit characteristics of specific environments: corners and lines from indoor (rectilinear) environments, and trees from outdoor environments. While these detectors work well in their intended environments, their performance in other environments can be poor. We describe a general-purpose feature detector for both 2D and 3D LIDAR data that is applicable to virtually any environment. Our method adapts classic feature detection methods from the image processing literature, specifically the multi-scale Kanade-Tomasi corner detector. The resulting method is capable of identifying highly stable and repeatable features at a variety of spatial scales without knowledge of the environment, and produces principled uncertainty estimates and corner descriptors at the same time. We present results in both software simulation and on standard datasets, including the 2D Victoria Park and Intel Research Center datasets, and the 3D MIT DARPA Urban Challenge dataset.
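The core response used by the Kanade-Tomasi detector is the smaller eigenvalue of the image's structure tensor, computed over a local window. The sketch below is an illustrative single-scale version only, assuming a plain numpy environment; the multi-scale search, uncertainty estimation, and descriptors described in the abstract are not implemented here.

```python
import numpy as np

def kanade_tomasi_response(img, window=3):
    """Kanade-Tomasi (minimum-eigenvalue) corner response for a 2D image.

    Single-scale sketch; not the paper's full pipeline.
    """
    # Image gradients.
    Iy, Ix = np.gradient(img.astype(float))

    # Sum an array over a square window around each pixel (zero-padded).
    def box_sum(a):
        r = window // 2
        pad = np.pad(a, r, mode="constant")
        out = np.zeros_like(a)
        for i in range(a.shape[0]):
            for j in range(a.shape[1]):
                out[i, j] = pad[i:i + window, j:j + window].sum()
        return out

    # Structure-tensor entries.
    Sxx = box_sum(Ix * Ix)
    Syy = box_sum(Iy * Iy)
    Sxy = box_sum(Ix * Iy)

    # Smaller eigenvalue of [[Sxx, Sxy], [Sxy, Syy]] at each pixel:
    # lambda_min = tr/2 - sqrt((tr/2)^2 - det).
    tr = Sxx + Syy
    det = Sxx * Syy - Sxy * Sxy
    disc = np.sqrt(np.maximum((tr / 2.0) ** 2 - det, 0.0))
    return tr / 2.0 - disc
```

A corner scores high because the gradient varies in two directions inside the window, so both eigenvalues are large; along a straight edge one eigenvalue collapses to zero, which is why this response rejects edges that a gradient-magnitude test would accept.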
Highlights
The solution to the Simultaneous Localization and Mapping (SLAM) problem is commonly seen as a “holy grail” for the mobile robotics community because it would provide the means to make a robot truly autonomous.
SLAM methods that use features have potential advantages over pose-to-pose methods due to the greater sparsity resulting from having landmarks.
While feature-based methods are virtually the only viable approach in vision applications, feature detectors for Light Detection and Ranging (LIDAR) data have generally been designed for specific environments and are of limited general applicability.
Summary
The solution to the Simultaneous Localization and Mapping (SLAM) problem is commonly seen as a “holy grail” for the mobile robotics community because it would provide the means to make a robot truly autonomous. Because of fundamental differences between cameras and LIDARs, extracting features from LIDAR data with extractors from the computer vision field poses several challenges. It becomes non-trivial to determine whether two features encoded as sets of tuples match, and occlusions can create false features, such as when a foreground occluder makes it appear that an abrupt shape change occurs on a background object. To address these challenges, we rasterize LIDAR data by projecting the LIDAR points into a 2D Euclidean space: we render the most conservative contours from the raw LIDAR data, connect adjacent points that ostensibly belong to the same physical object with straight lines, render those lines with a Gaussian low-pass filter kernel, draw comet tails for each contour, and reject false positives induced by occlusion.
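The first step of the pipeline above, projecting LIDAR returns into a 2D grid and rendering them with a Gaussian kernel, can be sketched as follows. This is a minimal illustration under simplifying assumptions (a fixed-size grid centred on the sensor, isolated points splatted directly rather than along contour lines); the conservative contours, comet tails, and occlusion rejection are omitted.

```python
import numpy as np

def rasterize_scan(ranges, angles, resolution=0.1, sigma=0.15, size=64):
    """Rasterize 2D LIDAR returns (polar coordinates) into a grid image,
    splatting each point with a Gaussian kernel of std. dev. `sigma` (m).

    `resolution` is metres per cell; the sensor sits at the grid centre.
    Sketch only: contour rendering and occlusion handling are omitted.
    """
    # Polar -> Cartesian coordinates of each return.
    xs = ranges * np.cos(angles)
    ys = ranges * np.sin(angles)

    # Metric coordinates of the grid cells.
    half = size * resolution / 2.0
    coords = np.linspace(-half, half, size)
    gx, gy = np.meshgrid(coords, coords)  # gx varies along columns

    # Accumulate one Gaussian splat per return.
    img = np.zeros((size, size))
    for x, y in zip(xs, ys):
        img += np.exp(-((gx - x) ** 2 + (gy - y) ** 2) / (2.0 * sigma ** 2))
    return img
```

Rendering with a Gaussian kernel rather than marking single cells is what makes the subsequent image-space corner detection well behaved: it low-pass filters the data so that small point-position jitter does not create spurious gradients.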