Abstract

Processing 3D point cloud data is of primary interest in many areas of computer vision, including object grasping, robot navigation, and object recognition. The introduction of affordable RGB-D sensors has generated great interest in the computer vision community in developing efficient algorithms for point cloud processing. Previously, capturing a point cloud required expensive specialized sensors such as lasers or dedicated range imaging devices; now, range data is readily available from low-cost sensors whose depth maps yield easily extractable point clouds. An interesting challenge is then to find different objects in the point cloud, and various descriptors have been introduced to match features in a point cloud. However, cheap sensors are not necessarily designed to produce precise measurements, so their data is less accurate than a point cloud provided by a laser or a dedicated range finder. Although some feature descriptors have been shown to recognize objects from point clouds successfully, there is still room for improvement. The aim of this paper is to introduce techniques from other fields, such as image processing, into 3D point cloud processing in order to improve rendering, classification, and recognition. Covariances have proven successful not only in image processing but in other domains as well. This work develops the application of covariances to 3D point cloud data.
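The abstract does not specify the paper's exact construction, but the general idea of a covariance descriptor can be sketched as follows: given a local neighborhood of points, stack a feature vector for each point (here, simply its x/y/z coordinates plus an assumed intensity channel, both chosen for illustration) and take the sample covariance of those features as a compact, fixed-size descriptor of the region.

```python
import numpy as np

def covariance_descriptor(features):
    """Covariance descriptor of a point neighborhood.

    features: (n_points, d) array, one feature vector per point.
    Returns the (d, d) sample covariance matrix, a symmetric
    positive semi-definite summary of the neighborhood.
    """
    mu = features.mean(axis=0)          # mean feature vector
    centered = features - mu            # center each feature
    return centered.T @ centered / (features.shape[0] - 1)

# Toy neighborhood: 5 points, each with (x, y, z, intensity) features.
# Real descriptors would use features extracted from sensor data.
rng = np.random.default_rng(0)
neighborhood = rng.normal(size=(5, 4))
C = covariance_descriptor(neighborhood)
print(C.shape)  # (4, 4)
```

Note that the descriptor's size depends only on the feature dimension, not on the number of points, which is one reason covariance descriptors are attractive for irregularly sampled point clouds.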
