Abstract

We describe a new toolset for the manipulation and analysis of ray clouds (3D maps defined by the set of rays from a moving lidar to the scanned surfaces). Unlike point clouds, ray clouds contain information on free space (air) as well as surface geometry, which allows the toolset to perform volumetric functions and analysis that cannot be done on point clouds alone. The presented toolset consists of seventeen command-line functions, with a C++ library available for those who require more control or tighter integration. Our aim is for RayCloudTools to be as useful and as simple as possible; we use this paper to demonstrate its utility and to assess its ease of use in comparison with established cloud-processing libraries.
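To make the central data structure concrete, the following is a minimal illustrative sketch of what a ray cloud stores: per lidar return, both the sensor origin and the surface hit point, so the free space traversed by each beam is preserved alongside the geometry. This is not the RayCloudTools C++ API; all names here are assumptions for illustration only.

```cpp
#include <cstddef>
#include <vector>

// Illustrative only -- not the RayCloudTools API.
struct Vec3 { double x, y, z; };

// A ray cloud: each entry records where the sensor was (start), where the
// beam terminated on a surface (end), and when it was acquired (time).
// The start->end segment is what encodes the observed free space.
struct RayCloud {
  std::vector<Vec3> starts;   // sensor position when each ray was emitted
  std::vector<Vec3> ends;     // surface point the ray terminated on
  std::vector<double> times;  // acquisition time per ray

  std::size_t size() const { return ends.size(); }

  void addRay(const Vec3 &start, const Vec3 &end, double time) {
    starts.push_back(start);
    ends.push_back(end);
    times.push_back(time);
  }
};
```

A plain point cloud keeps only the `ends` array; carrying `starts` as well is what enables the volumetric operations the paper describes.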

Highlights

  • Robotic perception has come a long way with the advent of spinning lidar, depth cameras and computer vision systems for generating 3D maps directly from the robot

  • The aim of this library is to provide a set of building-blocks that are useful to as many users as possible. We define this usefulness by five criteria, which we demonstrate within the main sections of the paper: 1) it performs functions that cannot be performed on just point clouds - Section VI

  • We are making the assumption that the clouds are already vertically aligned. We have found this to be a fair assumption for mapping sensors that contain an Inertial Measurement Unit (IMU)

Introduction

Robotic perception has come a long way with the advent of spinning lidar, depth cameras and computer vision systems for generating 3D maps directly from the robot. These maps are typically point clouds, which contain a 3D location per point and possibly additional data such as colour. Occupancy gridmaps [9] provide a way to approximate this volumetric information: they voxelise a scanned area and discretise the ray information into per-voxel statistics representing attributes such as occupancy [9], partial occupancy [10], and surface covariance [11], [12]. They are valuable in real-time scene perception and are common in robotic navigation. A vector format representing the exact sensor observations, however, is better suited to analysis.
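The discretisation that occupancy gridmaps perform can be sketched as follows: each ray increments a hit count in the voxel containing its endpoint and miss counts in the voxels it passes through. This is a simplified sketch under assumed names (sampling at half-voxel steps rather than exact DDA traversal), not the method of any cited gridmap implementation.

```cpp
#include <cmath>
#include <map>
#include <tuple>

// Integer voxel index and per-voxel ray statistics (illustrative names).
using VoxelKey = std::tuple<int, int, int>;
struct VoxelStats { int hits = 0; int misses = 0; };

// Map a 3D position to the voxel that contains it.
VoxelKey keyFor(double x, double y, double z, double voxelSize) {
  return VoxelKey(static_cast<int>(std::floor(x / voxelSize)),
                  static_cast<int>(std::floor(y / voxelSize)),
                  static_cast<int>(std::floor(z / voxelSize)));
}

// Discretise one ray into the grid: voxels traversed by the beam count as
// misses (observed free space); the endpoint voxel counts as a hit
// (observed surface). Sampling at half-voxel steps is a simple stand-in
// for exact ray traversal.
void integrateRay(std::map<VoxelKey, VoxelStats> &grid,
                  double sx, double sy, double sz,
                  double ex, double ey, double ez, double voxelSize) {
  const double dx = ex - sx, dy = ey - sy, dz = ez - sz;
  const double len = std::sqrt(dx * dx + dy * dy + dz * dz);
  const VoxelKey endKey = keyFor(ex, ey, ez, voxelSize);
  const double step = 0.5 * voxelSize;
  for (double t = 0.0; t < len; t += step) {
    const double s = t / len;
    VoxelKey k = keyFor(sx + s * dx, sy + s * dy, sz + s * dz, voxelSize);
    if (k != endKey) grid[k].misses++;  // free space along the beam
  }
  grid[endKey].hits++;  // the surface return
}
```

The lossy step is visible here: once rays are reduced to per-voxel counts, the exact sensor observations cannot be recovered, which is why a vector (ray cloud) format is better suited to after-the-fact analysis.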
