Abstract

Different types of 3D sensors, such as LiDAR and RGB-D cameras, capture data with differing resolution, range, and noise characteristics. It is often desirable to merge these different types of data into a coherent scene, but automatic alignment algorithms generally assume that all fragments share similar characteristics. Our goal is to evaluate how these algorithms perform on data with differing characteristics, enabling the integration of data from multiple sensor types. We use the Redwood dataset, which contains high-resolution scans of several environments captured with a stationary LiDAR scanner. We first develop a method to emulate how different types of sensors would capture these environments, leveraging OpenGL rendering and a mesh creation process. Next, we extract fragments of these captures that represent scenarios in which each type of sensor would typically be used, drawing on our scanning experience to inform the selection. Finally, we merge the fragments using several automatic algorithms and compare the results with the original scenes, evaluating transformation similarity to ground truth, algorithm speed and ease of use, and subjective quality.
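The abstract does not name a specific registration library or file format, but the evaluation it describes (aligning two fragments and comparing the estimated transform with ground truth) can be illustrated with a minimal sketch. The example below assumes Open3D, point-to-plane ICP as the automatic algorithm, and hypothetical fragment filenames; the paper's actual pipeline may differ.

```python
import numpy as np
import open3d as o3d

# Hypothetical fragment files emulating two different sensors.
source = o3d.io.read_point_cloud("fragment_lidar.ply")
target = o3d.io.read_point_cloud("fragment_rgbd.ply")

# Downsample and estimate normals so point-to-plane ICP can be used.
voxel = 0.05
src_down = source.voxel_down_sample(voxel)
tgt_down = target.voxel_down_sample(voxel)
for pc in (src_down, tgt_down):
    pc.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))

# Local refinement with point-to-plane ICP from an identity initial guess.
result = o3d.pipelines.registration.registration_icp(
    src_down, tgt_down, 2 * voxel, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPlane())

# Compare the estimated transform with a ground-truth pose:
# rotation error as an angle, translation error as a Euclidean distance.
T_gt = np.eye(4)  # placeholder; real ground truth would come from the dataset
T_est = result.transformation
R_err = T_est[:3, :3] @ T_gt[:3, :3].T
angle_err = np.degrees(np.arccos(np.clip((np.trace(R_err) - 1) / 2, -1.0, 1.0)))
trans_err = np.linalg.norm(T_est[:3, 3] - T_gt[:3, 3])
print(f"rotation error: {angle_err:.2f} deg, translation error: {trans_err:.3f} m")
```

The rotation-angle and translation-distance errors shown here are one common way to quantify "transformation similarity to ground truth"; the paper's exact metrics are not specified in the abstract.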
