Abstract

Sensors are widely used to construct 3D models of parts by collecting data from their surfaces. The accuracy of the collected data depends on the sensor's placement with respect to the part and on the sensor's operational range. The sensor's operational range and limitations must be applied as constraints when planning robot motions that move the sensor around the part to collect data; overly conservative constraints on sensor placement lead to long execution times. We present a robot motion planning algorithm that accounts for camera performance constraints and produces output with low error. An RGB-D camera is used to obtain a point cloud of the part, and an offline planning method improves point density in regions with zero or low density. Our method guarantees a high point density across the surface of the part. We present results on six geometries of differing complexity and surface properties, as well as results on how camera parameters influence the output of our method. These results show that the algorithmic advances reported in this paper enable low-cost depth cameras to produce high-accuracy, uniform-density scans of physical objects.
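The abstract's offline planner targets regions of the scan with zero or low point density. As a minimal sketch of how such regions might be identified (the paper does not specify this procedure; the voxel grid, `voxel_size`, and `min_points` threshold here are illustrative assumptions), one can bin the point cloud into axis-aligned voxels and flag voxels whose point count falls below a threshold:

```python
import numpy as np

def voxel_point_counts(points, voxel_size):
    """Bin 3D points into an axis-aligned voxel grid and count points per occupied voxel.

    points: (N, 3) array of xyz coordinates.
    Returns (voxel_indices, counts) for occupied voxels only.
    """
    keys = np.floor(points / voxel_size).astype(np.int64)
    uniq, counts = np.unique(keys, axis=0, return_counts=True)
    return uniq, counts

def low_density_voxels(points, voxel_size, min_points):
    """Return indices of occupied voxels whose point count is below min_points.

    These flagged regions would be candidates for additional sensor viewpoints.
    """
    uniq, counts = voxel_point_counts(points, voxel_size)
    return uniq[counts < min_points]

# Illustrative data: a densely sampled region plus a single sparse point.
rng = np.random.default_rng(0)
dense = rng.uniform(0.0, 0.1, size=(500, 3))    # 500 points in voxel (0, 0, 0)
sparse = np.array([[1.05, 1.05, 1.05]])         # lone point in voxel (10, 10, 10)
pts = np.vstack([dense, sparse])
flagged = low_density_voxels(pts, voxel_size=0.1, min_points=10)
print(flagged)  # only the sparse voxel is flagged
```

Note this sketch only detects under-sampled regions; it says nothing about how the planner selects new camera placements, which the paper's motion planning algorithm handles subject to the camera's operational-range constraints.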
