Abstract

We propose an approach to predicting the accuracy of three-dimensional reconstruction and camera pose estimation with a generic RGB-D camera mounted on a robotic platform. We first establish a ground truth of 3D points and camera poses using a set of smart markers specifically devised and constructed for this approach. We then compute the actual errors, and their accuracy, during the motion of our mobile robotic platform. From these measurements we build an error model, which serves as input to a deep multi-layer perceptron that estimates accuracy as a function of the camera's distance, velocity, and the vibration of the vision system. The network outputs the root mean squared error of the 3D reconstruction and the relative pose error of the camera. Experimental results show a prediction accuracy of ±1% for the 3D reconstruction and ±2.5% for camera poses, outperforming state-of-the-art methods.
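The accuracy-prediction network can be sketched as a small feed-forward regressor. The layer sizes, activations, and feature units below are illustrative assumptions, not details taken from the paper; in the actual method the weights would be trained on the ground-truth errors measured with the smart markers.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

class AccuracyMLP:
    """Hypothetical sketch: maps [distance, velocity, vibration] to
    [3D-reconstruction RMSE, relative pose error]."""

    def __init__(self, sizes=(3, 32, 32, 2)):
        # Small random weights stand in for trained parameters.
        self.weights = [rng.normal(0.0, 0.1, (a, b))
                       for a, b in zip(sizes[:-1], sizes[1:])]
        self.biases = [np.zeros(b) for b in sizes[1:]]

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        # Hidden layers use ReLU; the output layer is linear,
        # as is typical for regression targets.
        for W, b in zip(self.weights[:-1], self.biases[:-1]):
            x = relu(x @ W + b)
        return x @ self.weights[-1] + self.biases[-1]

model = AccuracyMLP()
features = [1.5, 0.3, 0.02]  # e.g. distance (m), velocity (m/s), vibration
prediction = model.predict(features)
print(prediction.shape)  # two outputs: reconstruction RMSE and pose error
```

With trained weights, `predict` would return the two error estimates directly; here it only demonstrates the assumed 3-input, 2-output architecture.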

