Abstract

This paper illustrates two approaches for the mobile manipulation of factory robots using deep neural networks. The networks are trained using synthetic datasets unique to the factory environment. Approach I uses depth and red-green-blue (RGB) images of objects for its convolutional neural network (CNN), and Approach II uses computer-aided design (CAD) models of the objects together with RGB images for a deep object pose estimation (DOPE) network and a perspective-n-point (PnP) algorithm. Both approaches are compared based on their complexity, required training resources, robustness, pose estimation accuracy, and run-time characteristics. Recommendations on which approach is suitable under which circumstances are provided. Finally, the most suitable approach is implemented on a real mobile factory robot to execute a series of manipulation tasks and validate the approach.
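As a rough illustration of the pose-recovery step named in Approach II, the sketch below shows how a PnP solver can turn 2D keypoints predicted for an object into a 6-DoF pose, given the matching 3D points from the object's CAD model and the camera intrinsics. This is not the paper's code; the corner coordinates, keypoint values, and intrinsics are placeholder assumptions, and OpenCV's `cv2.solvePnP` stands in for whatever solver the authors used.

```python
import numpy as np
import cv2

# Hypothetical example: 3D bounding-box corners of an object in its CAD frame (metres).
cad_corners_3d = np.array([
    [-0.05, -0.05, -0.05], [ 0.05, -0.05, -0.05],
    [ 0.05,  0.05, -0.05], [-0.05,  0.05, -0.05],
    [-0.05, -0.05,  0.05], [ 0.05, -0.05,  0.05],
    [ 0.05,  0.05,  0.05], [-0.05,  0.05,  0.05],
], dtype=np.float64)

# Placeholder 2D keypoints (pixels) that a pose network such as DOPE might
# predict for the same eight corners in the RGB image.
detected_corners_2d = np.array([
    [310.0, 245.0], [402.0, 240.0], [398.0, 330.0], [305.0, 336.0],
    [325.0, 228.0], [415.0, 224.0], [412.0, 312.0], [320.0, 318.0],
], dtype=np.float64)

# Assumed pinhole intrinsics of the RGB camera and no lens distortion.
camera_matrix = np.array([
    [615.0,   0.0, 320.0],
    [  0.0, 615.0, 240.0],
    [  0.0,   0.0,   1.0],
], dtype=np.float64)
dist_coeffs = np.zeros(5)

# Solve for the object's rotation (rvec) and translation (tvec) relative to
# the camera from the 2D-3D correspondences.
ok, rvec, tvec = cv2.solvePnP(cad_corners_3d, detected_corners_2d,
                              camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_ITERATIVE)
if ok:
    rotation_matrix, _ = cv2.Rodrigues(rvec)  # 3x3 rotation usable by a grasp planner
    print("Object position in camera frame (m):", tvec.ravel())
```

In a manipulation pipeline, the resulting camera-frame pose would then be transformed into the robot's base frame before planning a grasp; that transform is outside the scope of this sketch.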
