Abstract
In this work, a visual object detection and localization workflow integrated into a robotic platform is presented for the 6D pose estimation of objects with challenging characteristics in terms of weak texture, surface properties and symmetries. The workflow forms part of an object pose estimation module deployed on a mobile robotic platform that uses the Robot Operating System (ROS) as middleware. The estimated poses support robot grasping in the context of human-robot collaboration during car door assembly in industrial manufacturing environments. In addition to the challenging object properties, these environments are inherently characterized by cluttered backgrounds and unfavorable illumination conditions. For this specific application, two datasets were collected and annotated for training a learning-based method that extracts the object pose from a single frame. The first dataset was acquired under controlled laboratory conditions and the second in the actual indoor industrial environment. Different models were trained on the individual datasets and on a combination of them, and were further evaluated on a number of test sequences from the actual industrial environment. The qualitative and quantitative results demonstrate the potential of the presented method in relevant industrial applications.
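Since the pose estimation module is described as being deployed on a ROS-based mobile platform, the following sketch illustrates, purely as an assumption, how such a module might publish an estimated 6D pose as a geometry_msgs/PoseStamped message for a downstream grasping component. The topic name, frame id and the estimate_pose() helper are hypothetical placeholders and are not taken from the paper.

```python
# Illustrative sketch only: publishing a 6D object pose over ROS so that a
# grasp planner can consume it. Topic name, frame id and estimate_pose() are
# hypothetical placeholders, not the authors' actual implementation.
import rospy
from geometry_msgs.msg import PoseStamped


def estimate_pose(image):
    """Hypothetical stand-in for the learning-based single-frame pose estimator.

    Returns a translation (x, y, z) in metres and an orientation quaternion
    (x, y, z, w) expressed in the camera frame.
    """
    return (0.4, 0.0, 0.2), (0.0, 0.0, 0.0, 1.0)


def publish_pose(pub, translation, quaternion, frame_id="camera_color_optical_frame"):
    # Wrap the estimated pose in a stamped message so consumers know the
    # reference frame and the time of the estimate.
    msg = PoseStamped()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = frame_id
    msg.pose.position.x, msg.pose.position.y, msg.pose.position.z = translation
    (msg.pose.orientation.x, msg.pose.orientation.y,
     msg.pose.orientation.z, msg.pose.orientation.w) = quaternion
    pub.publish(msg)


if __name__ == "__main__":
    rospy.init_node("object_pose_estimator")
    pose_pub = rospy.Publisher("/object_pose", PoseStamped, queue_size=1)
    rate = rospy.Rate(10)  # publish at 10 Hz
    while not rospy.is_shutdown():
        # Placeholder: a real node would receive camera frames via a subscriber.
        t, q = estimate_pose(image=None)
        publish_pose(pose_pub, t, q)
        rate.sleep()
```

A grasping or motion-planning node could subscribe to the same topic and transform the pose into the robot base frame (e.g. with tf2) before planning a grasp; the actual interfaces used in the paper's system are not specified in the abstract.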