Abstract

Although vision-guided robotic picking systems are commonly used in factory environments, achieving rapid changeover for diverse workpiece types remains challenging because manually redefining the vision software and tediously collecting and annotating datasets hinder the automation process. In this paper, we present a novel approach for rapid workpiece changeover in a vision-guided robotic picking system using the proposed RoboTwin and FOVision systems. The RoboTwin system offers a realistic metaverse scene that enables tuning of robot movements and gripper reactions. Additionally, it automatically generates annotated virtual images of each workpiece’s pickable points. These images serve as the training dataset for an AI model, which is then deployed to the FOVision system, a platform that provides vision and edge-computing capabilities for the robotic manipulator. The system achieves an instance-segmentation mean average precision of 70% and a picking success rate of over 80% in real-world detection scenarios. The proposed approach accelerates dataset generation by a factor of 80 compared with manual annotation, helps reduce simulation-to-real gap errors, and enables rapid line changeover within flexible manufacturing systems in factories.
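For illustration only, the sketch below shows one way a simulator could emit the kind of automatically annotated virtual images the abstract describes: each rendered workpiece instance carries a unique id in an instance-id map, from which instance-segmentation labels (bounding box, area, mask) are derived with no manual labeling. This is a minimal sketch, not the authors’ RoboTwin pipeline; the function name, class table, and toy instance map are hypothetical.

```python
# Minimal sketch (not the authors' code) of auto-annotating a simulated frame.
# Assumption: the renderer produces an H x W instance-id map (0 = background),
# from which COCO-style instance-segmentation annotations can be derived.
import json
import numpy as np

def annotations_from_instance_map(inst_map: np.ndarray,
                                  class_of: dict) -> list:
    """Convert a rendered instance-id map into COCO-style annotation dicts
    (bbox in xywh convention, pixel area, raw binary mask)."""
    anns = []
    for inst_id in np.unique(inst_map):
        if inst_id == 0:  # skip background
            continue
        mask = (inst_map == inst_id)
        ys, xs = np.nonzero(mask)
        x0, y0 = int(xs.min()), int(ys.min())
        w, h = int(xs.max()) - x0 + 1, int(ys.max()) - y0 + 1
        anns.append({
            "id": int(inst_id),
            "category_id": class_of[int(inst_id)],
            "bbox": [x0, y0, w, h],
            "area": int(mask.sum()),
            "segmentation": mask.astype(np.uint8).tolist(),
        })
    return anns

# Toy 6x6 "render" with two hypothetical workpiece instances.
inst_map = np.zeros((6, 6), dtype=np.int32)
inst_map[1:3, 1:3] = 1
inst_map[3:5, 3:6] = 2
print(json.dumps(annotations_from_instance_map(inst_map, {1: 1, 2: 2})))
```

In practice the masks would typically be compressed (e.g., to run-length encoding) before export, and the instance-id maps would come directly from the simulator’s renderer rather than being constructed by hand as in this toy example.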
