Abstract

The assisted assembly of customized products supported by collaborative robots combined with mixed reality devices is a current trend in the Industry 4.0 concept. This article introduces an experimental work cell implementing the assisted assembly process for customized cam switches as a case study. The research aims to design a methodology for this complex task with full digitalization and transformation of data from all vision systems into digital twin models. The position and orientation of the assembled parts during manual assembly are recognized and checked by a convolutional neural network (CNN) model. The CNN was trained with a new approach based on virtual training samples, using single-shot detection and instance segmentation. The trained CNN model was transferred to an embedded artificial intelligence processing unit with a high-resolution camera sensor. The embedded device redistributes the detected part positions and orientations to the mixed reality devices and the collaborative robot. This approach to assisted assembly using mixed reality, a collaborative robot, vision systems, and CNN models can significantly decrease assembly and training time in real production.
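As a rough illustration of the recognition step described above, the following sketch (Python with OpenCV 4.x) derives a part's image position and orientation from an instance-segmentation mask via a minimum-area rectangle; the synthetic mask, part name, and values are hypothetical placeholders and are not taken from the article.

    # Sketch: estimate part position and orientation from a binary instance mask.
    # The synthetic mask below stands in for a CNN segmentation output;
    # it is illustrative only, not the paper's actual model output.
    import cv2
    import numpy as np

    def pose_from_mask(mask):
        """Return ((cx, cy), angle_deg) of the largest blob in a binary mask."""
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        largest = max(contours, key=cv2.contourArea)
        (cx, cy), (w, h), angle = cv2.minAreaRect(largest)  # rotated bounding box
        return (cx, cy), angle

    # Synthetic rotated rectangle standing in for a detected cam switch part
    mask = np.zeros((480, 640), dtype=np.uint8)
    box = cv2.boxPoints(((320, 240), (120, 60), 30.0)).astype(np.int32)
    cv2.fillPoly(mask, [box], 255)
    print(pose_from_mask(mask))  # center near (320, 240); angle follows OpenCV's rotated-rect convention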

Highlights

  • Collaborative robots and their implementation in the assisted assembly process are an important part of the Industry 4.0 concept

  • An initial experiment on cam switch part recognition was executed using a small set of training samples with different floor backgrounds

  • The training process for Inception V2 is shown in Figure 13, where the X-axis shows the number of training cycles and the Y-axis shows the mAP


Summary

Introduction and Related Works

Collaborative robots and their implementation in the assisted assembly process are an important part of the Industry 4.0 concept. They can work in the same workspace as human workers and perform basic manipulation or simple monotonous assembly tasks. An important condition for the assisted assembly process is the synchronization of augmented reality (AR), virtual reality (VR), or mixed reality (MR) devices with the digital twin for full digitalization of the technology used. The main novelty and innovation contribution of the article is a comprehensive methodology for CNN training with virtual 3D models and the design of a communication framework for assisted assembly devices such as a collaborative robot and a mixed reality device.
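A minimal sketch of such a communication link is shown below, assuming purely for illustration that the embedded vision unit serializes detections as JSON and publishes them over MQTT to the mixed reality device and the collaborative robot; the broker address, topic names, and message fields are hypothetical and are not taken from the article.

    # Sketch of a detection-distribution node on the embedded vision unit
    # (assumed JSON-over-MQTT transport; broker, topics, and schema are illustrative).
    import json
    import paho.mqtt.client as mqtt

    BROKER = "192.168.0.10"                # hypothetical edge-device address
    TOPIC_MR = "cell/detections/mr"        # consumed by the mixed reality headset
    TOPIC_ROBOT = "cell/detections/robot"  # consumed by the collaborative robot

    def publish_detection(client, part_id, center_xy, angle_deg):
        """Send one detected part pose to both assisted assembly devices."""
        msg = json.dumps({
            "part": part_id,
            "x_px": center_xy[0],
            "y_px": center_xy[1],
            "angle_deg": angle_deg,
        })
        client.publish(TOPIC_MR, msg, qos=1)
        client.publish(TOPIC_ROBOT, msg, qos=1)

    client = mqtt.Client()
    client.connect(BROKER)
    client.loop_start()
    publish_detection(client, "cam_switch_contact", (320.0, 240.0), 30.0)
    client.loop_stop()
    client.disconnect()

Whatever transport the article's framework actually uses, the point of the sketch is that a single lightweight message schema lets both the MR device and the robot consume the same detection stream.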

Methodology of Deep Learning Implementation into the Assisted Assembly
Experimental Platform
Input Data Preparation for CNN Training
An Example Project in the Unreal Engine
An Automated Annotation by the OpenCV Algorithms
The Generated Training and Testing Sample Set
Experimental Results and Implementation into the Assembly Process
Conclusions