Abstract

In industrial assembly, human-machine interactive assembly methods are widely used, but the interactive process often suffers from a lack of virtual-physical mapping, convoluted guidance, and low precision of interaction effects. To address these issues, a digital twin-driven human-machine interactive assembly method is proposed. The lightweight YOLOv7-tiny model is used to detect parts accurately, and attention modules are incorporated into its backbone network to enhance feature extraction in complicated assembly environments. OpenCV is employed to generate geometric reference features for the parts, and the proposed method is validated on the assembly process of a reducer. The experimental results show that the method provides visual guidance for the assembly process and improves on the traditional list-type retrieval of assembly components. It also overcomes the drawback of pre-set assembly guidance, which may fail to adapt to changes in assembly results during actual operation, and it can accurately instruct novices in how to assemble. The method is characterised by easy implementation, low cost and high accuracy, and is of great significance for improving the success rate and efficiency of human-machine interactive assembly.
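The abstract does not specify which attention module is inserted into the YOLOv7-tiny backbone; the sketch below shows one plausible option, a squeeze-and-excitation style channel-attention block in PyTorch, applied to a backbone feature map before it is passed to the detection neck. The class name and the dummy tensor shapes are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: a squeeze-and-excitation (SE) style channel-attention
# block, one common way to strengthen backbone feature extraction.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial context
        self.fc = nn.Sequential(                     # excitation: per-channel weights
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # reweight the feature map

# Example: dummy backbone feature map, e.g. 256 channels at 40x40 resolution.
feat = torch.randn(1, 256, 40, 40)
print(ChannelAttention(256)(feat).shape)             # torch.Size([1, 256, 40, 40])
```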
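The exact geometric reference features are likewise not defined in the abstract; as a minimal sketch, assuming they are derived from the detected part's silhouette, the following OpenCV snippet extracts a contour, minimum-area rectangle, and centroid from a part crop. The function name and returned fields are hypothetical.

```python
# Hypothetical sketch: simple geometric reference features (centroid, size,
# orientation) for a detected part crop, computed with standard OpenCV calls.
import cv2
import numpy as np

def geometric_reference(part_crop: np.ndarray):
    gray = cv2.cvtColor(part_crop, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    contour = max(contours, key=cv2.contourArea)      # assume largest contour is the part
    (cx, cy), (w, h), angle = cv2.minAreaRect(contour)
    m = cv2.moments(contour)
    centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"]) if m["m00"] else (cx, cy)
    return {"centroid": centroid, "size": (w, h), "orientation_deg": angle}
```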
