Abstract

Purpose
The purpose of this paper is to enhance the performance of robots in peg-in-hole assembly tasks, enabling them to accomplish the task swiftly and robustly. It also focuses on the robot's ability to generalize across assemblies with different hole sizes.

Design/methodology/approach
Human behavior in peg-in-hole assembly serves as inspiration: a person first visually locates the hole and then continuously adjusts the peg pose based on force/torque feedback during insertion. This paper proposes a novel framework that integrates visual servoing with adjustment based on force/torque feedback. A deep neural network (DNN) and image processing techniques are used to determine the pose of the hole, and an incremental learning approach based on a broad learning system (BLS) simulates human learning ability, so that the number of adjustments required during insertion is continuously reduced.

Findings
The authors conducted experiments on visual servoing, adjustment based on force/torque feedback, and the proposed framework. The visual servo stage inferred the pixel position and orientation of the target hole in only about 0.12 s, and the robot achieved peg insertion with 1–3 adjustments based on force/torque feedback. The success rate for peg-in-hole assembly using the proposed framework was 100%. These results demonstrate the effectiveness of the proposed framework.

Originality/value
This paper proposes a framework for peg-in-hole assembly that combines visual servoing and adjustment based on force/torque feedback. The assembly tasks are accomplished using a DNN, image processing and a BLS. To the best of the authors' knowledge, no similar method has been reported in prior work; the authors therefore believe this work is original.
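
As a rough illustration only (not the authors' implementation), the two-stage structure described above can be sketched as follows. All class and method names here (estimate_hole_pose, try_insert, read_wrench, apply_offset, etc.) are hypothetical placeholders, and the proportional correction stands in for the BLS-learned adjustment.

```python
"""Minimal sketch, assuming a hypothetical robot/camera interface:
(1) visual servoing to the estimated hole pose, then
(2) iterative insertion corrected by force/torque feedback."""

from dataclasses import dataclass
import numpy as np


@dataclass
class HolePose:
    """Pixel position and in-plane orientation of the target hole."""
    x: float
    y: float
    theta: float


def estimate_hole_pose(image: np.ndarray) -> HolePose:
    """Stand-in for the DNN + image-processing step that infers the
    hole's pixel position and orientation from a camera image."""
    # The paper uses a deep neural network followed by image processing;
    # here we simply return the image centre for illustration.
    h, w = image.shape[:2]
    return HolePose(x=w / 2.0, y=h / 2.0, theta=0.0)


def adjustment_from_wrench(wrench: np.ndarray) -> np.ndarray:
    """Map a 6-D force/torque reading to a small corrective pose offset.
    In the paper this mapping is learned incrementally with a broad
    learning system (BLS); a fixed proportional rule is a placeholder."""
    gain = 1e-4
    return -gain * wrench  # move the peg away from the contact wrench


def peg_in_hole(robot, camera, max_adjustments: int = 5) -> bool:
    """Run the combined visual-servo + force/torque-feedback loop.
    `robot` and `camera` are assumed to expose the hypothetical methods
    used below; they do not correspond to any specific robot API."""
    # Stage 1: visual servo to a pose above the estimated hole location.
    pose = estimate_hole_pose(camera.grab_frame())
    robot.move_above_hole(pose)

    # Stage 2: attempt insertion, correcting the peg pose from the
    # measured wrench until it seats or the adjustment budget runs out.
    for _ in range(max_adjustments):
        if robot.try_insert():
            return True
        wrench = robot.read_wrench()  # 6-D force/torque vector
        robot.apply_offset(adjustment_from_wrench(wrench))
    return False
```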
