Abstract

When robots are used to handle end-of-life cars, the vehicle must be grasped by its door frame, so fast and accurate localization of the door frame is the key to automating the grasping process. Traditional methods for locating and grasping scrap cars rely heavily on manual operation and suffer from low grasping efficiency and poor accuracy. This paper therefore proposes a binocular-vision spatial localization method for the vehicle door frame based on an improved YOLOv4. The method combines a lightweight, efficient feature-fusion object detection network suited to complex environments with an enhanced SURF feature-matching method to locate the door frame in space. To simplify the network structure, the CSPDarknet53 backbone is replaced with MobileNetv3, and depthwise separable convolutions are used throughout the network. To increase the network's sensitivity to door frame targets in complex environments, an improved convolutional block attention module (CBAM) is added to the feature pyramid structure of the network. Moreover, adaptive spatial feature fusion (ASFF) is introduced so that features at different scales are exploited fully and fused more effectively. Compared with YOLOv4, the number of network parameters is reduced by 73.8%, the mAP is improved by 1.35%, and the detection speed is increased by 28.7%. Experimental results show that the positioning accuracy of the system is 0.745 mm, which satisfies the requirement that the door frame positioning measurement error be less than 1 cm. Comparisons with other network models show that the proposed method achieves a good balance between detection speed and detection accuracy, and reliably identifies vehicle door frames in complex environments.
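To illustrate the parameter savings the abstract attributes to depthwise separable convolution, below is a minimal PyTorch sketch of a MobileNet-style separable block. This is an illustrative assumption, not the paper's actual implementation: the class name, channel sizes, normalization, and ReLU6 activation are all choices made here for the example.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: a depthwise conv (one KxK filter
    per input channel, groups=in_ch) followed by a 1x1 pointwise conv
    that mixes channels. A standard KxK conv costs C_in*C_out*K*K
    parameters; the separable version costs C_in*K*K + C_in*C_out."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(
            in_ch, in_ch, kernel_size, stride=stride,
            padding=kernel_size // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU6(inplace=True)  # ReLU6, as in MobileNet-style blocks

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Replacing a standard 3x3 conv mapping 256 -> 512 channels:
#   standard conv:  256 * 512 * 3 * 3 ≈ 1.18M parameters
#   separable conv: 256 * 3 * 3 + 256 * 512 ≈ 0.13M (~8.8x fewer)
x = torch.randn(1, 256, 52, 52)
block = DepthwiseSeparableConv(256, 512)
print(block(x).shape)  # torch.Size([1, 512, 52, 52])
```

Savings of this kind, applied across the backbone and neck, are consistent with the reported 73.8% reduction in total parameters relative to YOLOv4.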
