Abstract

Engineering vehicles play a vital role in supporting construction projects. However, because of their large size, heavy tonnage, and significant blind spots while in motion, they pose a potential threat to road infrastructure, pedestrians, and other vehicles. Monitoring engineering vehicles is therefore of considerable importance. This paper introduces an engineering vehicle detection model based on an improved YOLOv6. First, a Swin Transformer is employed for feature extraction, capturing comprehensive image features to improve the detection of incomplete (partially visible) objects. Second, the SimMIM self-supervised training paradigm is adopted to address the challenges of insufficient data and high labeling costs. Experimental results demonstrate the model's superior performance, with an mAP50:95 of 88.5% and an mAP50 of 95.9% on a dataset of four types of engineering vehicles, surpassing existing mainstream models and confirming its effectiveness for engineering vehicle detection.
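To make the training paradigm concrete, the following is a minimal SimMIM-style masked-image-modeling sketch in PyTorch. It is an illustration under stated assumptions, not the authors' implementation: the tiny patch-embedding-plus-Transformer encoder is a stand-in for the Swin Transformer backbone, and the patch size, embedding dimension, and 60% mask ratio are placeholder choices. What it shows is the SimMIM recipe referenced in the abstract: mask random patches, replace them with a learnable mask token, and regress the raw pixels of the masked patches with an L1 loss, so the backbone can be pretrained on unlabeled engineering-vehicle images before detection fine-tuning.

```python
# Minimal SimMIM-style masked-image-modeling sketch (PyTorch).
# The toy encoder below is a placeholder for the Swin Transformer backbone;
# all sizes and the 60% mask ratio are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

PATCH, DIM, IMG = 16, 128, 224            # assumed patch size, embed dim, image size
N = (IMG // PATCH) ** 2                   # number of patches per image

class ToyEncoder(nn.Module):
    """Patch embedding + shallow Transformer; stand-in for Swin."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Conv2d(3, DIM, kernel_size=PATCH, stride=PATCH)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, DIM))
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x, mask):            # mask: (B, N) bool, True = masked patch
        tokens = self.embed(x).flatten(2).transpose(1, 2)            # (B, N, DIM)
        tokens = torch.where(mask.unsqueeze(-1),
                             self.mask_token.expand_as(tokens), tokens)
        return self.blocks(tokens)                                   # (B, N, DIM)

encoder = ToyEncoder()
head = nn.Linear(DIM, 3 * PATCH * PATCH)   # lightweight pixel-reconstruction head

imgs = torch.randn(2, 3, IMG, IMG)         # unlabeled images (random stand-ins here)
mask = torch.rand(2, N) < 0.6              # mask roughly 60% of patches

pred = head(encoder(imgs, mask))                                     # (B, N, 3*P*P)
target = F.unfold(imgs, PATCH, stride=PATCH).transpose(1, 2)         # raw pixels per patch
loss = (F.l1_loss(pred, target, reduction="none").mean(-1) * mask).sum() / mask.sum()
loss.backward()                            # one pretraining step on masked patches only
```

After pretraining along these lines, the encoder weights would be loaded into the detector's backbone and fine-tuned together with the YOLOv6 detection heads on the labeled engineering-vehicle dataset.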
