Abstract

Accurate and fast vehicle detection is a key factor in intelligent transportation. To address the low recognition accuracy, slow detection speed, and poor robustness of current vehicle detection algorithms, this paper proposes YOLO-CCS, a vehicle detection algorithm based on YOLOv5 and the coordinate attention (CA) mechanism. YOLO-CCS lets the network focus on the vehicle itself during feature extraction, reducing the loss of feature information and improving detection performance. First, we extend YOLOv5s with CA blocks in the feature-extraction section of the backbone; by embedding position information, these blocks strengthen the extraction of key features and suppress interference from complex backgrounds. Second, to extract richer feature information and improve feature fusion, we introduce C2f, a faster implementation of the Cross Stage Partial (CSP) bottleneck with two convolutions, into the backbone and neck of the network; its additional residual blocks and skip connections provide richer semantic information and enhance the network's feature extraction capability. Third, we integrate the SCYLLA-IoU (SIoU) loss function, which leverages the vector angle between the ground-truth and predicted boxes to further improve accuracy and accelerate model convergence. Experimental results show that, compared with the baseline YOLOv5s, our method improves mAP50 and mAP50-95 by 3.2% and 1.7%, respectively, and outperforms other YOLO-based models by approximately 4% in mAP50. YOLO-CCS runs at 48 FPS, meeting the real-time requirements of vehicle detection.
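To illustrate the coordinate attention idea described above, the following is a minimal NumPy sketch: features are pooled along each spatial direction separately, so the resulting attention encodes position along height and width. This is a simplification, not the paper's implementation; real CA applies a shared 1x1 convolution, normalization, and a nonlinearity to the concatenated pooled features before splitting, and the weight matrices `w_h` and `w_w` here are illustrative placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coordinate_attention(x, w_h, w_w):
    """Simplified coordinate attention on a single feature map.

    x:        feature map of shape (C, H, W)
    w_h, w_w: hypothetical (C, C) projection weights for each direction
              (the published CA block shares one transform across directions)
    """
    C, H, W = x.shape
    # Direction-aware pooling: average along width and along height,
    # keeping positional information in the remaining spatial axis.
    pool_h = x.mean(axis=2)          # (C, H): one descriptor per row
    pool_w = x.mean(axis=1)          # (C, W): one descriptor per column
    # Per-direction attention maps in (0, 1).
    a_h = sigmoid(w_h @ pool_h)      # (C, H)
    a_w = sigmoid(w_w @ pool_w)      # (C, W)
    # Reweight the input: broadcast row attention and column attention
    # so each position (h, w) is scaled by both of its coordinates.
    return x * a_h[:, :, None] * a_w[:, None, :]
```

The key design point, as in the abstract, is that pooling per direction preserves *where* a response occurs, which helps the detector keep the vehicle region and down-weight complex background.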
