Abstract

To address the YOLOv4 algorithm’s insensitivity to small objects and its low detection precision in traffic light detection and recognition, this paper investigates an Improved YOLOv4 algorithm that incorporates a shallow feature enhancement mechanism and a bounding box uncertainty prediction mechanism. The shallow feature enhancement mechanism merges two shallow features from different stages of the feature extraction network with the high-level semantic features obtained after two rounds of upsampling, improving the network’s ability to locate small objects and to distinguish colors. The bounding box uncertainty prediction mechanism introduces uncertainty into the coordinate prediction: the output coordinates of each predicted bounding box are modeled with a Gaussian distribution, and the uncertainty of the coordinate information is computed from that model, which improves the reliability of the predicted bounding boxes. Detection and recognition experiments are performed separately on the LISA traffic light data set, and the Improved YOLOv4 algorithm is shown to be highly effective in enhancing the detection and recognition precision of traffic lights. In the detection experiment, the area under the PR curve of the Improved YOLOv4 algorithm reaches 97.58%, an increase of 7.09% over the 90.49% achieved in the Vision for Intelligent Vehicles and Applications Challenge Competition. In the recognition experiment, the mean average precision of the Improved YOLOv4 algorithm is 82.15%, which is 2.86% higher than that of the original YOLOv4 algorithm. These results indicate that the Improved YOLOv4 algorithm is a robust and practical method for the real-time detection and recognition of traffic signal lights.
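
To make the shallow feature enhancement idea concrete, the sketch below shows one way such a fusion step can be written in PyTorch: high-level semantic features are upsampled twice and concatenated with two shallow feature maps from earlier stages before a 1 × 1 convolution fuses the channels. This is a minimal illustration only; the module name, channel counts, feature-map strides, and the choice of nearest-neighbour upsampling with channel concatenation are assumptions, not the authors' exact design.

```python
# Minimal sketch of a shallow feature enhancement block (PyTorch).
# Assumed strides: shallow1 at 1/4 resolution, shallow2 at 1/8, deep at 1/16.
# Channel counts and layer choices are illustrative assumptions.
import torch
import torch.nn as nn

class ShallowFeatureEnhancement(nn.Module):
    def __init__(self, deep_ch=256, shallow1_ch=64, shallow2_ch=128, out_ch=256):
        super().__init__()
        self.upsample = nn.Upsample(scale_factor=2, mode="nearest")
        # 1x1 convolution to fuse the concatenated feature maps.
        self.fuse = nn.Conv2d(deep_ch + shallow1_ch + shallow2_ch, out_ch, kernel_size=1)

    def forward(self, deep, shallow1, shallow2):
        # deep:     high-level semantic features (lowest resolution)
        # shallow1: shallow features from an early stage (highest resolution)
        # shallow2: shallow features from a later stage (intermediate resolution)
        x = self.upsample(self.upsample(deep))        # two rounds of upsampling
        shallow2 = self.upsample(shallow2)            # align shallow2 to shallow1's resolution
        x = torch.cat([x, shallow1, shallow2], dim=1) # merge along the channel axis
        return self.fuse(x)
```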

Highlights

  • The YOLOv4 algorithm divides the network input into S × S grid units; each grid unit predicts B bounding boxes, the bounding box confidence, and C category probabilities

  • To address the YOLOv4 algorithm’s insensitivity to small objects and its low detection precision in traffic light detection and recognition, the Improved YOLOv4 algorithm is investigated in the paper using the shallow feature enhancement mechanism and the bounding box uncertainty prediction mechanism (a sketch of the uncertainty term follows this list)

  • In order to verify the performance of the Improved YOLOv4 algorithm for traffic signal detection, experiments were carried out using the LISA traffic light data set of the Intelligent and Safe Automobile Laboratory of the University of California, San Diego [21]
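
As referenced in the highlight on the uncertainty mechanism above, the following sketch illustrates one common way to model bounding box coordinates with a Gaussian, in the spirit of Gaussian-YOLO-style coordinate prediction: the network outputs a mean and a standard deviation per coordinate, the loss is the negative log-likelihood of the ground truth under that Gaussian, and the predicted standard deviation doubles as a coordinate uncertainty that can down-weight the detection confidence. The function names, the NLL form, and the confidence weighting are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch of Gaussian modelling of bounding box coordinates (PyTorch).
# Variable names and the exact loss/weighting forms are illustrative assumptions.
import math
import torch

def gaussian_box_nll(mu, sigma, target, eps=1e-9):
    """Negative log-likelihood of ground-truth coordinates under a Gaussian.

    mu:     predicted box coordinates (tx, ty, tw, th), shape (..., 4)
    sigma:  predicted standard deviation per coordinate, shape (..., 4)
    target: ground-truth coordinates in the same encoding, shape (..., 4)
    """
    var = sigma ** 2 + eps
    nll = 0.5 * torch.log(2 * math.pi * var) + (target - mu) ** 2 / (2 * var)
    return nll.sum(dim=-1)

def uncertainty_weighted_confidence(objectness, sigma):
    # Assumes sigma is squashed into (0, 1) (e.g. by a sigmoid), so that
    # (1 - sigma) down-weights boxes with uncertain coordinates.
    return objectness * (1.0 - sigma.mean(dim=-1))
```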


Summary

The Principle of YOLOv4 Algorithm

The YOLOv4 algorithm divides the network input into S × S grid units; each grid unit predicts B bounding boxes, the bounding box confidence, and C category probabilities. The confidence is defined in Equation (1), where Pr(object) is the probability that an object is present in the grid unit and the localization accuracy is expressed as the intersection over union (IOU) of the predicted bounding box and the real bounding box:

confidence = Pr(object) × IOU(pred, truth)   (1)

In this work, the input image is divided into 19 × 19 grid units. The width and height of the entire image are width_img and height_img, respectively, and the image is divided into S × S grid units. The width and height of a bounding box are width_box and height_box, respectively, and are normalized according to Equations (2) and (3):

width = width_box / width_img   (2)

height = height_box / height_img   (3)

The center point coordinates of the bounding box are normalized relative to their grid unit according to Equations (4) and (5).
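
As a concrete illustration of Equations (1)–(3), the sketch below normalizes a box's width and height by the image size, locates the responsible grid unit on a 19 × 19 grid, and computes the confidence as Pr(object) times the IOU between the predicted and ground-truth boxes. The variable names follow the text; the function names, the (x1, y1, x2, y2) box format used for the IOU, and the grid-cell offset encoding are assumptions for illustration.

```python
# Sketch of the normalization in Equations (2)-(3) and the confidence in Equation (1).
# Helper names and the corner-format IOU are illustrative assumptions.

def normalize_box(x_center, y_center, width_box, height_box, width_img, height_img, s=19):
    """Normalize a bounding box for an S x S grid (19 x 19 here)."""
    width = width_box / width_img       # Equation (2)
    height = height_box / height_img    # Equation (3)
    # Express the center point in grid coordinates; the integer part selects
    # the grid unit responsible for predicting the object.
    grid_x = x_center / width_img * s
    grid_y = y_center / height_img * s
    col, row = int(grid_x), int(grid_y)
    return width, height, col, row, grid_x - col, grid_y - row

def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def confidence(pr_object, pred_box, truth_box):
    # Equation (1): Pr(object) times the IOU between prediction and ground truth.
    return pr_object * iou(pred_box, truth_box)
```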

CSPDarknet-53 Feature Extraction Network
YOLOv4 Algorithm Loss Function
YOLOv4 Algorithm Network Improvement
YOLOv4 Algorithm Network Structure Improvement
Uncertainty Prediction of Bounding Box
Performance Analysis of Improved YOLOv4 Algorithm for Small Target Detection
Experimental Platform and Data
YOLOv4 Algorithm Anchor Parameter Calculation
Model Training Analysis
Analysis of Traffic Lights Detection Performance
Analysis of Traffic Lights Recognition Performance
Findings
Conclusions
