Abstract

To address the feature loss caused by compressing high-resolution images during the normalization stage, an adaptive clipping algorithm based on the You Only Look Once (YOLO) object detection algorithm is proposed for the data preprocessing and detection stages. First, a high-resolution training dataset is augmented with the adaptive clipping algorithm, generating a new training set that retains the detailed features the object detection network needs to learn. During detection, the image is processed in chunks via the adaptive clipping algorithm, and the coordinates of the chunk-level detections are merged through position mapping. Finally, the chunked detection results are combined with the global detection results and output. Experiments on vehicle detection in the test set compare the improved YOLO algorithm with the original. The results show that, compared with the original YOLO object detection algorithm, the precision of our algorithm increases from 79.5% to 91.9%, the recall increases from 44.2% to 82.5%, and the mAP@0.5 increases from 47.9% to 89.6%. Applying the adaptive clipping algorithm to the vehicle detection process effectively improves the performance of the traditional object detection algorithm.
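The chunked-detection idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tile size, overlap ratio, and helper names (`make_tiles`, `map_to_global`) are assumptions chosen for clarity. It shows the two coordinate steps the abstract mentions: tiling the high-resolution image so no region is lost to downscaling, and mapping each tile's local detection boxes back into global image coordinates by adding the tile's offset.

```python
import numpy as np

def make_tiles(width, height, tile=640, overlap=0.2):
    """Yield (x0, y0, x1, y1) windows that cover the image with overlap.

    Overlapping tiles reduce the chance that an object is cut in half
    at a tile boundary; tile size and overlap here are assumed values.
    """
    step = int(tile * (1 - overlap))
    xs = list(range(0, max(width - tile, 0) + 1, step)) or [0]
    ys = list(range(0, max(height - tile, 0) + 1, step)) or [0]
    # Add an extra tile flush with the right/bottom edge if uncovered.
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    for y in ys:
        for x in xs:
            yield x, y, min(x + tile, width), min(y + tile, height)

def map_to_global(boxes, x0, y0):
    """Shift tile-local (x1, y1, x2, y2) boxes into global coordinates."""
    boxes = np.asarray(boxes, dtype=float)
    if boxes.size == 0:
        return boxes.reshape(0, 4)
    return boxes + np.array([x0, y0, x0, y0], dtype=float)
```

In practice, each tile would be passed through the detector, the per-tile boxes shifted with `map_to_global`, and the shifted boxes pooled with the full-image detections before a final non-maximum suppression removes duplicates in the overlap regions.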
