A new algorithm, YOLO-APDM, is proposed to address the detection of low-quality, multi-scale targets in infrared road scenes. The method reconstructs the neck of the network using the idea of multi-scale attentional feature fusion and, on this basis, adds a P2 detection layer, which optimizes the network structure, strengthens multi-scale feature fusion, and expands the network's capacity to detect complex multi-scale targets. Replacing YOLOv8's C2f module with C2f-DCNv3 increases the network's ability to focus on the target region while reducing the number of model parameters. The MSCA mechanism is added after the backbone's SPPF module, directing the network's detection resources to the main road-target regions and further improving detection performance. Experimental results show that on the FLIR_ADAS_v2 dataset (retaining the eight main categories), YOLO-APDM improves mAP@0.5 and mAP@0.5:0.95 over YOLOv8n by 6.6% and 5.0%, respectively. On the M3FD dataset, mAP@0.5 and mAP@0.5:0.95 increase by 8.1% and 5.9%, respectively, while the number of model parameters and the model size are reduced by 8.6% and 4.8%. The method thus meets the design requirements for high-precision infrared road target detection while keeping model complexity under control.
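The core idea of attentional feature fusion across scales can be illustrated with a minimal, self-contained sketch. This is an illustrative toy, not the paper's implementation: it fuses two same-length per-channel feature descriptors (standing in for two pyramid levels) using softmax-normalized scalar attention scores, so the fused result emphasizes whichever scale scores higher. The function and score values are assumptions for illustration only.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scalar scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attentional_fuse(feat_a, feat_b, score_a, score_b):
    """Toy attentional fusion of two same-shape feature vectors.

    feat_a, feat_b: per-channel descriptors from two feature scales.
    score_a, score_b: scalar attention logits (learned in a real network;
    fixed here for illustration). Returns the attention-weighted sum.
    """
    w_a, w_b = softmax([score_a, score_b])
    return [w_a * a + w_b * b for a, b in zip(feat_a, feat_b)]

# With equal scores, both scales contribute equally (weights 0.5 each):
fused = attentional_fuse([1.0, 2.0], [3.0, 4.0], 0.0, 0.0)
# With a higher score for the second scale, its features dominate:
biased = attentional_fuse([1.0, 2.0], [3.0, 4.0], 0.0, 5.0)
```

In the actual network, the attention weights are produced per channel and per spatial location by learned layers rather than fixed scalars, but the weighted-sum fusion pattern is the same.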