Object detection is becoming increasingly critical in autonomous driving. However, the accuracy and effectiveness of object detectors are often constrained by the obscuration of object features and details in adverse weather conditions. This paper therefore presents DAN-YOLO, a vehicle object detector designed specifically for driving in adverse weather. Building on the YOLOv7-Tiny network, SPP is replaced with SPPF, yielding the SPPFCSPC structure and improving processing speed. Hybrid Dilated Convolution (HDC) is then introduced to enhance the SPPFCSPC and ELAN-T structures, expanding the network's receptive field (RF) while keeping the design lightweight. An efficient multi-scale attention (EMA) mechanism is further incorporated to strengthen feature fusion. Finally, the Wise-IoUv1 loss function replaces CIoU to improve bounding-box (bbox) localization accuracy and model convergence speed. With an input size of 640 × 640, the proposed DAN-YOLO algorithm improves mAP@0.5 by 3.4% and 6.3% over YOLOv7-Tiny on the BDD100K and DAWN benchmarks, respectively, while sustaining real-time detection (142.86 FPS). Compared with other state-of-the-art detectors, it achieves a better trade-off between detection accuracy and speed under adverse driving conditions, indicating its suitability for autonomous driving applications.
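The receptive-field expansion that HDC provides can be illustrated with a short calculation. This is a sketch only: the dilation rates [1, 2, 5] below are the pattern commonly used in HDC to avoid gridding artifacts, not necessarily the exact rates chosen in this paper.

```python
def stacked_rf(kernel_size, dilations):
    """Receptive field of a stack of stride-1 convolutions applied in sequence.

    Each layer adds (kernel_size - 1) * dilation to the receptive field,
    since stride-1 layers keep the feature-map step (jump) equal to 1.
    """
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# HDC-style mixed rates vs. a uniform dilation rate, three 3x3 conv layers each
print(stacked_rf(3, [1, 2, 5]))  # HDC rates -> RF = 17
print(stacked_rf(3, [2, 2, 2]))  # uniform rate -> RF = 13
```

With the same number of layers and parameters, the mixed HDC rates reach a larger receptive field (17 vs. 13) while covering all input positions, which is the lightweight RF expansion the abstract refers to.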