The continued advancement of autonomous driving technology places higher demands on the accuracy of target detection in complex environments, particularly when traffic targets are occluded; existing algorithms still struggle to deliver both high detection accuracy and real-time performance under such conditions. To address this, this paper proposes an improved YOLOX algorithm based on adaptive deformable convolution, named OCC-YOLOX. The algorithm incorporates a coordinate attention mechanism to strengthen the feature extraction network's focus on occluded targets, introduces the Overlapping IoU (OL-IoU) loss function to better optimize the overlap between predicted and ground-truth bounding boxes, and adopts Fast Spatial Pyramid Pooling (Fast SPP) to reduce computational complexity while preserving real-time performance. Experiments on fused public datasets show that OCC-YOLOX improves precision, recall, and average precision by 2.76%, 1.25%, and 1.92%, respectively. Beyond evaluations on the KITTI, CityPersons, and BDD100K datasets, the algorithm's effectiveness is further validated on self-collected occlusion scene data. The experimental results indicate that OCC-YOLOX outperforms mainstream detection algorithms, particularly in complex occlusion scenarios, significantly enhancing both the accuracy and the efficiency of object detection. This study offers new insights for addressing occluded target detection in intelligent transportation systems.
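The abstract does not give the exact formulation of the OL-IoU loss. As a rough illustration of the general idea of an overlap-aware IoU loss, the sketch below combines the standard 1 − IoU term with a hypothetical penalty on uncovered ground-truth area; the function names, the coverage term, and the weight `lam` are assumptions for illustration, not the paper's actual formula.

```python
def iou(box_a, box_b):
    """Intersection-over-Union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def overlap_aware_iou_loss(pred, gt, lam=0.5):
    """Hypothetical overlap-aware loss: 1 - IoU plus a penalty for the
    fraction of the ground-truth box left uncovered by the prediction."""
    ix1 = max(pred[0], gt[0])
    iy1 = max(pred[1], gt[1])
    ix2 = min(pred[2], gt[2])
    iy2 = min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    gt_area = (gt[2] - gt[0]) * (gt[3] - gt[1])
    coverage = inter / gt_area if gt_area > 0 else 0.0
    return (1.0 - iou(pred, gt)) + lam * (1.0 - coverage)
```

A perfectly aligned prediction yields a loss of 0, while partially overlapping boxes are penalized both for low IoU and for leaving part of the ground-truth box uncovered, which is one plausible way to emphasize occluded-object overlap.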