Current autonomous driving systems predominantly focus on 3D object perception from the vehicle's own viewpoint. In contrast, single-camera 3D object detection in roadside monitoring scenarios provides three-dimensional perception of traffic objects, enabling more accurate collection and analysis of traffic information and thus reliable support for urban traffic safety. In this paper, we propose YOLOv7-3D, an algorithm designed specifically for single-camera 3D object detection from a roadside viewpoint. Our approach exploits multiple cues, including 2D bounding boxes, projected corner keypoints, and offset vectors relative to the 2D bounding-box centers, to improve the accuracy of 3D bounding-box detection. Additionally, we introduce a 5-layer feature pyramid network (FPN) and a multi-scale spatial attention mechanism to strengthen feature saliency for objects of different scales, further improving the detection accuracy of the network. Experimental results show that YOLOv7-3D achieves significantly higher detection accuracy on the Rope3D dataset while reducing computational complexity by 60%.