Abstract

With the development of autonomous driving, the performance of intelligent vehicles is constrained by their inability to perceive blind areas and regions beyond visual range. Vehicle-to-infrastructure cooperative perception has become an effective way to achieve reliable, higher-level autonomous driving. This study proposes a vehicle-to-infrastructure cooperative beyond-visual-range and blind-area perception method based on heterogeneous sensors. First, a feature-map receptive-field enhancement module based on spatial dilated convolution, the spatial dilated convolution module (SDCM), was proposed and embedded into the YOLOv4 algorithm. The resulting YOLOv4-SDCM algorithm improved multi-object detection mAP by 1.65% on the BDD100K test set. In addition, the backbone of CenterPoint was improved with self-calibrated convolutions, coordinate attention, and a residual structure; the proposed CenterPoint-FE (Feature Enhancement) algorithm improved mAP by 3.25% on the ONCE dataset. Finally, a multi-object post-fusion matching method for heterogeneous sensors was designed to realize vehicle-to-infrastructure cooperative beyond-visual-range perception. Experiments conducted at urban intersections without traffic lights demonstrated that the proposed method effectively resolves the problem of beyond-visual-range perception for intelligent vehicles.
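The abstract's SDCM motivation rests on a standard property of dilated convolutions: they enlarge the receptive field of a feature map without adding parameters or reducing resolution. The sketch below illustrates that arithmetic only; the layer configurations are hypothetical examples, not the paper's actual SDCM design.

```python
def receptive_field(layers):
    """Receptive field of stacked conv layers.

    layers: sequence of (kernel_size, stride, dilation) tuples.
    Uses the standard recurrence rf += (effective_kernel - 1) * jump,
    where effective_kernel = dilation * (kernel_size - 1) + 1.
    """
    rf, jump = 1, 1
    for k, s, d in layers:
        k_eff = d * (k - 1) + 1      # dilation inflates the kernel's spatial span
        rf += (k_eff - 1) * jump     # growth scales with the cumulative stride
        jump *= s
    return rf

# Three 3x3 stride-1 convs: plain vs. dilated with rates 1, 2, 4
plain = receptive_field([(3, 1, 1)] * 3)                       # -> 7
dilated = receptive_field([(3, 1, 1), (3, 1, 2), (3, 1, 4)])   # -> 15
```

With the same parameter count, the dilated stack more than doubles the receptive field (15 vs. 7 pixels), which is the effect a receptive-field enhancement module exploits.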
