Abstract
In recent years, end-to-end neural networks have achieved a dominant position on various open-source autonomous driving datasets, especially in environment perception [1,2]. However, these networks are difficult to deploy because of their high computational requirements. Moreover, many existing autonomous vehicles rely on edge computing devices with limited computational power, and the autonomous driving challenges faced by such vehicles are largely overlooked. In this paper, we propose an environmental perception system that comprises vision-based panoramic perception and late-fusion-based 3D detection and is suitable for vehicle-grade computing platforms with limited computational resources. Specifically, we first propose DFT-YOLOP, a dual-modal multitask network that uses visible-light and infrared data and is trained on the BDD100K [3] and M3FD [4] datasets. Tests on these datasets demonstrate that, compared with numerous baseline networks, DFT-YOLOP offers a substantial improvement in road feature recognition, exhibits greater stability in adverse weather conditions, and delivers superior real-time performance during deployment. Second, we propose a late-fusion algorithm that exploits the complementary strengths of different sensors, comprising Vision-LiDAR Fusion Detection and Radar Data Enhancement. Finally, using the Assisted Driving Service Framework (ADSF) provided by the MDC 300F computing platform, we build the Assisted Driving Perception System (ADPS). Experiments in real road scenarios show that, under the limited computational resources of the MDC 300F, ADPS achieves high perception accuracy and real-time performance and meets the perception requirements of autonomous driving at medium speeds. These results are of great significance for improving the perception performance of autonomous vehicles built on low-cost computing platforms.