Abstract

Environmental perception is one of the key technologies for realizing autonomous vehicles. Autonomous vehicles are often equipped with multiple sensors that form a multi-source environmental perception system. These sensors are sensitive to light and background conditions, and during long-term operation they can introduce a variety of global and local fault signals that pose serious safety risks to the autonomous driving system. In this paper, a real-time data fusion network with a fault diagnosis and fault tolerance mechanism is designed. By introducing prior features to make the network lightweight, the features of the input data can be extracted in real time. A new sensor reliability evaluation method is proposed that calculates the global and local confidence of each sensor. Exploiting the temporal and spatial correlation between sensor data, sensor redundancy is used to diagnose the local and global confidence of sensor data in real time, eliminate faulty data, and ensure the accuracy and reliability of data fusion. Experiments show that the network achieves state-of-the-art results in speed and accuracy, and can accurately detect the location of the target even when some sensors are out of focus or out of order. The proposed fusion framework is thereby shown to be effective for intelligent vehicles in terms of both real-time performance and reliability.
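The confidence formulas themselves are not reproduced in this excerpt, so the following is only a minimal sketch of the idea: a global (temporal) confidence per sensor, a local (spatial) confidence per region derived from cross-sensor redundancy, and a confidence-weighted fusion that rejects globally faulty sensors. The function names, agreement metrics, and fault threshold are all illustrative assumptions, not the paper's exact formulation.

import numpy as np

def global_confidence(history: np.ndarray, current: np.ndarray) -> float:
    """Temporal consistency: how well the current frame agrees with the
    sensor's own recent history (1 = perfectly consistent)."""
    drift = np.linalg.norm(current - history.mean(axis=0))
    scale = np.linalg.norm(history.std(axis=0)) + 1e-6
    return float(np.exp(-drift / scale))

def local_confidence(current: np.ndarray, peers: list[np.ndarray]) -> np.ndarray:
    """Spatial consistency per region: agreement of each local patch with
    the median of redundant sensors covering the same region."""
    consensus = np.median(np.stack(peers), axis=0)
    err = np.abs(current - consensus)
    return np.exp(-err / (np.abs(consensus) + 1e-6))

def fuse(frames: list[np.ndarray], histories: list[np.ndarray],
         fault_threshold: float = 0.3) -> np.ndarray:
    """Confidence-weighted fusion; sensors whose global confidence falls
    below the threshold are diagnosed as faulty and excluded."""
    weights, kept = [], []
    for i, frame in enumerate(frames):
        g = global_confidence(histories[i], frame)
        if g < fault_threshold:          # global fault: drop the sensor
            continue
        peers = [f for j, f in enumerate(frames) if j != i]
        w = g * local_confidence(frame, peers)  # down-weight local faults
        weights.append(w)
        kept.append(frame)
    if not kept:
        raise ValueError("all sensors diagnosed as faulty")
    w = np.stack(weights)
    return (w * np.stack(kept)).sum(axis=0) / (w.sum(axis=0) + 1e-6)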

Highlights

  • Road object detection is one of the core technologies of autonomous vehicles

  • To ensure real-time object detection when processing large-scale multimodal data, this paper proposes a lightweight design for the YOLO V3 network, greatly reducing the load on the system's computation and storage units

  • When designing the network framework, the size of the feature maps is reduced by introducing prior features, thereby greatly reducing the computing power required by the network (see the sketch after this list)
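As referenced in the last highlight, here is a hypothetical sketch of how fixed prior features, computed directly at reduced resolution, could let a small learned stem operate on half-size feature maps and so cut downstream FLOPs. The choice of prior (a Sobel edge response as a stand-in) and all layer sizes are illustrative assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class PriorFeatures(nn.Module):
    """Fixed (non-learned) Sobel filters used as prior feature channels."""
    def __init__(self):
        super().__init__()
        kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        self.register_buffer("kernels", torch.stack([kx, kx.t()]).unsqueeze(1))

    def forward(self, x):                     # x: (N, 1, H, W) grayscale
        # stride 2: priors are computed at half resolution directly
        return torch.conv2d(x, self.kernels, stride=2, padding=1)

class LightweightStem(nn.Module):
    """Small learned stem operating on downsampled input plus priors."""
    def __init__(self, out_ch: int = 32):
        super().__init__()
        self.priors = PriorFeatures()
        self.pool = nn.AvgPool2d(2)           # halve the raw input too
        self.conv = nn.Sequential(
            nn.Conv2d(1 + 2, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.LeakyReLU(0.1),
        )

    def forward(self, x):                     # (N, 1, H, W) -> (N, C, H/2, W/2)
        return self.conv(torch.cat([self.pool(x), self.priors(x)], dim=1))

# Usage: half-resolution feature maps mean roughly 4x fewer FLOPs downstream.
stem = LightweightStem()
feat = stem(torch.randn(1, 1, 416, 416))      # -> torch.Size([1, 32, 208, 208])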


Summary

Introduction

Road object detection is one of the core technologies of autonomous vehicles: it provides autonomous vehicles with real-time information on road elements such as surrounding vehicles and pedestrians. Data fusion methods can be divided into pre-fusion and post-fusion methods according to where the fusion occurs. The former fuses the sensor data at the original input layer and designs the object detection network to operate on the fused data [15,16,17,18]. (1) Compared with previous object detection networks, we use a more lightweight feature pyramid network (FPN) [25] structure to ensure the real-time performance of the data fusion system when processing large-scale multi-modal data. (2) The proposed FDA mechanism in the data fusion framework guarantees that sensor fault signals are eliminated in real time and that the detection results remain accurate and reliable. Bringing the subsequent residual structures into Eq. (1) for calculation and summation, the total computing power of the network's FE layer is 0.686 × 10⁹ FLOPs; compared with the 18.569 × 10⁹ FLOPs required by DarkNet-53, the network proposed in this paper requires roughly 1/27 of that computing power.
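Eq. (1) itself is not reproduced in this excerpt; assuming the standard per-layer convolution count FLOPs = 2 · K² · C_in · C_out · H_out · W_out, the following worked example illustrates the kind of accounting behind the quoted totals. The specific layer shape is an assumption for illustration only.

def conv_flops(k: int, c_in: int, c_out: int, h_out: int, w_out: int) -> int:
    """FLOPs of one conv layer (multiplies and adds counted separately)."""
    return 2 * k * k * c_in * c_out * h_out * w_out

# e.g. a single 3x3, 32 -> 64 channel conv on a 208x208 feature map:
layer = conv_flops(k=3, c_in=32, c_out=64, h_out=208, w_out=208)
print(f"{layer / 1e9:.3f} GFLOPs")  # ~1.595 GFLOPs for this one layer

# Summing such per-layer terms over all FE layers yields the paper's
# totals: 0.686e9 FLOPs for the proposed network vs 18.569e9 FLOPs for
# DarkNet-53, a ratio of about 1/27 (18.569 / 0.686 ≈ 27).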
