Abstract

Various sensors are adopted by autonomous driving systems to perceive objects and the surroundings. Multi-sensor data fusion techniques are therefore essential to combine the advantages of different sensors for better perception performance. However, current multi-sensor data fusion techniques suffer from high computational cost, limited extensibility to more diverse sensors, and insufficient systematic consideration in modeling. This paper first constructs a detachable and extensible multi-sensor data fusion model based on three main modules: front fusion, global fusion, and a synthesizer, for which methods for flexible association gating and virtual targets have been designed. The model can be disassembled and configured for different vehicle trim levels and is easily extended with additional heterogeneous sensors. Next, the presented multi-sensor data fusion model is compared with the Cheap Joint Probabilistic Data Association (C-JPDA) method. The comparison shows that the designed model is more accurate in avoiding false associations and effectively narrows the variance of object detection. Finally, the presented model is integrated into an embedded system and tested on urban roads and highways with the Level 3 autonomous driving function engaged. The experimental results indicate that the proposed model delivers excellent sensor data fusion performance and provides accurate and timely object information in the Level 3 autonomous driving system.
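Association gating of the kind mentioned above is commonly realized as a chi-square validation gate on the squared Mahalanobis distance between a track's predicted position and an incoming sensor measurement. The following is a minimal sketch of that standard technique, not the paper's actual implementation; the function name, the 2-D position-only state, and the 99% chi-square threshold are illustrative assumptions.

```python
import numpy as np

def in_gate(track_pred, innov_cov, measurement, gate_threshold=9.21):
    """Return True if a measurement falls inside a track's validation gate.

    Computes the squared Mahalanobis distance between the predicted track
    position and the measurement. gate_threshold=9.21 is the ~99% chi-square
    quantile for 2 degrees of freedom (an assumed 2-D position state).
    """
    innovation = np.asarray(measurement) - np.asarray(track_pred)
    d2 = innovation @ np.linalg.inv(innov_cov) @ innovation
    return bool(d2 <= gate_threshold)

# Illustrative usage: a measurement 0.5 m from the prediction, with unit
# innovation covariance, lies well inside the gate.
track = np.array([10.0, 5.0])
cov = np.eye(2)
print(in_gate(track, cov, np.array([10.5, 5.0])))  # True
print(in_gate(track, cov, np.array([20.0, 5.0])))  # False
```

Only measurements that pass the gate are considered as association candidates, which is what keeps data association tractable when many sensors report many objects per cycle.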
