Abstract

Perception in automated vehicles (AV) is the main factor in achieving safe driving. Within this perception task, multi-object detection (MOD) across diverse driving situations is the central challenge. Our recent survey [Ravindran et al. (2021)] shows the limitations of deep neural networks (DNN) in predicting the uncertainties of object detection in MOD. This research proposes a camera, LiDAR, and RADAR sensor fusion Bayesian neural network (CLR-BNN) to improve detection accuracy and reduce uncertainty in diverse driving situations using these three primary sensing modalities. The experiments were performed on the nuScenes dataset with various injected noise. The CLR-BNN outperformed its deterministic sensor fusion counterpart (CLR-DNN) in terms of mean average precision (mAP). The CLR-BNN also reduced categorical and bounding-box location uncertainty through sensor fusion in diverse driving conditions. The uncertainty predictions of the CLR-BNN were validated using the calibration curve and other performance metrics.
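
The abstract does not detail how the CLR-BNN produces its categorical and bounding-box uncertainty estimates. One common approach in Bayesian object detection, shown as a minimal sketch below, is Monte Carlo dropout: keeping dropout active at inference and treating the spread across repeated stochastic forward passes as predictive uncertainty. This is an illustration under that assumption, not the authors' architecture; BayesianFusionHead, mc_predict, and all dimensions are hypothetical names chosen for the example.

```python
import torch
import torch.nn as nn

class BayesianFusionHead(nn.Module):
    """Toy detection head over fused camera+LiDAR+radar features.
    Dropout is kept active at inference so repeated stochastic forward
    passes approximate sampling from a posterior (Monte Carlo dropout)."""
    def __init__(self, in_dim=256, num_classes=10):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(in_dim, 128),
            nn.ReLU(),
            nn.Dropout(p=0.5),                # stochastic layer sampled at test time
            nn.Linear(128, num_classes + 4),  # class logits + box (x, y, w, h)
        )

    def forward(self, fused_features):
        return self.fc(fused_features)

@torch.no_grad()
def mc_predict(head, fused_features, num_samples=30):
    """Run multiple stochastic passes; the variance across samples gives
    categorical uncertainty (class scores) and box-location uncertainty."""
    head.train()  # keep dropout layers active during inference
    outs = torch.stack([head(fused_features) for _ in range(num_samples)])
    probs = outs[..., :-4].softmax(dim=-1)
    boxes = outs[..., -4:]
    return (probs.mean(0), probs.var(0),   # class mean and variance
            boxes.mean(0), boxes.var(0))   # box mean and variance

# Usage: fused features for a batch of 8 candidate objects
feats = torch.randn(8, 256)
head = BayesianFusionHead()
cls_mean, cls_var, box_mean, box_var = mc_predict(head, feats)
print(cls_mean.shape, box_var.shape)  # torch.Size([8, 10]) torch.Size([8, 4])
```

The per-sample variance of the class probabilities corresponds to the categorical uncertainty the abstract mentions, and the variance of the box coordinates to the bounding-box location uncertainty; calibration curves can then be computed by binning predicted confidences against observed accuracy.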
