Abstract

Object detection is a safety-critical aspect of autonomous driving, allowing vehicles to identify moving objects in the scene for tracking, prediction, and decision making. Current detectors, however, tend to provide point estimates for detected objects, which lack information on the variability of the prediction and how well it fits the model that produced it. Proper uncertainty estimation can be incorporated into traditional object detection pipelines to produce a measure of uncertainty alongside traditional point-estimate object predictions. In this work, uncertainty estimates are implemented for LiDAR and camera object detectors using Bayesian theory, and the resulting output distributions are assessed using signal detection theory to generate an uncertainty-based classifier that can evaluate its own performance. The classifier can be used to track the ratio of false positive to true positive detections, defined as the anomalous detections ratio. Findings from this work indicate that this novel metric is responsive to degraded driving conditions including nighttime driving and lens obstructions for the RGB camera, while in LiDAR data, the metric is responsive to snowfall and simulated rain conditions. These results are focused on the classification and regression of vehicle objects, making use of the sizeable ground-truth sets for vehicles that are provided in publicly available autonomous driving data sets.
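The anomalous detections ratio described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, the scalar uncertainty score per detection, and the fixed decision threshold (which the paper would instead derive from signal detection theory) are all assumptions.

```python
def flag_anomalous(uncertainties, threshold):
    """Flag a detection as anomalous (a likely false positive) when its
    predictive uncertainty exceeds the decision threshold.

    `uncertainties` is a hypothetical list of scalar uncertainty scores,
    one per detection; `threshold` stands in for a cutoff that would be
    tuned via signal detection theory (e.g., from an ROC analysis).
    """
    return [u > threshold for u in uncertainties]

def anomalous_detections_ratio(flags):
    """Ratio of detections flagged anomalous (treated as false positives)
    to detections retained (treated as true positives)."""
    false_positives = sum(flags)
    true_positives = len(flags) - false_positives
    if true_positives == 0:
        return float("inf")  # every detection was flagged anomalous
    return false_positives / true_positives

# Example: one high-uncertainty detection among four.
flags = flag_anomalous([0.9, 0.2, 0.1, 0.3], threshold=0.5)
ratio = anomalous_detections_ratio(flags)  # 1 flagged / 3 retained
```

Tracked over time, a rising ratio would signal degraded conditions (e.g., snowfall or a lens obstruction) without requiring ground-truth labels at run time.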
