Abstract

The advent of Convolutional Neural Networks (CNNs) has led to their increased application in several domains. One noteworthy application is perception systems for autonomous driving, which rely on predictions from CNNs. On the one hand, predicting the learned objects with maximum accuracy is important. On the other hand, evaluating the reliability of CNN-based perception systems without ground-truth information remains a challenge, and such evaluations are significant for autonomous driving applications. One way to estimate reliability is to evaluate the robustness of the detections in the presence of artificial perturbations. However, several existing works on perturbation-based robustness quantification rely on ground-truth labels, and acquiring these labels is a tedious, expensive, and error-prone process. In this work, we propose a novel label-free metric for quantifying the robustness of CNN object detectors. We quantify the robustness of the detections to a specific type of input perturbation based on the prediction confidences: in short, we check the sensitivity of the predicted confidences under increasing levels of artificial perturbation, thereby avoiding the need for ground-truth annotations. We perform extensive evaluations on our traffic-light detector from autonomous driving applications as well as on public object detection networks and datasets. The evaluations show that our label-free metric is comparable to ground-truth-aided robustness scoring.
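The abstract's core idea can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's actual metric): it measures how a detector's predicted confidence degrades as Gaussian noise of increasing strength is applied, with no ground-truth labels involved. The function names, the noise model, and the toy detector are all assumptions for illustration.

```python
import numpy as np

def confidence_sensitivity_score(predict_confidence, image, noise_levels,
                                 trials=5, seed=0):
    """Label-free robustness sketch (illustrative, not the paper's metric).

    `predict_confidence` is a hypothetical stand-in for a CNN detector:
    it maps an image to a top prediction confidence in [0, 1]. We perturb
    the image with Gaussian noise at increasing strengths and average the
    resulting confidence drops relative to the clean prediction.
    """
    rng = np.random.default_rng(seed)
    base = predict_confidence(image)
    drops = []
    for sigma in noise_levels:
        confs = [predict_confidence(image + rng.normal(0.0, sigma, image.shape))
                 for _ in range(trials)]
        drops.append(max(0.0, base - float(np.mean(confs))))
    # Score of 1.0 means confidences are insensitive to the perturbation.
    return 1.0 - float(np.mean(drops)) / max(base, 1e-8)

# Toy detector whose confidence decays with pixel-level noise (illustrative).
def toy_detector(img):
    return float(np.clip(1.0 - 2.0 * np.std(img), 0.0, 1.0))

clean = np.zeros((8, 8))
score = confidence_sensitivity_score(toy_detector, clean,
                                     noise_levels=[0.05, 0.1, 0.2])
```

A perfectly noise-insensitive detector (constant confidence) would score 1.0 under this sketch, while the toy detector above scores lower because its confidence drops as noise increases.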
