Abstract

The prediction of a Deep Neural Network (DNN) may be unreliable outside of its training distribution, despite the high accuracy achieved during model training. Since the DNN may experience different degrees of accuracy degradation under different levels of distribution shift, it is important to predict its performance (accuracy) under such shifts. In this paper, we consider the end-to-end approach to autonomous driving, in which a DNN maps an input image to a control action such as the steering angle. For each input image, which may contain perturbations that cause distribution shifts, we design a Performance Prediction Module to compute an anomaly score, and use it to predict the DNN’s expected prediction error, i.e., its expected deviation from the ground-truth (optimal) control action, which is not available after deployment. If the expected prediction error is too large, the DNN’s prediction may no longer be trusted, and remedial actions should be taken to ensure safety. We consider different methods for computing the anomaly score of the input image, including the reconstruction error of an Autoencoder and the output of an Anomaly Detection algorithm applied to a hidden layer of the DNN. We evaluate the different methods in terms of both prediction accuracy and execution time on different hardware platforms, in order to provide a useful reference for the designer to choose among them.
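The reconstruction-error idea mentioned above can be illustrated with a minimal sketch. The linear (PCA-style) autoencoder, the rank-8 bottleneck, and the synthetic data below are all illustrative assumptions, not the paper's actual Performance Prediction Module; they only show how a higher reconstruction error can serve as an anomaly score for shifted inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# "In-distribution" training inputs, flattened to vectors (e.g. 8x8 patches).
train = rng.normal(size=(500, 64))

# Fit a linear autoencoder via truncated SVD (rank-8 bottleneck).
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:8]  # encoder: projection onto the top-8 directions


def anomaly_score(x):
    """Reconstruction error of x under the linear autoencoder."""
    code = (x - mean) @ components.T          # encode
    recon = code @ components + mean          # decode
    return float(np.linalg.norm(x - recon))   # L2 reconstruction error


# A perturbed (distribution-shifted) input should score higher than an
# in-distribution one, so its score can flag unreliable DNN predictions.
in_dist = rng.normal(size=64)
shifted = in_dist + 5.0  # crude stand-in for a distribution shift
print(anomaly_score(in_dist), anomaly_score(shifted))
```

In a deployed system, the score would be thresholded (or mapped through a regressor) to estimate the DNN's expected prediction error, as described in the abstract.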
