Abstract

Deep Neural Networks (DNNs) are extensively deployed in today’s safety-critical autonomous systems thanks to their excellent performance. However, they are known to make mistakes unpredictably, e.g., a DNN may misclassify an object if it is used for perception, or issue unsafe control commands if it is used for planning and control. One common cause of such unpredictable mistakes is Out-of-Distribution (OOD) input samples, i.e., samples that fall outside the distribution of the training dataset. We present a framework for OOD detection based on outlier detection in one or more hidden layers of a DNN, using a runtime monitor based on either Isolation Forest (IF) or Local Outlier Factor (LOF). Performance evaluation indicates that LOF is a promising method in terms of both the Machine Learning metrics of precision, recall, F1 score, and accuracy, and computational efficiency during testing.
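To make the monitoring idea concrete, the sketch below (a minimal illustration, not the authors’ exact pipeline) fits scikit-learn’s LocalOutlierFactor and IsolationForest on hidden-layer activations of a small stand-in network and flags test inputs whose activations score as outliers; the network, the monitored layer, the synthetic data, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch: outlier detection (LOF / Isolation Forest) on hidden-layer
# activations of a DNN for OOD detection. The MLP, layer choice, synthetic
# data, and hyperparameters are placeholders, not the paper's setup.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

torch.manual_seed(0)

# Stand-in for a trained classifier; in practice, load the deployed model.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(),
                      nn.Linear(64, 32), nn.ReLU(),   # monitored hidden layer
                      nn.Linear(32, 10))

def hidden_activations(x, layer_idx=3):
    """Return activations after the chosen hidden layer (here: second ReLU)."""
    with torch.no_grad():
        h = x
        for i, layer in enumerate(model):
            h = layer(h)
            if i == layer_idx:
                return h.numpy()

# In-distribution data (placeholder) and a shifted batch standing in for OOD inputs.
x_train = torch.randn(1000, 20)
x_test  = torch.randn(200, 20) + 5.0

acts_train = hidden_activations(x_train)
acts_test  = hidden_activations(x_test)

# LOF in novelty mode and Isolation Forest, both fitted on in-distribution
# activations only; predict() returns +1 for inliers and -1 for outliers (OOD).
lof = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(acts_train)
iforest = IsolationForest(n_estimators=100, random_state=0).fit(acts_train)

print("LOF  fraction flagged OOD:", np.mean(lof.predict(acts_test) == -1))
print("IF   fraction flagged OOD:", np.mean(iforest.predict(acts_test) == -1))
```

Treating a -1 prediction as an OOD alarm mirrors the standard novelty-detection setting; in practice the activations would come from the deployed DNN, and thresholds or hyperparameters would be tuned on held-out data.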

Highlights

  • Machine Learning (ML), especially Deep Learning (DL) based on Deep Neural Networks (DNNs), has achieved tremendous success in many application domains

  • DNNs are known to be quite brittle to input variations, and may make mistakes in an unpredictable manner, e.g., a well-trained DNN with high accuracy may unpredictably misclassify an object if it is used for perception, or issue unsafe control commands if it is used for planning and control

  • We present a framework for OOD detection for DNNs based on two outlier detection algorithms, Isolation Forest (IF) and Local Outlier Factor (LOF)


Summary

INTRODUCTION

Machine Learning (ML), especially Deep Learning (DL) based on Deep Neural Networks (DNNs), has achieved tremendous success in many application domains. The main drawback of this method is its high runtime overhead in terms of both CPU cycles and memory size, which can grow to tens of GB even for relatively small DNNs. Henzinger et al. [21] proposed box abstraction-based monitoring, which performs k-means clustering of the activations in one or more hidden layers for each class during training and constructs, for each combination of class and cluster, a box abstraction that encodes the lower and upper bounds of all dimensions of the activation values. We propose to monitor one or more hidden layers of a DNN with two outlier/anomaly detection methods: Isolation Forest (IF) [24] and Local Outlier Factor (LOF) [25], [26]. Both are generic and popular techniques for anomaly detection [27], but they are typically applied directly to the input data samples, whereas we apply them to the neuron activations of the hidden layer(s) of a DNN for OOD detection of input samples. The BDD-based method [20] does not have this flexibility.
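For comparison, the following hedged sketch illustrates the box-abstraction idea described above for Henzinger et al. [21] (not their implementation): activations of each class are clustered with k-means, each cluster is summarized by an axis-aligned box of per-dimension lower and upper bounds, and a runtime activation is flagged as OOD if it falls outside every box of the predicted class; the cluster count k and the toy data are assumptions.

```python
# Hedged sketch of box-abstraction monitoring: k-means clusters per class over
# hidden-layer activations, one min/max box per cluster, and a runtime check.
import numpy as np
from sklearn.cluster import KMeans

def build_boxes(acts_per_class, k=3):
    """acts_per_class: dict mapping class -> (n_samples, n_dims) activation matrix."""
    boxes = {}
    for cls, acts in acts_per_class.items():
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(acts)
        boxes[cls] = [(acts[labels == c].min(axis=0), acts[labels == c].max(axis=0))
                      for c in range(k)]
    return boxes

def monitor(activation, predicted_class, boxes):
    """Return True if the activation lies inside some box of the predicted class."""
    return any(np.all(activation >= lo) and np.all(activation <= hi)
               for lo, hi in boxes[predicted_class])

# Toy usage with random placeholder activations for two classes.
rng = np.random.default_rng(0)
acts_per_class = {0: rng.normal(0, 1, (300, 32)), 1: rng.normal(3, 1, (300, 32))}
boxes = build_boxes(acts_per_class)
print(monitor(rng.normal(0, 1, 32), 0, boxes))   # in-distribution sample, may pass
print(monitor(rng.normal(10, 1, 32), 0, boxes))  # far-away sample, flagged as OOD
```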

OUR APPROACH
PERFORMANCE EVALUATION
EXPERIMENTAL RESULTS
We choose three comparison baselines.
Findings
Computational Efficiency