Abstract

Deep neural networks (DNNs) are effective tools for learning-enabled cyber-physical systems (CPSs) that handle high-dimensional image data. However, DNNs may make incorrect decisions when presented with inputs outside the distribution of their training data, and such inputs can compromise the safety of CPSs. It is therefore crucial to detect out-of-distribution (OOD) inputs and to interpret why they are classified as OOD. In this study, we propose an interpretable learning method to detect OOD inputs caused by meteorological features such as darkness, brightness, and rain. To achieve this, we employ a variational autoencoder (VAE) to map high-dimensional image data to a lower-dimensional latent space. We then focus on a single pre-selected latent dimension and encourage it to separate different intensities of a particular meteorological feature in a monotonically increasing order. This is accomplished by adding two terms to the VAE's loss function: a classification loss and a positional loss. During training, we make efficient use of label information for classification; remarkably, our results demonstrate that only 25% of the training-data labels are sufficient to train a single pre-selected latent dimension to classify different intensities of a specific meteorological feature. We evaluate the proposed method on two distinct datasets, CARLA and Duckietown, employing two different rain-generation methods, and show that our approach outperforms existing approaches by at least 15 percentage points in F1 score and precision when trained and tested on the CARLA dataset.
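To make the described objective concrete, the following is a minimal PyTorch sketch, not the authors' implementation, of a VAE loss augmented with a classification term and a positional term on one pre-selected latent dimension. The evenly spaced class anchors, the function name `augmented_vae_loss`, and the weights `lambda_cls` and `lambda_pos` are illustrative assumptions; the paper's exact formulation of the two terms may differ.

```python
import torch
import torch.nn.functional as F

def augmented_vae_loss(x, x_recon, mu, logvar, intensity_labels,
                       feature_dim=0, lambda_cls=1.0, lambda_pos=1.0,
                       num_classes=4):
    """Standard VAE (ELBO) terms plus two extra terms that shape a single
    pre-selected latent dimension to order feature intensities monotonically.
    Hypothetical sketch: anchor positions and loss weights are assumptions."""
    # Standard VAE terms: reconstruction error + KL divergence.
    recon = F.mse_loss(x_recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())

    # The single latent dimension being shaped.
    z_d = mu[:, feature_dim]

    # Evenly spaced anchors, one per intensity class, increasing along z_d.
    anchors = torch.linspace(-2.0, 2.0, num_classes, device=z_d.device)

    # Classification loss: treat negative squared distance to each anchor
    # as a logit, so samples are pushed toward their class's region of z_d.
    logits = -(z_d.unsqueeze(1) - anchors.unsqueeze(0)).pow(2)
    cls_loss = F.cross_entropy(logits, intensity_labels)

    # Positional loss: regress z_d toward its class anchor, encouraging a
    # monotonically increasing arrangement of intensities along z_d.
    pos_loss = F.mse_loss(z_d, anchors[intensity_labels])

    return recon + kld + lambda_cls * cls_loss + lambda_pos * pos_loss
```

In the partially labeled setting the abstract describes, the two extra terms would presumably be computed only on the labeled subset (e.g., 25% of the training data), with unlabeled samples contributing only the standard reconstruction and KL terms.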
