Abstract
The reliable detection and neutralization of unmanned aircraft systems (UASs), known as counter-UAS (cUAS), is pivotal in restricted airspace. Applying deep learning (DL) classifiers to electro-optical (EO) sensor data is promising for cUAS, but it introduces three key challenges. First, DL-based cUAS produces point estimates at test time with no associated measure of uncertainty; the softmax outputs of typical DL models are often overconfident, making them an unreliable measure of uncertainty. Second, it easily triggers false-positive detections for birds and other aerial wildlife. Third, it cannot accurately characterize out-of-distribution (OOD) input samples. In this work, we develop an epistemic uncertainty quantification (UQ) framework that retains the advantages of DL while simultaneously producing uncertainty estimates on both in-distribution and OOD input samples. In this context, in-distribution samples are test samples drawn from the same data-generating process as the training data, and OOD samples are in-distribution samples that have been intentionally perturbed to shift the distribution of the test set away from that of the training set. Our framework produces a distributive estimate of each prediction, which accurately expresses uncertainty, as opposed to the point estimate produced by standard DL. Through evaluation on a custom field-collected dataset of images captured from EO sensors, and in comparison to prior cUAS baselines, we show that our framework expresses low uncertainty on in-distribution samples and high uncertainty on OOD samples while retaining accurate classification performance.
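The distinction between a point estimate and a distributive estimate can be illustrated with a small sketch. The abstract does not specify the UQ mechanism, so Monte Carlo dropout is used here purely as one illustrative way to obtain a distribution of predictions; the toy linear "network", its weights, and the three-class setup are all hypothetical and not part of the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy "network": a single linear layer mapping 4 features to 3 classes
# (e.g. UAS, bird, background clutter -- hypothetical labels).
W = rng.normal(size=(4, 3))
x = rng.normal(size=4)  # one input sample

# Standard DL inference: one deterministic forward pass -> point estimate.
point_estimate = softmax(x @ W)

# Distributive inference (Monte Carlo dropout as one illustrative choice):
# keep dropout active at test time and collect T stochastic forward passes.
T, p_drop = 200, 0.5
samples = []
for _ in range(T):
    mask = rng.random(4) > p_drop            # random dropout mask on the input
    samples.append(softmax((x * mask / (1 - p_drop)) @ W))
samples = np.stack(samples)

mean_pred = samples.mean(axis=0)         # averaged class probabilities
epistemic_spread = samples.std(axis=0)   # disagreement across passes ~ uncertainty
```

The spread across stochastic passes is what a point estimate cannot provide: a wide spread flags inputs (such as perturbed OOD samples) on which the model should report high uncertainty, even when the averaged probabilities look confident.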