Abstract

Deep neural networks (NNs) typically outperform traditional machine learning (ML) approaches on complicated, non-linear tasks. Deep learning (DL) is therefore expected to offer superior performance on the important non-proliferation task of predicting explosive device configuration from an observed optical signature, a task with which human experts struggle. However, supervised machine learning is difficult to apply in this mission space because most recorded signatures are not associated with the corresponding device description, or "truth labels." This is challenging for NNs, which traditionally require many labeled samples for strong performance. Semi-supervised learning (SSL), low-shot learning (LSL), and uncertainty quantification (UQ) for NNs are emerging approaches that could bridge the mission gaps of few labels and rare samples of importance. NN explainability techniques are important for gaining insight into the inferential feature importance of such a complex model. In this work, SSL, LSL, and UQ are merged into a single framework, a significant technical hurdle that has not previously been cleared. Exponential Average Adversarial Training (EAAT) and Pairwise Neural Networks (PNNs) serve as the SSL and LSL methods, respectively. Permutation feature importance (PFI) for functional data provides explainability via the Variable importance Explainable Elastic Shape Analysis (VEESA) pipeline. A variety of uncertainty quantification approaches are explored: Bayesian Neural Networks (BNNs), ensemble methods, concrete dropout, and evidential deep learning. Two final approaches, one utilizing ensemble methods and one utilizing evidential learning, are constructed and compared using a well-quantified synthetic 2D dataset along with the DIRSIG Megascene.
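
To illustrate one of the UQ approaches named above, the following is a minimal sketch (not the authors' implementation) of ensemble-based uncertainty quantification on a synthetic 1D regression problem: several independently initialized networks are trained on the same data, and the spread of their predictions serves as an epistemic uncertainty estimate. The dataset, network sizes, and ensemble size here are illustrative assumptions.

    # Minimal sketch of deep-ensemble UQ (illustrative assumptions throughout;
    # this is not the paper's configuration or code).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(500, 1))
    y = np.sin(X).ravel() + 0.1 * rng.standard_normal(500)  # noisy targets

    # Train several networks that differ only in their random initialization.
    ensemble = [
        MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=seed).fit(X, y)
        for seed in range(5)
    ]

    X_test = np.linspace(-4, 4, 200).reshape(-1, 1)
    preds = np.stack([m.predict(X_test) for m in ensemble])  # (members, samples)
    mean = preds.mean(axis=0)  # ensemble prediction
    std = preds.std(axis=0)    # disagreement across members as an uncertainty proxy

Outside the training range (here, |x| > 3), the members typically disagree more, so `std` grows, which is the behavior an ensemble-based UQ method relies on to flag unfamiliar inputs.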
