Abstract

For safety- and mission-critical systems relying on Convolutional Neural Networks (CNNs), it is crucial to avoid incorrect predictions that can cause accidents or financial losses. This can be achieved by quantifying and interpreting the predictive uncertainty. Current methods for uncertainty quantification rely on Bayesian CNNs that approximate Bayesian inference via dropout sampling. This paper investigates different dropout methods to robustly quantify the predictive uncertainty for misclassification detection. Specifically, the following questions are addressed: In which layers should activations be sampled? Which dropout sampling mask should be used? What dropout probability should be used? How should the number of ensemble members be chosen? How should ensemble members be combined? How should the classification uncertainty be quantified? To answer these questions, experiments were conducted on three datasets using three different network architectures. Experimental results showed that the classification uncertainty is best captured by averaging the predictions of all stochastic CNNs sampled from the Bayesian CNN and by validating the predictions of the Bayesian CNN with three uncertainty measures, namely the predictive confidence, predictive entropy, and standard deviation thresholds. The results further showed that the optimal dropout method, specified through the sampling location, sampling mask, inference dropout probability, and number of stochastic forward passes, depends on both the dataset and the designed network architecture. Notwithstanding this, I proposed to sample inputs to max pooling layers with a cascade of a Multiplicative Gaussian Mask (MGM) followed by a Multiplicative Bernoulli Spatial Mask (MBSM) to robustly quantify the classification uncertainty while keeping the loss in performance low.
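As a minimal sketch of the ensemble averaging and the three uncertainty measures named above (predictive confidence, predictive entropy, and standard deviation): assuming the softmax outputs of the T stochastic forward passes have already been collected, the measures can be computed as follows. The function and variable names are illustrative, not from the paper.

```python
import numpy as np

def mc_dropout_uncertainty(probs):
    """Summarize T stochastic forward passes for one input.

    probs: array of shape (T, C) -- softmax outputs of T forward
    passes with dropout active at inference time, over C classes.
    Returns (predicted class, confidence, entropy, std of winning class).
    """
    mean_probs = probs.mean(axis=0)            # ensemble average over passes
    pred = int(mean_probs.argmax())            # final prediction
    confidence = float(mean_probs.max())       # predictive confidence
    # predictive entropy of the averaged distribution (small eps for log(0))
    entropy = float(-np.sum(mean_probs * np.log(mean_probs + 1e-12)))
    # spread of the ensemble on the predicted class
    std = float(probs.std(axis=0)[pred])
    return pred, confidence, entropy, std
```

A prediction would then be accepted only if the confidence is above, and the entropy and standard deviation are below, their respective validation-set thresholds.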
