Abstract

Explainability in Artificial Intelligence (AI) has become increasingly popular as a means of understanding model predictions, especially for noisy and uncertain observations close to the decision boundary. Bayesian Neural Networks (BNNs) infer posterior distributions over the weight parameters in order to express uncertainty in high-dimensional spaces. Since exact inference is intractable, we approximate a probabilistic model of the weight parameters in neural networks (NNs) using Automatic Differentiation Variational Inference (ADVI). This approximation is computationally expensive, which raises the question of how the resulting posterior distributions can be utilized in the decision-making process. We propose a novel measure for quantifying classification uncertainty based on samples from the NN posterior predictive distribution, with the aim of identifying and describing boundary observations. We compare this novel uncertainty measure, Generative Class Counts (GCC), with the posterior predictive standard deviation. We further introduce a novel metric that quantifies an uncertainty measure's ability to separate correctly classified test observations from incorrectly classified ones. To demonstrate the performance of the BNN as well as the GCC uncertainty measure, we perform image classification on the MNIST handwritten digits data set using a Bayesian Convolutional Neural Network. We show that GCC achieves the best performance in separating correctly classified test observations from incorrectly classified ones.
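The abstract does not spell out how GCC or the separation metric are computed. As a rough illustration only, the sketch below assumes GCC tallies the class labels predicted by S weight samples drawn from the approximate posterior for a single observation, and summarizes them as a scalar (here, one minus the modal class share); it further assumes the separation metric can be instantiated as the AUROC of uncertainty against misclassification, a standard choice. Function names and the scalar summary are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def generative_class_counts(class_samples: np.ndarray) -> dict:
    """Tally the classes predicted across posterior predictive samples
    for one test observation (assumed interpretation of GCC)."""
    classes, counts = np.unique(class_samples, return_counts=True)
    return dict(zip(classes.tolist(), counts.tolist()))

def gcc_uncertainty(class_samples: np.ndarray) -> float:
    """One plausible scalar summary: 1 - (modal class share).
    Samples that all agree score 0; samples scattered over many
    classes score close to 1."""
    _, counts = np.unique(class_samples, return_counts=True)
    return 1.0 - counts.max() / class_samples.size

def separation_auc(uncertainty: np.ndarray, misclassified: np.ndarray) -> float:
    """AUROC-style separation score: probability that a misclassified
    observation receives higher uncertainty than a correct one
    (one standard way to instantiate the separation metric)."""
    pos = uncertainty[misclassified]   # uncertainties of errors
    neg = uncertainty[~misclassified]  # uncertainties of correct predictions
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

# Hypothetical example: 100 posterior predictive labels for one MNIST image.
samples = np.array([7] * 80 + [1] * 15 + [9] * 5)
print(generative_class_counts(samples))  # {1: 15, 7: 80, 9: 5}
print(gcc_uncertainty(samples))          # 0.2 -> mild boundary uncertainty
```

Under these assumptions, an observation far from the decision boundary yields near-unanimous posterior predictive labels and a GCC uncertainty near zero, while a boundary observation spreads its samples across several classes; a perfect uncertainty measure would then score 1.0 on the separation metric.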
