Abstract

This paper aims to improve the explainability of autoencoder (AE) predictions by proposing two novel explanation methods based on the mean and epistemic uncertainty of log-likelihood estimates, which arise naturally from the probabilistic formulation of the AE, the Bayesian autoencoder (BAE). These formulations contrast with conventional post-hoc explanation methods for AEs, which incur additional modelling effort and implementation. We further extend the methods to sensor-based explanations, aggregating the explanations at the sensor level rather than at the lower feature level. To evaluate the performance of explanation methods quantitatively, we test them on condition monitoring applications. Due to the lack of a common assessment of explanation methods, especially under covariate shift, we propose three evaluation metrics: (1) the G-mean of Spearman drift coefficients, (2) the G-mean of sensitivity-specificity of explanation ranking, and (3) a sensor explanation quality index (SEQI) which combines the first two metrics, capturing the explanations’ abilities to measure the degree of monotonicity and to rank the sensors. Surprisingly, we observe that the explanations of the BAE’s predictions suffer from high correlation, resulting in misleading explanations. This new observation cautions against trusting these explanations without further understanding of when they may fail. To alleviate this, a “Coalitional BAE” is proposed, inspired by agent-based system theory. The Coalitional BAE models each sensor independently, eliminating the correlation in explanations. Our comprehensive experiments on publicly available condition monitoring datasets demonstrate significant improvements of the Coalitional BAEs over the baseline Centralised AEs on the proposed metrics, visualised through critical difference diagrams.
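The three evaluation metrics are only named above, not defined. The sketch below illustrates one plausible way they could be computed; the function names, the top-k ranking cut-off, and the use of a geometric mean to combine the two G-means into SEQI are illustrative assumptions rather than the paper’s exact definitions.

```python
# Hedged sketch of the three evaluation metrics named in the abstract.
# Assumed names and the combination rule for SEQI are illustrative only.
import numpy as np
from scipy.stats import spearmanr


def gmean_spearman_drift(explanations, drift_levels):
    """G-mean of |Spearman| coefficients between each sensor's explanation
    score and the degree of covariate shift (monotonicity of explanations).
    `explanations` is shaped (n_samples, n_sensors)."""
    rhos = []
    for sensor_scores in explanations.T:  # one column per sensor
        rho, _ = spearmanr(sensor_scores, drift_levels)
        rhos.append(abs(rho))
    return float(np.prod(rhos) ** (1.0 / len(rhos)))


def gmean_sensitivity_specificity(ranked_sensors, drifting_sensors, top_k):
    """G-mean of sensitivity and specificity of the explanation ranking:
    do the top-k ranked sensors coincide with the sensors that drifted?"""
    predicted = set(ranked_sensors[:top_k])
    actual = set(drifting_sensors)
    all_sensors = set(ranked_sensors)
    tp = len(predicted & actual)
    tn = len((all_sensors - predicted) & (all_sensors - actual))
    sensitivity = tp / max(len(actual), 1)
    specificity = tn / max(len(all_sensors - actual), 1)
    return float(np.sqrt(sensitivity * specificity))


def seqi(g_drift, g_rank):
    """Sensor explanation quality index: one possible way to combine the two
    G-means (a geometric mean here; the paper's formula may differ)."""
    return float(np.sqrt(g_drift * g_rank))
```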
