Abstract
Neural networks are considered black-box models because their strength in modeling complex interactions makes their behavior almost impossible to explain. Even so, they remain highly attractive tools, as they have shown promising performance in a wide range of classification tasks. Layer-wise relevance propagation is a technique that explains the predictions of a neural network by propagating relevance backwards through its layers. In this work, we propose four adaptations of this technique for multi-label neural networks. The proposed methods provide new ways of distributing relevance between the output layer and the preceding ones. Their effectiveness is demonstrated through an experimental study based on evaluation criteria from the literature that measure explanation quality. The methods are applied to a case study in which a neural network detects secondary coinfections in patients infected with SARS-CoV-2. Overall, the proposed methods provide a post-hoc interpretability stage for the network's predictions.
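For readers unfamiliar with the propagation step mentioned above, the sketch below shows the standard LRP-epsilon rule for a single fully connected layer, as commonly described in the literature. It is not the paper's multi-label adaptations; the function name `lrp_epsilon` and the stabilization parameter `eps` are illustrative choices.

```python
import numpy as np

def lrp_epsilon(a, W, R_out, eps=1e-6):
    """Generic LRP-epsilon rule for one fully connected layer.

    a     : activations of the lower layer, shape (n_in,)
    W     : weight matrix, shape (n_in, n_out)
    R_out : relevance already assigned to the upper layer, shape (n_out,)
    Returns the relevance redistributed to the lower layer, shape (n_in,).
    """
    z = a[:, None] * W                                   # contributions z_jk = a_j * w_jk
    s = z.sum(axis=0)                                    # total contribution per upper neuron
    denom = s + eps * np.where(s >= 0, 1.0, -1.0)        # epsilon-stabilized denominator
    return (z / denom) @ R_out                           # R_j = sum_k z_jk / (s_k + eps) * R_k

# Example: redistribute the relevance of a 2-neuron layer to a 3-neuron layer.
a = np.array([0.5, 1.0, 0.2])
W = np.random.default_rng(0).normal(size=(3, 2))
R_out = np.array([1.0, 0.0])
print(lrp_epsilon(a, W, R_out))
```

Applied layer by layer from the output back to the input, a rule of this kind conserves (approximately) the total relevance, which is the property the proposed multi-label variants adapt at the output layer.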