Abstract
Explanation methods for deep neural networks (DNNs) such as LRP [1], PatternAttribution [2], LIME [3], and DeepLIFT [4] have advanced the interpretation of convolutional neural networks (CNNs), but most reported results are obtained on networks with ReLU activations. In this paper, we investigate how explanation methods perform on networks with sigmoidal activations such as the logistic sigmoid and tanh. PatternAttribution is a recent approach that learns explanation patterns from data; we show that the saturated zones of sigmoidal functions pose difficulties for it. To address these issues, our first contribution generalizes global explanations to piece-wise dependent explanations. As a second contribution, we learn the PatternAttribution parameters in the near-linear activation zones of the sigmoids and fall back to LRP in the saturated zones. Finally, we introduce and evaluate a direct layer-wise Taylor approximation and show that LRP, as a deep-Taylor-motivated approach, outperforms this ad-hoc application of Taylor approximation. We report results on MNIST as well as on LSTM and GRU networks for two sentiment classification tasks, which are important application cases for models with sigmoidal activations. Our results demonstrate that the proposed method, Piece-wise PAtternLRP (PPAP), outperforms both PatternAttribution and LRP on networks with sigmoids, effectively combining their strengths.
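To illustrate the piece-wise idea described above, the following minimal sketch shows how a relevance backward pass through a single dense tanh layer could switch, per output neuron, between a PatternAttribution-style rule in the near-linear zone and an LRP epsilon-rule in the saturated zone. This is an assumption-laden illustration, not the authors' implementation: the threshold `lin_thresh`, the per-neuron switching granularity, and the function name `ppap_dense_tanh` are hypothetical choices made here for clarity.

```python
import numpy as np

def ppap_dense_tanh(x, W, b, A, R_out, lin_thresh=1.0, eps=1e-9):
    """Illustrative piece-wise relevance backward pass for one dense tanh layer.

    x       : (d_in,)          layer input
    W, b    : (d_in, d_out), (d_out,)  weights and biases
    A       : (d_in, d_out)    learned pattern vectors (PatternAttribution)
    R_out   : (d_out,)         relevance arriving from the layer above
    lin_thresh : |pre-activation| below which tanh is treated as near-linear
    """
    z = x @ W + b                          # pre-activations, shape (d_out,)
    near_linear = np.abs(z) < lin_thresh   # True where tanh is roughly linear

    # Split the incoming relevance by the activation regime of each output neuron.
    R_pat = np.where(near_linear, R_out, 0.0)   # handled by PatternAttribution
    R_lrp = np.where(near_linear, 0.0, R_out)   # handled by LRP (epsilon rule)

    # PatternAttribution-style step: propagate through the weights modulated
    # by the learned patterns.
    r_pattern = (W * A) @ R_pat

    # LRP epsilon-rule step: redistribute relevance in proportion to each
    # input's contribution x_i * w_ij to the pre-activation z_j.
    z_stab = z + eps * np.where(z >= 0, 1.0, -1.0)
    r_lrp = (x[:, None] * W) @ (R_lrp / z_stab)

    return r_pattern + r_lrp

if __name__ == "__main__":
    # Tiny random example with hypothetical shapes and stand-in patterns.
    rng = np.random.default_rng(0)
    d_in, d_out = 5, 3
    x = rng.normal(size=d_in)
    W = rng.normal(size=(d_in, d_out))
    b = rng.normal(size=d_out)
    A = rng.normal(size=(d_in, d_out))     # stand-in for learned patterns
    R_out = rng.normal(size=d_out)
    print(ppap_dense_tanh(x, W, b, A, R_out))
```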