Abstract

As a tool for explaining deep neural networks using gradient information, the Gradient-weighted Class Activation Map (Grad-CAM) offers a promising route toward explainable artificial intelligence. However, for vibration signals in machine fault diagnosis, the feature resolution of Grad-CAM decreases as network layers deepen, which weakens network explainability. To address this issue, a novel Multilayer Grad-CAM (MLG-CAM) is proposed as an effective tool to explain what networks have learned. In addition, three indicators are defined to quantify the explainability of deep neural networks. MLG-CAM uses the gradients flowing through multiple convolutional layers to obtain activation maps at different resolutions; a comprehensive activation map is then produced by a layer-weighted summation of these maps. Experiments indicate that MLG-CAM not only highlights cyclo-stationary impulses in the time domain but also emphasizes fault characteristic frequencies in the frequency domain. These results demonstrate that MLG-CAM is an effective way to explain deep neural networks and to build trust in them.
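To make the multilayer idea concrete, the following is a minimal sketch of combining per-layer Grad-CAM maps by a layer-weighted summation for a 1-D convolutional network. It is an illustrative reconstruction, not the authors' implementation: the toy model, the hook-based Grad-CAM routine, and the uniform layer weights are all assumptions.

```python
# Hypothetical sketch of a multilayer Grad-CAM combination for a 1-D CNN.
# Model architecture, layer choices, and layer weights are assumptions,
# not the MLG-CAM reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyCNN(nn.Module):
    """Toy 1-D CNN standing in for a fault-diagnosis network (assumed)."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.conv1 = nn.Conv1d(1, 8, kernel_size=9, padding=4)
        self.conv2 = nn.Conv1d(8, 16, kernel_size=9, padding=4)
        self.conv3 = nn.Conv1d(16, 32, kernel_size=9, padding=4)
        self.pool = nn.MaxPool1d(2)
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = self.pool(F.relu(self.conv3(x)))
        return self.fc(x.mean(dim=-1))  # global average pooling + classifier


def grad_cam_1d(model, layer, x, class_idx):
    """Standard Grad-CAM for a single convolutional layer of a 1-D network."""
    feats, grads = {}, {}
    h1 = layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))
    score = model(x)[0, class_idx]
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    a, g = feats["a"], grads["a"]              # (1, C, L) activations and gradients
    weights = g.mean(dim=-1, keepdim=True)     # channel weights = mean gradient
    cam = F.relu((weights * a).sum(dim=1))     # (1, L) class activation map
    return cam / (cam.max() + 1e-8)


def mlg_cam_1d(model, layers, layer_weights, x, class_idx, out_len):
    """Multilayer combination: upsample each layer's map to the input length,
    then sum with layer weights (uniform weights here are an assumption)."""
    combined = torch.zeros(1, out_len)
    for layer, w in zip(layers, layer_weights):
        cam = grad_cam_1d(model, layer, x, class_idx)
        cam = F.interpolate(cam.unsqueeze(1), size=out_len,
                            mode="linear", align_corners=False).squeeze(1)
        combined += w * cam
    return combined / (combined.max() + 1e-8)


if __name__ == "__main__":
    model = TinyCNN().eval()
    signal = torch.randn(1, 1, 1024)           # stand-in for a vibration signal
    cam = mlg_cam_1d(model,
                     layers=[model.conv1, model.conv2, model.conv3],
                     layer_weights=[1 / 3, 1 / 3, 1 / 3],
                     x=signal, class_idx=0, out_len=1024)
    print(cam.shape)                           # torch.Size([1, 1024])
```

In this sketch, shallow layers contribute high-resolution maps and deep layers contribute coarser but more class-discriminative maps; the weighted sum gives a single map at the input resolution, which is the general idea the abstract describes.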
