Abstract

Deep Convolutional Neural Networks (CNNs) still lack interpretability and are often treated as miraculous black-box machines. Therefore, when an intelligent system fails, it is usually difficult to troubleshoot the problem. Among others, these issues can be caused by incorrect decisions of the CNN classifier. Another cause can be selective “blindness” of the CNN, resulting from insufficient generalization of the convolutional feature extractor. To better understand CNN decisions, methods from the Class Activation Mapping (CAM) family have been introduced. In contrast to CAM techniques, which focus on the model's predictions (and thus on the classifier), we propose a simple yet informative way to visualize network activation: Network Activation Mapping (NAM). Our method targets the most important part of the CNN, the convolutional feature extractor. Unlike CAM methods, NAM is class- and classifier-independent and provides insight into what the neural network focuses on during feature extraction and which features it finds most prominent in the examined image. Owing to this classifier-independence, it can be used with any CNN model. In our experiments, we demonstrate how the performance of a convolutional feature extractor can be preliminarily evaluated using NAM. We also present results obtained with a simple NAM-based visual attention mechanism, which allows us to filter out less informative regions of the image and facilitates the decision-making process.
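
The abstract does not specify how NAM aggregates the extractor's activations, so the sketch below is only an illustration of the general idea: a classifier-independent activation map obtained from the final convolutional block, here assumed to be a channel-wise mean, upsampled to the input resolution. The choice of torchvision's ResNet-18 and of `layer4` as the tapped layer is likewise an assumption, not the authors' setup.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def activation_map(model, image, layer):
    """Classifier-independent activation map for one image (illustrative sketch)."""
    feats = {}
    # Capture the feature maps of the chosen convolutional layer via a forward hook.
    handle = layer.register_forward_hook(
        lambda module, inp, out: feats.setdefault("act", out.detach())
    )
    with torch.no_grad():
        model(image)  # forward pass; the classifier output is ignored entirely
    handle.remove()

    act = feats["act"]                   # shape: (1, C, H, W)
    nam = act.mean(dim=1, keepdim=True)  # assumed aggregation: mean over channels
    nam = F.interpolate(nam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    # Normalize to [0, 1] for visualization or for masking less informative regions.
    nam = (nam - nam.min()) / (nam.max() - nam.min() + 1e-8)
    return nam[0, 0]

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
image = torch.rand(1, 3, 224, 224)       # placeholder input image
nam = activation_map(model, image, model.layer4)
```

Because no class scores or classifier weights enter the computation, such a map depends only on the feature extractor, which is the property the abstract attributes to NAM; the resulting map could then serve as a soft attention mask over the input.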
