Deep neural networks have emerged as highly effective tools for computer vision systems, showcasing remarkable performance. However, the intrinsic opacity, potential biases, and vulnerability to shortcut learning of these models raise significant concerns about their practical application. To address these issues, this work employs saliency priors and explanations to enhance the credibility, reliability, and interpretability of neural networks. Specifically, we use a salient object detection algorithm to extract human-consistent priors from images for data augmentation. The identified saliency priors, along with explanations, serve as supervision signals that direct the network's focus to salient regions of the image. Additionally, contrastive self-supervised learning is incorporated to enable the model to discern the most discriminative concepts. Experimental results confirm the algorithm's ability to align model explanations with human priors, thereby improving interpretability. Moreover, the proposed approach enhances model performance in data-limited and fine-grained classification scenarios. Importantly, our algorithm is label-independent, allowing unlabeled data to be integrated during training. In practice, this method helps improve the reliability and interpretability of intelligent models for downstream tasks. Our code is available at https://github.com/DLAIResearch/SGC.
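The abstract describes combining a saliency prior, explanation supervision, and a contrastive term; the sketch below illustrates one plausible way such a training objective could be assembled. It is only an illustration under assumptions: the explanation method (a Grad-CAM-style map), the NT-Xent contrastive loss, and the names `explanation_alignment_loss`, `lambda_exp`, and `lambda_con` are all hypothetical choices, not the authors' exact formulation (see the repository linked above for the real implementation).

```python
# Minimal, hypothetical sketch: a saliency prior (e.g., a mask from an off-the-shelf
# salient object detection model) supervises the network's explanation, combined with
# a standard classification loss and a contrastive self-supervised term.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(num_classes=10)

def gradcam_map(model, images, targets):
    """Compute a simple Grad-CAM-style explanation from the last conv block."""
    feats = {}
    handle = model.layer4.register_forward_hook(lambda m, i, o: feats.update(act=o))
    logits = model(images)
    handle.remove()
    score = logits.gather(1, targets.unsqueeze(1)).sum()
    grads = torch.autograd.grad(score, feats["act"], create_graph=True)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)            # channel importance
    cam = F.relu((weights * feats["act"]).sum(dim=1))          # (B, h, w)
    cam = cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)    # normalize to [0, 1]
    return cam, logits

def explanation_alignment_loss(cam, saliency_prior):
    """Encourage the explanation heatmap to match the saliency prior (B, 1, H, W)."""
    prior = F.interpolate(saliency_prior, size=cam.shape[-2:],
                          mode="bilinear", align_corners=False).squeeze(1)
    return F.mse_loss(cam, prior)

def nt_xent(z1, z2, temperature=0.5):
    """Standard NT-Xent contrastive loss over two augmented views."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    sim = z @ z.t() / temperature
    n = z1.size(0)
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool, device=z.device), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def training_step(images, view2, saliency_prior, labels, lambda_exp=1.0, lambda_con=0.5):
    """One illustrative step: classification + explanation alignment + contrastive term."""
    cam, logits = gradcam_map(model, images, labels)
    loss = F.cross_entropy(logits, labels)
    loss = loss + lambda_exp * explanation_alignment_loss(cam, saliency_prior)
    loss = loss + lambda_con * nt_xent(logits, model(view2))   # a projection head is often used instead
    return loss
```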