Abstract

This paper analyzes the predictions of image captioning models with attention mechanisms beyond visualizing the attention itself. We develop variants of Layer-wise Relevance Propagation (LRP) and gradient-based explanation methods, tailored to image captioning models with attention mechanisms. We systematically compare the interpretability of attention heatmaps against the explanations provided by methods such as LRP, Grad-CAM, and Guided Grad-CAM. We show that explanation methods simultaneously provide pixel-wise image explanations (supporting and opposing pixels of the input image) and linguistic explanations (supporting and opposing words of the preceding sequence) for each word in the predicted caption. We demonstrate with extensive experiments that explanation methods (1) can reveal additional evidence used by the model to make decisions compared to attention; (2) correlate to object locations with high precision; (3) are helpful to "debug" the model, e.g., by analyzing the reasons for hallucinated object words. Building on the observed properties of explanations, we further design an LRP-inference fine-tuning strategy that reduces object hallucination in image captioning models while maintaining sentence fluency. We conduct experiments with two widely used attention mechanisms: the adaptive attention mechanism computed with additive attention and the multi-head attention mechanism computed with the scaled dot product.
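For reference, the two attention forms named above follow their standard formulations (the notation below is generic and not taken from the paper): additive attention scores each image feature $v_i$ against the decoder state $h_t$, while multi-head attention uses the scaled dot product between queries and keys.

$$
e_{t,i} = \mathbf{w}^{\top}\tanh\!\left(W_h h_t + W_v v_i\right), \qquad
\alpha_{t,i} = \frac{\exp(e_{t,i})}{\sum_{j}\exp(e_{t,j})}
$$

$$
\operatorname{Attention}(Q, K, V) = \operatorname{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
$$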

Highlights

  • Image captioning is a task that aims at generating text descriptions from image representations

  • Attention weights are usually visualized as heatmaps indicating which parts of the image relate to the generated words (a minimal sketch of producing such an overlay follows this list)

  • Attention heatmaps are usually treated as a qualitative evaluation of image captioning models, complementing quantitative evaluation metrics such as BLEU [16], METEOR [17], ROUGE-L [18], CIDEr [19], and SPICE [20]
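As a concrete illustration of how such heatmaps are typically produced, the following minimal Python sketch upsamples one generated word's attention weights over the encoder's spatial feature grid and overlays them on the input image. The 14×14 grid size and the function name are illustrative assumptions, not details from the paper.

```python
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

def overlay_attention(image_path, alpha_weights, grid_size=(14, 14)):
    """Upsample a word's attention weights to image resolution and
    overlay them as a heatmap. `alpha_weights` is assumed to be a flat
    vector of attention weights over the encoder's spatial feature grid;
    the exact grid size depends on the CNN encoder."""
    img = Image.open(image_path).convert("RGB")
    attn = np.asarray(alpha_weights, dtype=np.float32).reshape(grid_size)
    # Normalize to [0, 1] for display.
    attn = (attn - attn.min()) / (attn.max() - attn.min() + 1e-8)
    # Bilinear upsampling of the coarse attention grid to the image size.
    attn_img = Image.fromarray(np.uint8(attn * 255)).resize(img.size, Image.BILINEAR)
    # Alpha-blend the heatmap over the original image.
    plt.imshow(img)
    plt.imshow(np.asarray(attn_img), cmap="jet", alpha=0.5)
    plt.axis("off")
    plt.show()
```

One heatmap is produced per generated word, so calling this once per decoding step yields the familiar word-by-word attention visualizations.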

Summary

INTRODUCTION

Image captioning is the task of generating text descriptions from image representations. To gain more insights into image captioning models, we adapt Layer-wise Relevance Propagation (LRP) and gradient-based explanation methods (Grad-CAM, Guided Grad-CAM [21], and Guided Backpropagation [22]) to explain image captioning predictions with respect to the image content and the words of the sentence generated so far. We quantitatively measure and compare the properties of explanation methods and attention mechanisms, including finding the features/evidence relevant to model decisions, grounding to image content, and the capability to debug the models (in terms of providing possible reasons for object hallucination and differentiating hallucinated words). We propose an LRP-inference fine-tuning strategy that reduces object hallucination and guides the models to be more precise and grounded on image evidence when predicting frequent object words.
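To make the Grad-CAM part concrete, here is a minimal PyTorch sketch of the computation for a single decoding step, assuming access to the encoder's convolutional feature maps and the decoder's pre-softmax score for the word being explained. The function name and tensor shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def grad_cam_for_word(feature_maps, word_score):
    """Grad-CAM relevance map for one decoding step.

    feature_maps: encoder activations of shape (1, C, H, W), kept in the
                  computation graph (e.g. captured via a forward hook).
    word_score:   the decoder's scalar pre-softmax logit for the word
                  being explained, still attached to the graph.
    Returns an (H, W) relevance map normalized to [0, 1].
    """
    # Gradient of the word score w.r.t. the spatial feature maps.
    grads, = torch.autograd.grad(word_score, feature_maps, retain_graph=True)
    # Channel weights: global-average-pool the gradients over space.
    weights = grads.mean(dim=(2, 3), keepdim=True)                 # (1, C, 1, 1)
    # Weighted combination of feature maps; ReLU keeps supporting evidence.
    cam = F.relu((weights * feature_maps).sum(dim=1)).squeeze(0)   # (H, W)
    return cam / (cam.max() + 1e-8)
```

Upsampling the returned map to the image resolution (as in the overlay sketch above) gives a per-word heatmap that can be compared directly against attention heatmaps and LRP relevance maps.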

Image Captioning
Towards de-biasing visual-language models
Explanation-guided training
Notations for image captioning models
Attention mechanisms used in this study
EXPLANATION METHODS FOR IMAGE CAPTIONING
Model preparation and implementation details
Explanation results and evaluation
Grad-CAM
Reducing object hallucination with explanation
Discussion and outlook
CONCLUSION
