Abstract

Image captioning and visual question answering (VQA) are typical tasks that connect computer vision and natural language processing. Both require representing the visual content effectively with computer vision methods and processing the text with natural language processing techniques. The core problem in both tasks is to infer the target output from a joint understanding of the word sequence and the image. Although they rely on similar algorithms in practice, they have been studied independently in recent years. In this article, we attempt to exploit the mutual correlation between these two tasks. We propose the first VQA-improved image-captioning method, which transfers the knowledge learned from VQA corpora to the image-captioning task. A VQA model is first pretrained on image-question-answer instances. The pretrained VQA model is then used to extract VQA-grounded semantic representations from selected free-form, open-ended visual question-answer pairs. The VQA-grounded features are complementary to the visual features because they interpret images from a different perspective. We incorporate the VQA model into the image-captioning model by adaptively fusing the VQA-grounded feature and the attended visual feature. We show that such simple VQA-improved image-captioning (VQA-IIC) models outperform conventional image-captioning methods on large-scale public datasets.
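The adaptive fusion of the VQA-grounded feature with the attended visual feature can be pictured as a learned gating step feeding the caption decoder. The sketch below is a minimal illustration of that idea, not the authors' implementation; the module name, gating design, and feature dimensions are all assumptions made for clarity.

```python
# Minimal sketch of an adaptive fusion step, assuming a scalar gate that
# weighs a VQA-grounded semantic feature against an attended visual feature.
# Names and dimensions are illustrative, not taken from the paper.
import torch
import torch.nn as nn


class AdaptiveFusion(nn.Module):
    def __init__(self, visual_dim: int, vqa_dim: int, hidden_dim: int):
        super().__init__()
        # Project both feature types into a shared space.
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)
        self.vqa_proj = nn.Linear(vqa_dim, hidden_dim)
        # Gate conditioned on both projected features, producing a value in (0, 1).
        self.gate = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),
        )

    def forward(self, attended_visual: torch.Tensor, vqa_grounded: torch.Tensor) -> torch.Tensor:
        v = torch.tanh(self.visual_proj(attended_visual))  # (batch, hidden_dim)
        q = torch.tanh(self.vqa_proj(vqa_grounded))        # (batch, hidden_dim)
        g = self.gate(torch.cat([v, q], dim=-1))           # (batch, 1)
        # Convex combination of the two views; the caption decoder would
        # consume this fused context at each decoding step.
        return g * v + (1.0 - g) * q


# Example: fuse a 2048-d attended CNN feature with a 1024-d VQA-grounded feature.
fusion = AdaptiveFusion(visual_dim=2048, vqa_dim=1024, hidden_dim=512)
fused = fusion(torch.randn(8, 2048), torch.randn(8, 1024))
print(fused.shape)  # torch.Size([8, 512])
```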
