Abstract

Visual Question Answering (VQA) has recently become a hot topic in computer vision. A key to solving VQA lies in how to fuse the multimodal features extracted from the image and the question. In this paper, we show that combining visual relationship reasoning and attention achieves more fine-grained feature fusion. Specifically, we design an effective and efficient module to reason about complex relationships between visual objects. In addition, a bilinear attention module is learned for question-guided attention over visual objects, which allows us to obtain more discriminative visual features. Given an image and a question in natural language, our VQA model learns the visual relational reasoning network and the attention network in parallel to fuse fine-grained textual and visual features, so that answers can be predicted accurately. Experimental results show that our approach achieves new state-of-the-art single-model performance on both the VQA 1.0 and VQA 2.0 datasets.
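To make the described parallel design concrete, the sketch below shows one way such a model could be wired up: a question-guided bilinear attention branch and a pairwise relation-reasoning branch over object features, whose outputs are fused to predict an answer. This is a minimal illustrative sketch under our own assumptions (layer sizes, the low-rank bilinear scoring, the relation-network-style pairwise MLP, and the concatenation-based fusion are all hypothetical choices), not the paper's exact architecture.

```python
# Minimal sketch (PyTorch) of a parallel attention + relation-reasoning VQA model.
# All module names, dimensions, and fusion choices are assumptions for illustration.
import torch
import torch.nn as nn


class ParallelAttentionRelationVQA(nn.Module):
    def __init__(self, v_dim=2048, q_dim=1024, h_dim=512, num_answers=3000):
        super().__init__()
        # Question-guided bilinear attention over object features
        # (low-rank bilinear score: score_i = (W_v v_i) . (W_q q)).
        self.att_v = nn.Linear(v_dim, h_dim)
        self.att_q = nn.Linear(q_dim, h_dim)
        # Pairwise relation reasoning over object features, conditioned on the question.
        self.rel_mlp = nn.Sequential(
            nn.Linear(2 * v_dim + q_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, h_dim), nn.ReLU(),
        )
        # Fusion of the two branches and answer classification.
        self.v_proj = nn.Linear(v_dim, h_dim)
        self.classifier = nn.Sequential(
            nn.Linear(2 * h_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, num_answers),
        )

    def forward(self, v, q):
        # v: (B, K, v_dim) object region features; q: (B, q_dim) question embedding.
        B, K, _ = v.shape

        # --- Attention branch: question-guided weights over the K objects ---
        scores = (self.att_v(v) * self.att_q(q).unsqueeze(1)).sum(-1)   # (B, K)
        alpha = torch.softmax(scores, dim=1)                            # (B, K)
        v_att = self.v_proj((alpha.unsqueeze(-1) * v).sum(1))           # (B, h_dim)

        # --- Relation branch: reason over all object pairs, conditioned on q ---
        vi = v.unsqueeze(2).expand(B, K, K, -1)
        vj = v.unsqueeze(1).expand(B, K, K, -1)
        qk = q.unsqueeze(1).unsqueeze(1).expand(B, K, K, -1)
        pair = torch.cat([vi, vj, qk], dim=-1)                          # (B, K, K, 2*v_dim + q_dim)
        v_rel = self.rel_mlp(pair).sum(dim=(1, 2))                      # (B, h_dim)

        # --- Fuse both branches and predict the answer distribution ---
        return self.classifier(torch.cat([v_att, v_rel], dim=-1))       # (B, num_answers)


if __name__ == "__main__":
    model = ParallelAttentionRelationVQA()
    v = torch.randn(2, 36, 2048)   # e.g. 36 region features per image
    q = torch.randn(2, 1024)       # question embedding (e.g. from a GRU)
    print(model(v, q).shape)       # torch.Size([2, 3000])
```

In this sketch the two branches never block each other, which mirrors the abstract's point that relational reasoning and attention are learned in parallel before fusion.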
