Abstract

A novel Multiple Context Learning Network (MCLN) is proposed to model multiple contexts for visual question answering (VQA), aiming to learn comprehensive contexts. Three kinds of context are discussed, and three corresponding context learning modules are proposed based on a uniform context learning strategy: the visual context learning module (VCL), the textual context learning module (TCL), and the visual-textual context learning module (VTCL). The VCL and TCL learn the context of objects in an image and the context of words in a question, respectively, so that object and word features carry intra-modal context information. The VTCL operates on the concatenated visual-textual features, endowing the output features with synergistic visual-textual context information. Together, these modules form a multiple context learning (MCL) layer, and MCL layers can be stacked in depth for deep context learning. Furthermore, a contextualized text encoder based on pretrained BERT is introduced and fine-tuned, which enhances textual context learning at the text feature extraction stage. The approach is evaluated on two benchmark datasets: VQA v2.0 and GQA. The MCLN achieves 71.05% and 71.48% overall accuracy on the test-dev and test-std sets of VQA v2.0, respectively, and 57.0% accuracy on the test-standard split of GQA. The MCLN outperforms previous state-of-the-art models, and extensive ablation studies examine the effectiveness of the proposed method.
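
To make the layer structure concrete, the following is a minimal PyTorch sketch of one MCL layer. It assumes each context learning module is realized as multi-head self-attention with a residual connection and layer normalization; the paper's exact uniform context learning strategy, feature dimensions, and head counts are assumptions here rather than details taken from the source.

```python
# Minimal sketch of one multiple context learning (MCL) layer.
# Assumption: each context learning module (VCL, TCL, VTCL) is modeled as
# multi-head self-attention with a residual connection; the paper's exact
# formulation may differ.
import torch
import torch.nn as nn


class ContextLearningModule(nn.Module):
    """Generic context learning block shared by VCL, TCL, and VTCL."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Self-attention lets every element attend to all others,
        # injecting context information into each feature.
        ctx, _ = self.attn(x, x, x)
        return self.norm(x + ctx)


class MCLLayer(nn.Module):
    """One MCL layer: VCL on objects, TCL on words, VTCL on their concatenation."""

    def __init__(self, dim: int):
        super().__init__()
        self.vcl = ContextLearningModule(dim)   # visual context learning
        self.tcl = ContextLearningModule(dim)   # textual context learning
        self.vtcl = ContextLearningModule(dim)  # visual-textual context learning

    def forward(self, v: torch.Tensor, t: torch.Tensor):
        v = self.vcl(v)                               # intra-modal context for image objects
        t = self.tcl(t)                               # intra-modal context for question words
        joint = self.vtcl(torch.cat([v, t], dim=1))   # synergistic visual-textual context
        n_obj = v.size(1)
        return joint[:, :n_obj], joint[:, n_obj:]     # split back into visual and textual parts


# MCL layers can be stacked for deep context learning, e.g.:
# layers = nn.ModuleList([MCLLayer(512) for _ in range(6)])
```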

Highlights

  • An artificial intelligence agent must be able to understand the semantics of text and the content of images

  • Although attention-based approaches have dramatically improved the performance of visual question answering (VQA), they do not consider the context information of the image and the question

  • We propose a novel Multiple Context Learning Network (MCLN) to learn comprehensive contexts for VQA


Summary

Introduction

An artificial intelligence agent must be able to understand the semantics of text and the content of images. LGCN learns a deep visual context representation through iterative message passing, conditioned on the textual input, over a fully connected image object graph [19]. However, these models only consider context for the image modality. Moreover, due to long-term dependence between words, the RNN cannot sufficiently capture textual context at the text feature extraction stage. To tackle these issues, we propose a novel Multiple Context Learning Network (MCLN) to learn comprehensive contexts for VQA. (3) To further consider more comprehensive context learning, we adopt pretrained BERT as a contextualized text encoder and fine-tune it with its own learning rate during the training of MCLN, enhancing the textual context at the text feature extraction stage.
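
As an illustration of the contextualized text encoder, the sketch below uses the Hugging Face transformers library to extract BERT word features and assigns the BERT parameters a smaller learning rate than the rest of the network. The model name, the learning-rate values, and the mcln_head placeholder are illustrative assumptions, not the paper's reported settings.

```python
# Sketch of the contextualized text encoder: pretrained BERT producing
# contextualized word features, fine-tuned with a reduced learning rate.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

# Encode a question into contextualized word features.
inputs = tokenizer("What color is the umbrella?", return_tensors="pt")
word_features = bert(**inputs).last_hidden_state  # shape: (1, num_tokens, 768)

# Stand-in for the remaining MCLN parameters (hypothetical module).
mcln_head = torch.nn.Linear(768, 512)

# Separate parameter groups: a smaller learning rate for BERT fine-tuning,
# a larger one for the rest of the network (values are assumptions).
optimizer = torch.optim.Adam([
    {"params": bert.parameters(), "lr": 1e-5},
    {"params": mcln_head.parameters(), "lr": 1e-4},
])
```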

Multiple Context Learning Networks for VQA
Image and Question Representation
Experiment
Experimental Setup