Abstract

In recent years, multiple-choice Visual Question Answering (VQA) has attracted considerable attention and achieved remarkable progress. However, most existing multiple-choice VQA models are heavily driven by statistical correlations in datasets; as a result, they fall short of genuine multimodal understanding and generalize poorly. In this paper, we identify two kinds of spurious correlations, i.e., a Vision-Answer bias (VA bias) and a Question-Answer bias (QA bias). To study these biases systematically, we construct NExT-OOD, a new video question answering (videoQA) benchmark in an out-of-distribution (OOD) setting, and propose a graph-based cross-sample method for bias reduction. Specifically, NExT-OOD is designed to quantify models' generalizability and to comprehensively measure their reasoning ability. It contains three sub-datasets, NExT-OOD-VA, NExT-OOD-QA, and NExT-OOD-VQA, which target the VA bias, the QA bias, and both biases jointly, respectively. We evaluate several existing multiple-choice VQA models on NExT-OOD and show that their performance degrades significantly compared with the results obtained on the original multiple-choice VQA dataset. Furthermore, to mitigate the VA and QA biases, our approach explicitly exploits cross-sample information through a contrastive graph matching loss, which provides debiasing guidance from the perspective of the whole dataset and encourages the model to focus on multimodal content instead of spurious statistical regularities. Extensive experiments show that our method significantly outperforms other bias reduction strategies, demonstrating its effectiveness and generalizability.
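
The abstract names a contrastive graph matching loss that uses cross-sample information but gives no implementation details. The following PyTorch sketch illustrates one plausible form of such an objective: an InfoNCE-style contrastive loss over graph-level embeddings, where non-matching samples in the batch serve as cross-sample negatives. Every design choice here (mean-pooled node features as the graph embedding, the symmetric InfoNCE formulation, the `temperature` parameter) is an assumption for illustration, not the paper's actual method.

```python
# Minimal sketch of a cross-sample contrastive loss over graph embeddings.
# All names and design choices below are illustrative assumptions; the
# abstract does not specify the actual loss formulation.
import torch
import torch.nn.functional as F


def graph_embedding(node_feats: torch.Tensor) -> torch.Tensor:
    """Pool per-node features (batch, nodes, dim) into one graph-level
    vector per sample via mean pooling (an assumed choice)."""
    return node_feats.mean(dim=1)


def cross_sample_contrastive_loss(video_nodes: torch.Tensor,
                                  question_nodes: torch.Tensor,
                                  temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss: the matched (video, question) pair in each row
    is the positive; all other samples in the batch act as negatives,
    supplying the cross-sample signal the abstract alludes to."""
    v = F.normalize(graph_embedding(video_nodes), dim=-1)     # (B, D)
    q = F.normalize(graph_embedding(question_nodes), dim=-1)  # (B, D)
    logits = v @ q.t() / temperature                          # (B, B) similarities
    targets = torch.arange(v.size(0), device=v.device)        # diagonal = positives
    # Symmetric loss: match video -> question and question -> video.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    # Toy usage: batch of 4 samples, 8 graph nodes each, 16-dim features.
    v = torch.randn(4, 8, 16)
    q = torch.randn(4, 8, 16)
    print(cross_sample_contrastive_loss(v, q).item())
```

Because the positives lie on the diagonal of a batch-wise similarity matrix, the loss compares each sample against all others in the batch, which is one concrete way to provide debiasing guidance "from the perspective of the whole dataset" as the abstract describes.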
