Abstract

The performance of visual question answering (VQA) has improved greatly over the last few years. However, many current VQA models tend to rely on superficial linguistic correlations between questions and answers, often failing to sufficiently learn multi-modal knowledge from both vision and language, and thus suffering significant performance drops when the answer distribution shifts. To address this issue, the VQA-CP v2.0 dataset was developed to reduce language biases by greedily re-partitioning the training and test sets of VQA v2.0. Since achieving high performance on real-world datasets requires effective learning from minor classes, in this paper we analyze the skewed long-tail distribution of the VQA-CP v2.0 dataset and propose a new ensemble-based, parameter-insensitive framework. The framework is built on two representation learning branches and a joint learning block, which are designed to reduce language biases in VQA tasks. Specifically, the representation learning branches ensure that strong representations are learned from both the major and minor classes. The joint learning block forces the model to initially concentrate on the major classes for robust representation and then gradually shift its focus towards the minor classes for classification as training progresses. Experimental results demonstrate that our approach outperforms state-of-the-art methods on the VQA-CP v2.0 dataset without requiring additional annotations. Notably, on the "num" question type, our framework exceeds the second-best method (without extra annotations) by 8.64%. Meanwhile, our approach does not sacrifice accuracy on the VQA v2.0 dataset compared with the baseline model.
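The gradual shift of training emphasis from major to minor classes described above can be illustrated with a simple scheduled re-weighting of a classification loss. The sketch below is hypothetical and not the paper's actual joint learning block: it interpolates per-class loss weights from uniform (so frequent classes dominate early, yielding robust representations) toward inverse class frequency (so rare classes dominate late), controlled by the training step `t` out of `T` total steps.

```python
def scheduled_class_weights(class_counts, t, T):
    """Interpolate per-class loss weights over training.

    Early in training (t ~ 0) the weights are uniform, so frequent
    ("major") classes dominate the loss simply by their volume; late
    in training (t ~ T) the weights approach normalized inverse class
    frequency, emphasizing rare ("minor") classes instead.
    """
    n = len(class_counts)
    total = sum(class_counts)
    # Normalized inverse frequency: rare classes get weights > 1.
    inv_freq = [total / (n * c) for c in class_counts]
    # Linear ramp clipped to [0, 1]; other schedules (cosine,
    # exponential) would work the same way.
    alpha = min(max(t / T, 0.0), 1.0)
    return [(1.0 - alpha) * 1.0 + alpha * w for w in inv_freq]

# Example: a long-tail answer distribution with 3 classes.
# Early in training the weights are uniform; at the end, the
# rarest class carries the largest weight.
early = scheduled_class_weights([900, 90, 10], t=0, T=100)
late = scheduled_class_weights([900, 90, 10], t=100, T=100)
```

In practice such weights would be passed to a weighted cross-entropy loss (e.g., the `weight` argument of PyTorch's `CrossEntropyLoss`), rebuilt once per epoch as `t` advances.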
