Abstract

Visual question answering (VQA) has advanced greatly with deep learning, but it remains an open problem for two reasons. First, previous works estimate the correctness of each candidate answer mainly by its semantic correlation with the visual question, overlooking the fact that some questions and their answers are semantically inconsistent. Second, previous works that require external knowledge mainly use knowledge facts retrieved by keywords or visual objects. However, the retrieved facts may be related only to the semantics of the question while being useless, or even misleading, for answer prediction. To address these issues, we investigate how to capture the purpose of visual questions and propose a Purpose Guided Visual Question Answering model, called PGVQA. It has two appealing properties: (1) it estimates the correctness of candidate answers based on the Question Purpose (QP), which reveals which aspects of a concept a visual question examines, helping to avoid the negative effect of semantic inconsistency between answers and questions; and (2) it incorporates knowledge facts that accord with the QP into answer prediction, which improves the probability of answering visual questions correctly. Empirical studies on benchmark datasets show that PGVQA achieves state-of-the-art performance.
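To make the two properties concrete, the sketch below illustrates the general idea in Python; it is not the authors' implementation, and all names (`score_answers`, `filter_facts`, the embeddings, and the `alpha`/`threshold` parameters) are hypothetical. Assuming the question, its inferred purpose, the candidate answers, and the retrieved knowledge facts are all embedded in a shared vector space, a candidate answer can be scored by blending its semantic similarity to the question with its compatibility with the QP, and retrieved facts can be kept only when they align with the QP rather than with the question's surface semantics alone.

```python
# Hypothetical illustration of purpose-guided scoring and knowledge filtering.
# Not the PGVQA implementation; embeddings and hyperparameters are assumptions.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity with a small epsilon for numerical stability."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def score_answers(question_emb, purpose_emb, answer_embs, alpha=0.5):
    """Blend question-answer semantic similarity with purpose compatibility,
    so an answer that matches the question's words but not its purpose
    receives a lower score."""
    return [
        alpha * cosine(question_emb, ans) + (1 - alpha) * cosine(purpose_emb, ans)
        for ans in answer_embs
    ]

def filter_facts(purpose_emb, fact_embs, threshold=0.4):
    """Keep only the retrieved knowledge facts whose embeddings align with
    the question purpose, discarding facts that merely share keywords."""
    return [i for i, fact in enumerate(fact_embs) if cosine(purpose_emb, fact) >= threshold]

# Toy usage with random embeddings standing in for learned representations.
rng = np.random.default_rng(0)
q, p = rng.normal(size=64), rng.normal(size=64)
answers = [rng.normal(size=64) for _ in range(4)]
facts = [rng.normal(size=64) for _ in range(10)]
print(score_answers(q, p, answers))
print(filter_facts(p, facts))
```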
