Abstract

Visual Question Answering (VQA) is a multi-modal, AI-complete task of answering natural language questions about images. The literature typically solves VQA with a three-phase pipeline: image and question featurisation, multi-modal feature fusion, and answer generation or prediction. Most prior work has focused on the second phase, where the multi-modal features are combined, ignoring the effect of the individual input features. This work investigates VQA's natural language question embedding phase by proposing a new question featurisation framework based on Supervised Term Weighting (STW) schemes. In addition, two new STW schemes integrating text semantics, qf.cos and tf.rf.sim, are introduced to boost the framework's performance. A series of experiments on the DAQUAR VQA dataset compares the new framework to conventional pre-trained word embeddings. STW schemes have been widely used in text classification research over the past few years; in light of this, further experiments verify the effectiveness of the two newly proposed STW schemes on the general text classification task.
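For orientation, the classical tf.rf scheme underlying the proposed tf.rf.sim can be sketched as follows: each term's frequency in a document is scaled by its relevance frequency, rf(t) = log2(2 + a / max(1, c)), where a and c are the numbers of positive- and negative-class documents containing t. This is a minimal illustrative sketch, not the paper's implementation; the function name and data layout are assumptions.

```python
import math
from collections import Counter

def tf_rf_weights(docs, labels, positive_label):
    """Illustrative sketch of the tf.rf supervised term weighting scheme.

    docs:   list of token lists (one per document)
    labels: class label per document
    rf(t) = log2(2 + a / max(1, c)), with a = positive-class document
    frequency of t and c = negative-class document frequency of t.
    """
    pos_df, neg_df = Counter(), Counter()
    for tokens, label in zip(docs, labels):
        for term in set(tokens):  # document frequency: count each term once per doc
            (pos_df if label == positive_label else neg_df)[term] += 1
    weighted = []
    for tokens in docs:
        tf = Counter(tokens)
        weighted.append({
            t: tf[t] * math.log2(2 + pos_df[t] / max(1, neg_df[t]))
            for t in tf
        })
    return weighted
```

A term seen only in the positive class gets a larger rf and thus a larger weight than one spread evenly across classes, which is the discriminative signal STW schemes exploit.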
