Abstract

Text-based Visual Question Answering (TextVQA) is a recently proposed challenge that requires models to read text in images and answer natural language questions by jointly reasoning over the question, the textual information, and the visual content. The introduction of this new modality, Optical Character Recognition (OCR) tokens, brings demanding reasoning requirements. Most state-of-the-art (SoTA) VQA methods fail on these questions for three reasons: (1) poor text-reading ability; (2) a lack of textual-visual reasoning capacity; and (3) the use of a discriminative answering mechanism instead of a generative one (an issue further addressed by M4C). In this paper, we propose an end-to-end structured multimodal attention (SMA) neural network that mainly targets the first two issues. SMA first uses a structural graph representation to encode the object-object, object-text, and text-text relationships appearing in the image, and then designs a multimodal graph attention network to reason over it. Finally, the outputs of these modules are processed by a global-local attentional answering module that produces an answer by iteratively splicing together tokens from both the OCR text and a general vocabulary, following M4C.
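To make the relation-typed reasoning step concrete, the following is a minimal sketch, assuming PyTorch, of a graph attention layer in which every pair of object/OCR nodes attends under a projection selected by its relation type (object-object, object-text, text-object, text-text). The class name TypedGraphAttention, the feature dimension, and the per-type key/value projections are illustrative assumptions, not the authors' released SMA implementation; in particular, SMA's question-conditioned decomposition of the graph is omitted here.

# Sketch (assumed design, not the authors' code) of relation-typed multimodal
# graph attention over object and OCR-token nodes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TypedGraphAttention(nn.Module):
    def __init__(self, dim, num_types=4):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        # One key/value projection per edge type:
        # 0: object-object, 1: object-text, 2: text-object, 3: text-text
        self.keys = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_types)])
        self.values = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_types)])
        self.scale = dim ** -0.5

    def forward(self, nodes, edge_type):
        # nodes: (N, dim) features of the object + OCR nodes
        # edge_type: (N, N) long tensor giving the relation type of each node pair
        q = self.query(nodes)                               # (N, dim)
        k = torch.stack([m(nodes) for m in self.keys])      # (T, N, dim)
        v = torch.stack([m(nodes) for m in self.values])    # (T, N, dim)
        # For query node i attending to node j, pick j's key/value projected
        # under the type of edge (i, j).
        idx = torch.arange(nodes.size(0))
        k_sel = k[edge_type, idx]                            # (N, N, dim)
        v_sel = v[edge_type, idx]                            # (N, N, dim)
        attn = F.softmax((q.unsqueeze(1) * k_sel).sum(-1) * self.scale, dim=-1)
        return attn.unsqueeze(-1).mul(v_sel).sum(dim=1)      # (N, dim) updated nodes

# Toy usage: 3 object nodes followed by 2 OCR-token nodes.
nodes = torch.randn(5, 64)
is_text = torch.tensor([0, 0, 0, 1, 1])
edge_type = is_text.unsqueeze(1) * 2 + is_text.unsqueeze(0)  # types 0..3
layer = TypedGraphAttention(64)
updated = layer(nodes, edge_type)  # one reasoning step over the relation graph

A full model would stack such layers, condition the attention on the question representation, and feed the updated node features to the answering module.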
