Abstract

Due to its rich spatio-temporal visual content and complex multimodal relations, Video Question Answering (VideoQA) has become a challenging task and attracted increasing attention. Current methods usually leverage visual attention, linguistic attention, or self-attention to uncover latent correlations between video content and question semantics. Although these methods exploit interactive information between different modalities to improve comprehension, inter- and intra-modality correlations cannot be effectively integrated in a unified model. To address this problem, we propose a novel VideoQA model called Cross-Attentional Spatio-Temporal Semantic Graph Networks (CASSG). Specifically, a multi-head multi-hop attention module with diversity and progressivity is first proposed to explore fine-grained interactions between different modalities in a crossing manner. Then, heterogeneous graphs are constructed from the cross-attended video frames, clips, and question words, in which multi-stream spatio-temporal semantic graphs are designed to synchronously reason over inter- and intra-modality correlations. Last, a global and local information fusion method is proposed to coalesce the local reasoning vector learned from the multi-stream spatio-temporal semantic graphs with the global vector learned from another branch to infer the answer. Experimental results on three public VideoQA datasets confirm the effectiveness and superiority of our model compared with state-of-the-art methods.
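
To make the cross-attention step more concrete, the sketch below shows one possible hop of multi-head attention applied "in a crossing manner" between video frame features and question word features, stacked to obtain progressive multi-hop refinement. This is a minimal illustration under assumed dimensions and layer choices, not the authors' exact CASSG module; the class name `CrossModalHop` and all hyperparameters are hypothetical.

```python
# Minimal sketch (not the authors' exact CASSG module): one hop of
# cross-modal multi-head attention between video frame features and
# question word features. Dimensions and layer choices are assumptions.
import torch
import torch.nn as nn

class CrossModalHop(nn.Module):
    def __init__(self, dim=512, num_heads=8):
        super().__init__()
        # Video features attend to question words, and vice versa.
        self.vid_to_txt = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.txt_to_vid = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_v = nn.LayerNorm(dim)
        self.norm_t = nn.LayerNorm(dim)

    def forward(self, video, question):
        # video:    (batch, num_frames, dim)
        # question: (batch, num_words, dim)
        v_attn, _ = self.vid_to_txt(video, question, question)  # video queries text
        t_attn, _ = self.txt_to_vid(question, video, video)     # text queries video
        video = self.norm_v(video + v_attn)        # residual update per hop
        question = self.norm_t(question + t_attn)
        return video, question

# Stacking several hops gives a "multi-hop" progressive refinement in the
# spirit of the abstract: each hop re-attends using the updated features.
video = torch.randn(2, 16, 512)     # toy batch: 16 frames
question = torch.randn(2, 12, 512)  # toy batch: 12 words
hop = CrossModalHop()
for _ in range(3):
    video, question = hop(video, question)
```

The cross-attended frame, clip, and word features produced by such a module would then serve as nodes of the heterogeneous spatio-temporal semantic graphs described above.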
