Abstract

Visual dialog is a challenging vision-language task in which a series of questions visually grounded in a given image are answered. To resolve the visual dialog task, a high-level understanding of various multimodal inputs (e.g., question, dialog history, and image) is required. Specifically, an agent must (1) determine the semantic intent of the question and (2) align question-relevant textual and visual content across the heterogeneous modality inputs. In this paper, we propose the Multi-View Attention Network (MVAN), which leverages multiple views of the heterogeneous inputs based on attention mechanisms. MVAN effectively captures question-relevant information from the dialog history with two complementary modules (i.e., Topic Aggregation and Context Matching), and builds multimodal representations through sequential alignment processes (i.e., Modality Alignment). Experimental results on the VisDial v1.0 dataset show the effectiveness of our proposed model, which outperforms previous state-of-the-art methods under both single-model and ensemble settings.
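
The module names above suggest a three-stage attention pipeline over the question, dialog history, and image. Below is a minimal, hypothetical PyTorch sketch of that idea, not the authors' implementation: the feature dimensions, the additive attention form, and the final fusion are assumptions made only to illustrate how a word-level Topic Aggregation view, a sentence-level Context Matching view, and a Modality Alignment step over image regions could fit together.

# Hypothetical sketch (not the authors' code): two "views" of the dialog
# history -- word-level Topic Aggregation and sentence-level Context
# Matching -- followed by Modality Alignment onto image region features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiViewAttentionSketch(nn.Module):
    def __init__(self, hidden_dim=512):
        super().__init__()
        self.topic_att = nn.Linear(hidden_dim, 1)    # scores history words against the question
        self.context_att = nn.Linear(hidden_dim, 1)  # scores whole history sentences
        self.visual_att = nn.Linear(hidden_dim, 1)   # scores image regions
        self.img_proj = nn.Linear(2048, hidden_dim)  # e.g., pre-extracted region features (assumed size)
        self.fuse = nn.Linear(hidden_dim * 3, hidden_dim)

    def forward(self, q_vec, hist_word_feats, hist_sent_feats, img_feats):
        # q_vec:            (B, D)       question representation
        # hist_word_feats:  (B, T, W, D) word-level features per history round
        # hist_sent_feats:  (B, T, D)    sentence-level features per history round
        # img_feats:        (B, R, 2048) image region features
        B, T, W, D = hist_word_feats.shape

        # Topic Aggregation view: question-conditioned attention over history words.
        word_scores = self.topic_att(torch.tanh(hist_word_feats + q_vec.view(B, 1, 1, D)))
        word_alpha = F.softmax(word_scores, dim=2)               # (B, T, W, 1)
        topic_view = (word_alpha * hist_word_feats).sum(dim=2)   # (B, T, D)
        topic_view = topic_view.mean(dim=1)                      # (B, D)

        # Context Matching view: question-conditioned attention over history sentences.
        sent_scores = self.context_att(torch.tanh(hist_sent_feats + q_vec.unsqueeze(1)))
        sent_alpha = F.softmax(sent_scores, dim=1)                # (B, T, 1)
        context_view = (sent_alpha * hist_sent_feats).sum(dim=1)  # (B, D)

        # Modality Alignment: attend over image regions with the combined text context.
        img = self.img_proj(img_feats)                            # (B, R, D)
        text_ctx = q_vec + topic_view + context_view
        reg_scores = self.visual_att(torch.tanh(img + text_ctx.unsqueeze(1)))
        reg_alpha = F.softmax(reg_scores, dim=1)                  # (B, R, 1)
        visual_view = (reg_alpha * img).sum(dim=1)                # (B, D)

        return self.fuse(torch.cat([q_vec + topic_view, context_view, visual_view], dim=-1))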

Highlights

  • We compare the results of our proposed model with previously published results on the VisDial v1.0 dataset for the following methods: Late Fusion (LF) [7], Hierarchical Recurrent Encoder (HRE) [7], Memory Network (MN) [7], Graph Neural Network (GNN) [12], Co-reference Neural Module Network (CorefNMN) [4], Recursive Visual Attention (RVA) [6], Synergistic Network [9], Dual Encoding Visual Dialogue (DualVD) [33], Context-Aware Graph (CAG) [14], History-Aware Co-Attention (HACAN) [34], Consensus Dropout Fusion (CDF) [11], Dual Attention Network (DAN) [5], and Factor Graph Attention (FGA).

  • For VisDial v1.0, our Multi-View Attention Network (MVAN) model outperforms the previous state-of-the-art methods with respect to the NDCG and AVG metrics and shows competitive results on the non-NDCG metrics (a minimal NDCG sketch follows these highlights).

  • We introduce MVAN for the visual dialog task.
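
As a reference for the NDCG results reported above, here is a minimal sketch of the metric as used in VisDial-style evaluation (dense relevance scores over 100 candidate answers). Details such as setting the rank cutoff to the number of relevant candidates follow the common VisDial protocol and are an assumption, not the authors' evaluation code.

# Minimal NDCG sketch for VisDial-style evaluation (assumed protocol).
import numpy as np

def ndcg(pred_scores, relevance):
    # pred_scores: (100,) model scores for the candidate answers
    # relevance:   (100,) dense human-annotated relevance in [0, 1]
    k = int((relevance > 0).sum())          # rank cutoff = number of relevant answers
    order = np.argsort(-pred_scores)        # model ranking, best first
    ideal = np.sort(relevance)[::-1]        # ideal ranking by true relevance
    discounts = 1.0 / np.log2(np.arange(2, k + 2))
    dcg = (relevance[order][:k] * discounts).sum()
    idcg = (ideal[:k] * discounts).sum()
    return dcg / idcg if idcg > 0 else 0.0

# Example: a perfectly ranked candidate list gets NDCG = 1.0.
rel = np.array([1.0, 0.5] + [0.0] * 98)
print(ndcg(np.linspace(1.0, 0.0, 100), rel))  # -> 1.0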


Summary

Introduction

Visual dialog is a challenging vision-language task in which a series of questions visually grounded in a given image are answered. To resolve the visual dialog task, a high-level understanding of various multimodal inputs (e.g., question, dialog history, and image) is required. Various vision-language tasks (e.g., Visual Question Answering (VQA), image captioning, referring expressions, etc.) have been introduced in recent years. Considerable efforts in this field have advanced the capabilities of artificial intelligence agents a step further, but the agent's comprehension of multimodal information is still far from human-level reasoning and cognitive ability [1,2]. MVAN effectively captures question-relevant information from the dialog history with two complementary modules (i.e., Topic Aggregation and Context Matching), and builds multimodal representations through sequential alignment processes (i.e., Modality Alignment).
