Abstract

Visual dialog is a task in which two agents, a Question-Bot (Q-BOT) and an Answer-Bot (A-BOT), communicate in natural language under information asymmetry. Q-BOT generates questions based on an image caption and the dialog history. A-BOT answers the questions grounded on the image. Moreover, we play a cooperative ‘image guessing’ game between Q-BOT and A-BOT, so that Q-BOT can select an unseen image from a set of images. However, as the useful information in the image caption and the dialog history fades over the course of the interaction, existing methods tend to generate irrelevant and homogeneous questions, which are of little value to the visual dialog system. To tackle this issue, we propose an Attentive Memory Network (AMN) to fully exploit the image caption and dialog history. Specifically, the attentive memory network mainly consists of a memory network and a fusion module. The memory network holds long-term dialog history and assigns each round of the dialog a different weight. Beyond the dialog history, the fusion modules in Q-BOT and A-BOT further use the image caption and the image features, respectively. The caption information assists Q-BOT with the attentive generation of questions, and the image features help A-BOT produce precise answers. With the AMN, the generated questions are diverse and focused, and the corresponding answers are accurate. Experimental results on VisDial v1.0 show the effectiveness of the proposed model, which outperforms state-of-the-art methods.
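The core mechanism described above — a memory that stores one embedding per past dialog round and weights each round by its relevance to the current state — can be sketched as dot-product attention over the round embeddings. This is a minimal illustration, not the paper's implementation; the function names, dimensions, and random stand-in embeddings are all assumptions for the example.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attend_memory(memory, query):
    """Weight each stored dialog round by its relevance to the query
    and return the attention-weighted summary of the history.

    memory: (rounds, d) array, one embedding per past dialog round
    query:  (d,) array, the current state (e.g. caption + latest question)
    """
    scores = memory @ query           # dot-product relevance, one score per round
    weights = softmax(scores)         # per-round weights, summing to 1
    summary = weights @ memory        # fused history vector
    return summary, weights

# toy example: 3 dialog rounds, 4-dim embeddings (random stand-ins)
rng = np.random.default_rng(0)
mem = rng.normal(size=(3, 4))
q = rng.normal(size=4)
summary, w = attend_memory(mem, q)
print(w)          # three non-negative weights summing to 1
print(summary)    # 4-dim weighted summary of the dialog history
```

The per-round weights play the role the abstract attributes to the memory network: later or more relevant rounds can dominate the summary instead of the history being averaged uniformly.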
