In recent years, Audio Visual Scene-Aware Dialog (AVSD) has been an active research task in the multimodal dialogue community and has also been a core part of the Dialog System Technology Challenge (DSTC). The task extends conventional visual question answering: video-relevant answers must be generated while taking into account multimodal contextual information from previous dialogue rounds. Despite recent advances on the AVSD task, two major challenges remain in developing such a system: how to model the multimodal contextual information across multiple rounds of dialogue, and how to integrate audio-visual information into the generation of textual responses. To tackle these two challenges, we propose a novel model, named DialogMCF, which models a multimodal context flow to generate responses that are relevant to the video scene. The proposed context flow modeling tracks the dynamics of topic information across multiple rounds of dialogue history. To achieve effective fusion of multimodal information, we propose an audio-visual memory network with cross-modality aligned features to model long multimodal dialogue context and thereby enhance the flow modeling. Furthermore, we attempt to improve the performance of the proposed DialogMCF model with manual descriptions and explore the incorporation of temporal reasoning information. Extensive experiments on the DSTC AVSD datasets show that, compared with a range of baseline methods, the proposed method yields state-of-the-art dialogue generation performance on most metrics when video descriptions are integrated.
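To make the audio-visual memory idea more concrete, the minimal sketch below shows one plausible way such a module could look: per-round dialogue states attend over video and audio feature memories and are updated with a gated readout. This is an illustrative assumption, not the authors' implementation; all module names, dimensions, and the gating strategy are hypothetical.

```python
# Illustrative sketch only (not the paper's released code): per-round dialogue
# states query visual and audio memories via cross-attention, and a gate mixes
# the two modality readouts before residually updating the context-flow states.
import torch
import torch.nn as nn


class CrossModalMemory(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        # Separate attention over each modality memory (text queries, A/V keys and values).
        self.visual_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.audio_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Gate that balances the visual and audio readouts (hypothetical fusion choice).
        self.gate = nn.Linear(2 * d_model, d_model)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, dialog_state, visual_feats, audio_feats):
        # dialog_state: (B, R, D) one state per dialogue round (the "context flow")
        # visual_feats: (B, Tv, D) video features; audio_feats: (B, Ta, D) audio features
        v_read, _ = self.visual_attn(dialog_state, visual_feats, visual_feats)
        a_read, _ = self.audio_attn(dialog_state, audio_feats, audio_feats)
        g = torch.sigmoid(self.gate(torch.cat([v_read, a_read], dim=-1)))
        # Residual, gated update of the per-round states with the audio-visual readout.
        return self.norm(dialog_state + g * (v_read + a_read))


if __name__ == "__main__":
    B, rounds, Tv, Ta, D = 2, 5, 40, 30, 256
    mem = CrossModalMemory(d_model=D)
    state = torch.randn(B, rounds, D)
    out = mem(state, torch.randn(B, Tv, D), torch.randn(B, Ta, D))
    print(out.shape)  # torch.Size([2, 5, 256])
```

The updated per-round states could then condition a text decoder, which is the general pattern the abstract describes; the actual DialogMCF architecture may differ in its alignment and memory details.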