Abstract

As a crucial part of natural language processing, the event-centered commonsense inference task has attracted increasing attention. Given an observed event, the intentions and reactions of the people involved in the event must be inferred by artificial intelligence algorithms. To solve this problem, sequence-to-sequence methods have been widely studied, where the event is first encoded into a specific representation and then decoded to generate the results. However, all existing methods learn the event representation from textual information alone, ignoring the visual information that is actually helpful for commonsense inference. In this article, we first define a new task of multi-modal commonsense inference with both textual and visual information, and we provide a new event-centered multi-modal dataset. We then propose a multi-source knowledge reasoning graph network to solve this task, in which three kinds of relational knowledge are considered. Multi-modal correlations are learned to obtain the event's multi-modal representation from a global perspective. Intra-event object relations are explored with an object graph to capture fine-grained event features. Inter-event semantic relations are explored through external knowledge with an event graph to understand the semantic associations among events. Extensive experiments on the new dataset demonstrate the effectiveness of our method.
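The abstract only outlines the architecture, so the following is a minimal, hypothetical PyTorch sketch of how the three knowledge sources described above (global multi-modal correlation, an intra-event object graph, and an inter-event graph built from external knowledge) might be combined into a single event representation for a sequence-to-sequence decoder. It is not the authors' implementation; all module names, dimensions, and the simple attention and message-passing layers are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGraphLayer(nn.Module):
    """One round of message passing over a dense adjacency matrix (assumed form)."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, nodes, adj):
        # nodes: (batch, num_nodes, dim), adj: (batch, num_nodes, num_nodes)
        messages = torch.bmm(adj, self.proj(nodes))   # aggregate neighbor features
        return F.relu(nodes + messages)               # residual node update


class MultiSourceEncoder(nn.Module):
    """Hypothetical fusion of textual, visual-object, and event-graph features."""
    def __init__(self, dim=256):
        super().__init__()
        self.text_rnn = nn.GRU(dim, dim, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.object_graph = SimpleGraphLayer(dim)   # intra-event object relations
        self.event_graph = SimpleGraphLayer(dim)    # inter-event semantic relations
        self.fuse = nn.Linear(3 * dim, dim)

    def forward(self, text_emb, obj_feats, obj_adj, event_feats, event_adj):
        # Global multi-modal correlation: text attends to visual object features.
        text_enc, _ = self.text_rnn(text_emb)
        attended, _ = self.cross_attn(text_enc, obj_feats, obj_feats)
        global_repr = attended.mean(dim=1)

        # Fine-grained intra-event feature from the object graph.
        obj_repr = self.object_graph(obj_feats, obj_adj).mean(dim=1)

        # Inter-event semantics from an external-knowledge event graph.
        event_repr = self.event_graph(event_feats, event_adj).mean(dim=1)

        # Concatenate the three sources into one event representation
        # that a seq2seq decoder could condition on.
        return self.fuse(torch.cat([global_repr, obj_repr, event_repr], dim=-1))


if __name__ == "__main__":
    enc = MultiSourceEncoder()
    event = enc(
        text_emb=torch.randn(2, 12, 256),      # tokenized event text embeddings
        obj_feats=torch.randn(2, 8, 256),      # detected-object features
        obj_adj=torch.rand(2, 8, 8),           # object relation graph
        event_feats=torch.randn(2, 5, 256),    # related events from external knowledge
        event_adj=torch.rand(2, 5, 5),         # event semantic relations
    )
    print(event.shape)  # torch.Size([2, 256])
```

The specific choices here (a GRU text encoder, multi-head cross-attention, mean pooling, and dense adjacency matrices) are placeholders; the paper's actual graph construction and fusion strategy should be taken from the full text.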
