Abstract

Video visual relation inference is the task of automatically detecting relation triplets of the form ⟨subject, predicate, object⟩ between objects observed in videos, which requires correctly labeling each detected object and the predicates describing their interactions. Despite recent advances in image visual relation detection using deep learning techniques, relation inference in videos remains challenging. On one hand, the introduction of the temporal dimension requires modeling rich spatio-temporal visual information for both objects and videos. On the other hand, wild videos are often annotated with incomplete relation triplet tags, and some of these tags semantically overlap. Previous methods, however, adopt hand-crafted visual features extracted from object trajectories, which describe only the local appearance of isolated objects, and they treat the problem as a multi-class classification task, which makes the relation tags mutually exclusive. To address these issues, we propose a novel model, termed the Visual-Semantic Relation Network (VSRN). In this network, we leverage three-dimensional convolution kernels to capture spatio-temporal features and encode global visual features of videos through a pooling operation on each time slice. Moreover, the semantic collocations between objects are incorporated to obtain comprehensive representations of the relationships. For relation classification, we treat the problem as a multi-label classification task and regard each tag as independent, so that multiple relationships can be predicted for the same object pair. Additionally, we modify the commonly used video-wise recall metric into a pair-wise metric (Roop) to evaluate how well models predict multiple relationships for each object pair. Extensive experimental results on two large-scale datasets demonstrate the effectiveness of the proposed model, which significantly outperforms previous works.
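
To make the two architectural ideas named above concrete, the following is a minimal, hypothetical PyTorch sketch, not the authors' released code: a 3D-convolutional encoder whose output is pooled on each time slice to form a global visual feature, and a relation head that applies an independent sigmoid per predicate tag (multi-label) instead of a mutually exclusive softmax (multi-class). All layer sizes, names, and the binary cross-entropy training loss are illustrative assumptions.

import torch
import torch.nn as nn

class RelationSketch(nn.Module):
    def __init__(self, num_predicates: int, feat_dim: int = 64):
        super().__init__()
        # 3D convolution over (channels, time, height, width) clips.
        self.conv3d = nn.Sequential(
            nn.Conv3d(3, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # One logit per relation tag; tags are scored independently.
        self.classifier = nn.Linear(feat_dim, num_predicates)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, 3, T, H, W)
        feat = self.conv3d(clip)              # (batch, C, T, H, W)
        # Spatial average pooling on each time slice, then averaging over
        # time, yields a single global visual feature per clip.
        per_slice = feat.mean(dim=(3, 4))     # (batch, C, T)
        global_feat = per_slice.mean(dim=2)   # (batch, C)
        return self.classifier(global_feat)   # logits, one per tag

model = RelationSketch(num_predicates=132)
clip = torch.randn(2, 3, 8, 112, 112)         # two dummy 8-frame clips
logits = model(clip)
# Multi-label training: BCEWithLogitsLoss treats every tag independently,
# so several predicates can hold for the same subject-object pair.
targets = torch.zeros(2, 132)
targets[0, [3, 17]] = 1.0                     # e.g. two co-occurring predicates
loss = nn.BCEWithLogitsLoss()(logits, targets)

The key design choice illustrated here is the classification head: a softmax would force the predicate tags to compete, whereas per-tag sigmoids allow an object pair to carry several valid, possibly semantically overlapping, relation tags at once.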
