Abstract

Video visual relation inference aims to extract relation triplets of the form <subject-predicate-object> from videos. With the development of deep learning, existing approaches are built on data-driven neural networks. However, the datasets are often biased in terms of objects and relation triplets, which makes relation inference challenging. Existing approaches typically describe relationships through visual, spatial, and semantic characteristics. The semantic description plays a key role in indicating the potential linguistic connections between objects, which are crucial for transferring knowledge across relationships, especially for determining novel relations. However, in these works the semantic features are not emphasized but simply obtained by mapping object labels, which cannot reflect sufficient linguistic meaning. To alleviate the above issues, we propose a novel network, termed Concept-Enhanced Relation Network (CERN), to facilitate video visual relation inference. Thanks to the attributes and linguistic contexts implied in concepts, semantic representations aggregated with related concept knowledge of objects benefit relation inference. To this end, we incorporate retrieved concepts with the local semantics of objects via a gating mechanism to generate concept-enhanced semantic representations. Extensive experimental results show that our approach achieves state-of-the-art performance on two public datasets: ImageNet-VidVRD and VidOR.
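To make the gating step concrete, below is a minimal sketch, not the authors' implementation: the module name ConceptGatedFusion, the mean-pooling of retrieved concept embeddings, and the sigmoid gate over the concatenated features are all assumptions used purely for illustration of how concept knowledge could be fused with an object's local semantics.

```python
import torch
import torch.nn as nn

class ConceptGatedFusion(nn.Module):
    """Hypothetical sketch: gated fusion of an object's local semantic
    embedding with an aggregate of its retrieved concept embeddings."""

    def __init__(self, dim: int):
        super().__init__()
        # Gate computed from the concatenation of both representations.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, obj_sem: torch.Tensor, concept_embs: torch.Tensor) -> torch.Tensor:
        # obj_sem:      (batch, dim)    semantic embedding mapped from the object label
        # concept_embs: (batch, k, dim) embeddings of k retrieved concepts
        concept_agg = concept_embs.mean(dim=1)  # aggregate related concept knowledge (assumed mean pooling)
        g = self.gate(torch.cat([obj_sem, concept_agg], dim=-1))
        # Convex combination: the gate decides how much concept knowledge to inject.
        return g * concept_agg + (1.0 - g) * obj_sem
```

In this sketch the gate acts elementwise, so each dimension of the concept-enhanced representation can lean more on either the label-derived semantics or the retrieved concepts.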
