Abstract

Visual relationship detection aims to describe the interactions between pairs of objects, such as the triplets person-ride-bike and bike-next to-car. In practice, some groups of relationships are strongly correlated, while others are only weakly related. Intuitively, common relationships can be roughly categorized into several types, such as geometric (e.g., next to) and action (e.g., ride). However, previous studies ignore the discovery of relatedness among multiple relationships: they model all relationships in a single unified space and rely only on visual features or statistical dependencies to classify predicates. To tackle this problem, we propose an adaptively clustering-driven network for visual relationship detection, which implicitly divides the unified relationship space into several subspaces with specific characteristics. Specifically, we propose two novel modules to discover the common distribution space and the latent relationship associations, respectively, which map pairs of object features into translation subspaces to induce discriminative relationship clustering. A fused inference is then designed to integrate the group-induced representations with the language prior to facilitate predicate inference. In addition, we design a Frobenius-norm regularization to boost the clustering. To the best of our knowledge, the proposed method is the first supervised framework to realize subject-predicate-object relationship-aware clustering for visual relationship detection. Extensive experiments show that the proposed method achieves competitive performance against state-of-the-art methods on the Visual Genome dataset, and additional ablation studies further validate its effectiveness.
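
As a rough illustration of the clustering idea described in the abstract (not the authors' implementation), the sketch below shows one way subject-object pair features might be projected into several translation subspaces, softly assigned to latent relationship groups, and regularized with a Frobenius-norm term on the assignment matrix. All module names, dimensions, and the exact loss form are assumptions introduced here for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ClusterDrivenRelHead(nn.Module):
    """Hypothetical sketch of a clustering-driven relationship head."""

    def __init__(self, feat_dim=512, num_subspaces=4, sub_dim=128, num_predicates=50):
        super().__init__()
        # one translation-subspace projection per latent relationship group
        self.subspace_proj = nn.ModuleList(
            [nn.Linear(2 * feat_dim, sub_dim) for _ in range(num_subspaces)]
        )
        # soft assignment of each subject-object pair to the subspaces
        self.assign = nn.Linear(2 * feat_dim, num_subspaces)
        self.classifier = nn.Linear(sub_dim, num_predicates)

    def forward(self, subj_feat, obj_feat):
        pair = torch.cat([subj_feat, obj_feat], dim=-1)            # (N, 2*feat_dim)
        gamma = F.softmax(self.assign(pair), dim=-1)               # (N, K) soft cluster assignment
        subspace_feats = torch.stack(
            [proj(pair) for proj in self.subspace_proj], dim=1     # (N, K, sub_dim)
        )
        # group-induced representation: assignment-weighted mix of subspace features
        fused = (gamma.unsqueeze(-1) * subspace_feats).sum(dim=1)  # (N, sub_dim)
        logits = self.classifier(fused)
        # Frobenius-norm regularizer on the assignment matrix; a larger ||gamma||_F
        # means sharper (less uniform) assignments, so its negative is added to the loss.
        frob_reg = -torch.norm(gamma, p="fro") / gamma.shape[0]
        return logits, frob_reg

The fused inference with the language prior mentioned in the abstract is omitted here; in practice the predicate logits would be combined with a prior over subject-object category pairs before the final prediction.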
