Abstract

Human–Object Interaction (HOI) detection is a crucial problem for comprehensive visual understanding, which aims to detect <human, action, object> triplets within an image. Many existing methods integrate human and object visual features, the spatial layout of human–object pairs, human poses, contextual information, and even object semantics into one framework to infer the interactions, showing that all of these components contribute to HOI detection. However, most methods simply concatenate these components, so they are not explicitly embedded in the feature learning. In this paper, we fuse these components explicitly with a multi-stream feature refinement network. The network extracts the visual features of humans, contexts, and objects, and refines them with attention derived from human poses, spatial configurations, and semantic prior knowledge of objects, respectively. In addition, a graph neural network is employed to learn the structural features of human–object pairs. We verify our method with extensive experiments on the V-COCO and HICO-DET datasets. The experimental results demonstrate that our method is simple yet effective for HOI detection, achieving performance superior to state-of-the-art methods.
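The multi-stream design described above can be sketched in code. This is a minimal, hypothetical NumPy illustration, not the paper's implementation: all weight matrices, the feature dimension, the sigmoid-gated attention form, and the single message-passing step standing in for the graph neural network are assumptions made only to show how pose, spatial, and semantic cues could refine the three visual streams before fusion.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
D = 8  # hypothetical feature dimension

# Visual features of the three streams (human, context, object).
f_human, f_ctx, f_obj = (rng.standard_normal(D) for _ in range(3))

# Cue embeddings assumed to be pre-extracted: human pose, spatial layout
# of the pair, and a semantic prior for the object class.
pose, spatial, semantic = (rng.standard_normal(D) for _ in range(3))

def refine(visual, cue, W):
    """Gate a visual feature with per-channel attention derived from a cue."""
    attn = sigmoid(W @ cue)   # attention weights in (0, 1)
    return visual * attn      # element-wise refinement

W_p, W_s, W_o = (rng.standard_normal((D, D)) for _ in range(3))
r_human = refine(f_human, pose, W_p)      # pose attends to the human stream
r_ctx   = refine(f_ctx, spatial, W_s)     # spatial layout attends to context
r_obj   = refine(f_obj, semantic, W_o)    # semantics attend to the object

# One message-passing step on the two-node human-object graph, standing in
# for the structural features learned by the graph neural network.
W_msg = rng.standard_normal((D, D))
g_human = np.tanh(r_human + W_msg @ r_obj)
g_obj   = np.tanh(r_obj + W_msg @ r_human)

# Fuse all streams and score the candidate action classes independently.
fused = np.concatenate([r_human, r_ctx, r_obj, g_human, g_obj])
W_cls = rng.standard_normal((5, fused.size))  # 5 hypothetical action classes
scores = sigmoid(W_cls @ fused)
print(scores.shape)
```

In this sketch each cue modulates exactly one visual stream, so the refinement is explicit in the features rather than deferred to a late concatenation of raw components.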
