Abstract

Relevant objects are widely used to aid human action recognition in still images. In all previous methods, such objects are found by a dedicated, pre-trained object detector. These methods have two drawbacks. First, training an object detector requires intensive data annotation, which is costly and sometimes unaffordable in practice. Second, the relation between objects and humans is not fully taken into account during training. This work proposes a systematic approach to address both problems. We propose two novel network modules. The first is an object extraction module that automatically finds objects relevant to action recognition without requiring annotations; it is thus annotation-free. The second is a human-object relation module that models the pairwise relations between humans and objects and enhances their features. Both modules are trained end-to-end within the action recognition network. Comprehensive experiments and ablation studies on three still-image action recognition datasets demonstrate the effectiveness of the proposed approach. Our method yields state-of-the-art results. Specifically, on the HICO dataset it achieves 44.9% mAP, a 12% relative improvement over the previous best result. In addition, this work makes an observational contribution: a pre-trained object detector is no longer necessary for this task, since relevant objects can be found via end-to-end learning with only action labels. This is encouraging for action recognition in the wild. Models and code will be released.
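To make the idea of a pairwise human-object relation module concrete, here is a minimal sketch, assuming an attention-style formulation: each human feature attends to all candidate object features, and the attended object context is added back as a residual. The function name `relation_enhance`, the scaled-dot-product affinity, and the residual add are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of a human-object relation module (NOT the paper's
# exact design): humans attend to objects; attended context enhances features.
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relation_enhance(human_feats, object_feats):
    """human_feats: (n_h, d); object_feats: (n_o, d).
    Returns enhanced human features of shape (n_h, d)."""
    d = human_feats.shape[1]
    # Pairwise affinity between every human and every object.
    affinity = human_feats @ object_feats.T / np.sqrt(d)  # (n_h, n_o)
    weights = softmax(affinity, axis=1)                   # rows sum to 1
    # Enhance each human feature with its attended object context.
    return human_feats + weights @ object_feats

# Tiny example: 2 humans, 3 candidate objects, 4-dim features.
rng = np.random.default_rng(0)
H = rng.standard_normal((2, 4))
O = rng.standard_normal((3, 4))
enhanced = relation_enhance(H, O)
print(enhanced.shape)  # (2, 4)
```

In an end-to-end setting such as the one the abstract describes, the affinity computation would be learned (e.g. via projection layers) and trained jointly with the action classifier, so no object annotations are needed.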
