Abstract

Group activity recognition has attracted considerable interest because of its broad applications in sports analysis, autonomous vehicles, CCTV surveillance, and video summarization. Most existing methods rely on appearance features and seldom consider the underlying interaction information. In this work, we propose a novel group activity recognition approach based on multi-modal relation representation with temporal-spatial attention. First, we introduce an object relation module that processes all objects in a scene simultaneously through interactions between their appearance and geometry features, allowing their relations to be modeled. Second, to extract effective motion features, an optical flow network is fine-tuned using the action loss as the supervisory signal. Then, we propose two inference models, opt-GRU and relation-GRU, which encode the object relations and motion representation effectively and form a discriminative frame-level feature representation. Finally, an attention-based temporal aggregation layer integrates frame-level features with different weights to form an effective video-level representation. Extensive experiments on two popular benchmarks, the Volleyball dataset and the Collective Activity dataset, achieve state-of-the-art performance.
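The attention-based temporal aggregation described above can be pictured with a minimal PyTorch sketch. The module name, the single-linear scoring function, and the 1024-d feature size are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of attention-weighted temporal aggregation: learn a scalar
# weight per frame and take the weighted sum of frame-level features.
# All names and dimensions (e.g., feat_dim=1024) are assumptions.
import torch
import torch.nn as nn

class TemporalAttentionPool(nn.Module):
    """Aggregates frame-level features into one video-level feature."""
    def __init__(self, feat_dim: int = 1024):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)  # one attention logit per frame

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, num_frames, feat_dim)
        weights = torch.softmax(self.score(frames), dim=1)  # (B, T, 1)
        return (weights * frames).sum(dim=1)                # (B, feat_dim)

# Usage: pool ten 1024-d frame features into a single video-level vector.
pool = TemporalAttentionPool(feat_dim=1024)
video_feat = pool(torch.randn(2, 10, 1024))  # -> shape (2, 1024)
```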

Highlights

  • Research on computer vision-driven applications is intensifying

  • Our objective is to explore rich contextual information to construct a group activity recognition framework that explicitly takes actor-actor relationships and actor-motion information into consideration

  • We fed the optical flow representation, relation features, and individual appearance features as inputs to the opt-gated recurrent unit (GRU) and relation-GRU; a 1024-d frame-level integrated feature representation was obtained (see the sketch after this list)
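
A hedged PyTorch sketch of this two-branch fusion follows. The branch dimensions, the concatenation of relation and appearance features, and the final linear fusion layer are assumptions for illustration only:

```python
# Sketch of the two-branch GRU fusion: an "opt-GRU" over optical-flow
# features and a "relation-GRU" over relation + appearance features,
# concatenated and projected to a 1024-d frame-level representation.
import torch
import torch.nn as nn

class TwoBranchGRUFusion(nn.Module):
    def __init__(self, flow_dim=512, rel_dim=512, app_dim=512,
                 hidden=512, out_dim=1024):
        super().__init__()
        self.opt_gru = nn.GRU(flow_dim, hidden, batch_first=True)
        self.rel_gru = nn.GRU(rel_dim + app_dim, hidden, batch_first=True)
        self.fuse = nn.Linear(2 * hidden, out_dim)  # -> 1024-d per frame

    def forward(self, flow, rel, app):
        # flow: (B, T, flow_dim); rel: (B, T, rel_dim); app: (B, T, app_dim)
        opt_out, _ = self.opt_gru(flow)                       # (B, T, hidden)
        rel_out, _ = self.rel_gru(torch.cat([rel, app], -1))  # (B, T, hidden)
        return self.fuse(torch.cat([opt_out, rel_out], -1))   # (B, T, 1024)

fusion = TwoBranchGRUFusion()
frame_feats = fusion(torch.randn(2, 10, 512),
                     torch.randn(2, 10, 512),
                     torch.randn(2, 10, 512))  # -> (2, 10, 1024)
```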


Summary

Introduction

Collective activity recognition has made great progress with the development of deep learning [2], [9], [19], [20], [25], [29], [32], [38]. Existing methods typically first extract person-level features using a convolutional neural network (CNN). In many application scenarios, such as sports videos, motion information is important for understanding actions. To understand what is happening in a scene involving multiple people, a model must describe each person's individual behavior in context and infer the interactions within the group. Existing work usually treats individual action labels and group activity labels separately rather than exploiting the interaction information between them, and modeling the relationships between persons is very challenging.
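As a concrete illustration of that first step, the sketch below pools a per-person feature for each actor with torchvision's RoIAlign on a CNN feature map. The ResNet-18 backbone, feature-map stride, and box coordinates are assumptions made for this example, not a specific prior method:

```python
# Pool one feature vector per actor from a shared CNN feature map.
import torch
from torchvision.ops import roi_align
from torchvision.models import resnet18

# Keep only the convolutional layers (drop avgpool and fc).
backbone = torch.nn.Sequential(*list(resnet18(weights=None).children())[:-2])
image = torch.randn(1, 3, 224, 224)
fmap = backbone(image)  # (1, 512, 7, 7); total stride 32

# One box per actor in image coordinates: (batch_idx, x1, y1, x2, y2).
boxes = torch.tensor([[0., 10., 20., 90., 200.],
                      [0., 100., 30., 180., 210.]])
person_feats = roi_align(fmap, boxes, output_size=(3, 3),
                         spatial_scale=1.0 / 32)  # (2, 512, 3, 3)
person_vecs = person_feats.flatten(1)             # one vector per actor
```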

