Abstract

Temporal action detection, a challenging task in video understanding, is usually divided into two stages: proposal generation and classification. Learning proposal features is a crucial step for both stages. However, most methods ignore the temporal information of proposals and treat background and action frames within proposals equally, leading to poor proposal features. In this paper, we propose a novel Temporal Attention-Pyramid Pooling (TAPP) method to learn features of action proposals of arbitrary length. The TAPP method exploits the attention mechanism to focus on the discriminative parts of proposals, suppressing the influence of background on proposal features. It constructs a temporal pyramid structure that converts arbitrary-length proposal feature sequences into multiple fixed-length sequences while retaining temporal information. Within the TAPP method, we design a multi-scale temporal function and apply it to the temporal pyramid to generate the final proposal features. Based on the TAPP method, we construct a temporal action proposal generation model and an action proposal classification model, and we perform extensive experiments on two mainstream temporal action detection datasets to verify our models on the temporal action proposal and temporal action detection tasks. On the THUMOS'14 dataset, our TAPP-based models significantly outperform the previous state-of-the-art methods on both tasks.
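The core idea of attention-weighted temporal pyramid pooling can be sketched as follows. This is a minimal illustration, not the paper's implementation: the attention scorer here is a fixed random projection standing in for a learned module, the pyramid levels `(1, 2, 4)` are an assumed configuration, and mean pooling stands in for the paper's multi-scale temporal function.

```python
import numpy as np

def temporal_pyramid_pool(features, levels=(1, 2, 4)):
    """Pool a variable-length proposal feature sequence (T, D) into a
    fixed-length vector, regardless of T.

    Frames are first re-weighted by softmax attention scores (so that,
    once trained, background frames contribute less), then each pyramid
    level splits the temporal axis into a fixed number of bins and
    average-pools each bin, preserving coarse temporal order.
    """
    T, D = features.shape

    # Stand-in attention scorer: a fixed random projection.
    # In a real model this would be a small learned network.
    rng = np.random.default_rng(0)
    w = rng.standard_normal(D)
    scores = features @ w
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()
    weighted = features * attn[:, None]

    pooled = []
    for num_bins in levels:
        # Split the T frames into roughly equal temporal bins.
        edges = np.linspace(0, T, num_bins + 1).astype(int)
        for a, b in zip(edges[:-1], edges[1:]):
            seg = weighted[a:max(b, a + 1)]  # guard against empty bins
            pooled.append(seg.mean(axis=0))
    # Output length is sum(levels) * D, independent of T.
    return np.concatenate(pooled)
```

With levels `(1, 2, 4)` and `D = 8`, proposals of any length map to the same 56-dimensional vector, which is what lets downstream proposal generation and classification heads accept arbitrary-length proposals.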
