Abstract

Recently, 3D convolutional networks have yielded good performance in action recognition. However, an optical flow stream is still needed for motion representation to ensure better performance, and its computational cost is very high. In this paper, we propose a cheap but effective way to extract motion features from videos by using residual frames as the input data to 3D ConvNets. By replacing traditional stacked RGB frames with residual frames, improvements of 35.6 and 26.6 percentage points in top-1 accuracy can be achieved on the UCF101 and HMDB51 datasets when training ResNet-18-3D from scratch. We analyze the effectiveness of this modality in depth compared with normal RGB video clips, and find that 3D ConvNets extract better motion features from residual frames. Because residual frames contain little information about object appearance, we further use a 2D convolutional network to extract appearance features and combine the two to form a two-path solution. In this way, we achieve better performance than some methods that use an additional optical flow stream. Moreover, the proposed residual-input path outperforms its RGB counterpart on unseen datasets when the trained models are applied to video retrieval tasks. Substantial improvements are also obtained when residual inputs are applied to video-based self-supervised learning methods, revealing the better motion representation and generalization ability of our proposal.
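
As a rough illustration of the residual-frame modality described above, the sketch below computes residuals as differences of adjacent frames in a stacked-RGB clip. The function name and the exact pre-processing (plain subtraction, no scaling or clipping) are assumptions for illustration, not the paper's reference implementation.

```python
import numpy as np

def residual_clip(frames: np.ndarray) -> np.ndarray:
    """Turn a stacked-RGB clip into residual frames.

    frames: uint8 array of shape (T, H, W, C) holding T stacked RGB frames.
    Returns an int16 array of shape (T - 1, H, W, C), where entry t is
    frames[t + 1] - frames[t].
    """
    frames = frames.astype(np.int16)   # avoid uint8 wrap-around on subtraction
    return frames[1:] - frames[:-1]    # difference of adjacent frames
```

For example, a clip of shape (16, 112, 112, 3) yields 15 residual frames, which can then be stacked as the input to a 3D ConvNet in place of the raw RGB frames.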

Highlights

  • For video understanding tasks such as action recognition, extracting good motion features across multiple frames is an important challenge in motion representation

  • We propose an effective strategy based on 3D convolutional networks that pre-processes RGB frames into residual frames, which replace the traditional input data

  • Because residual frames lack sufficient information about objects, which is necessary for the compound phrases used to define labels in some video recognition datasets, we further propose a two-path solution that uses appearance features as an effective complement to the motion features learned from the residual inputs (sketched below)
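
The two-path solution in the last highlight can be sketched as follows. The backbones (torchvision's r3d_18 and resnet18) and the score-averaging fusion are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
from torch import nn
from torchvision.models import resnet18
from torchvision.models.video import r3d_18

class TwoPath(nn.Module):
    """Illustrative two-path model: a 3D path over residual frames for
    motion and a 2D path over a single RGB frame for appearance, fused
    by averaging class scores (a simple late-fusion assumption)."""

    def __init__(self, num_classes: int):
        super().__init__()
        self.motion_path = r3d_18(num_classes=num_classes)        # residual clip input
        self.appearance_path = resnet18(num_classes=num_classes)  # single RGB frame

    def forward(self, residual_clip: torch.Tensor, rgb_frame: torch.Tensor) -> torch.Tensor:
        # residual_clip: (B, 3, T - 1, H, W); rgb_frame: (B, 3, H, W)
        motion_scores = self.motion_path(residual_clip)
        appearance_scores = self.appearance_path(rgb_frame)
        return (motion_scores + appearance_scores) / 2  # simple late fusion
```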


Introduction

For video understanding tasks such as action recognition, extracting good motion features across multiple frames is an important challenge in motion representation. 3D ConvNet based methods have improved recognition performance by extending the 2D convolution kernel to 3D, and computations along the temporal axis in each convolutional layer are believed to handle movement [6]–[11]. In a typical implementation of 3D ConvNets, these methods use stacked RGB frames as the input data.
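
For concreteness, the sketch below shows such a stacked-RGB input in the common (B, C, T, H, W) layout passing through a single 3D convolution; the 16-frame clip size, spatial resolution, and layer width are assumed values for illustration, not taken from the paper.

```python
import torch
from torch import nn

# A 16-frame clip of stacked RGB frames in the common (B, C, T, H, W) layout.
clip = torch.randn(1, 3, 16, 112, 112)

# A single 3D convolution: the kernel spans the temporal axis as well as the
# two spatial axes, so adjacent frames interact inside every layer.
conv3d = nn.Conv3d(in_channels=3, out_channels=64, kernel_size=3, padding=1)
features = conv3d(clip)  # shape: (1, 64, 16, 112, 112)
```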
