Abstract

In deep-learning-based video action recognition, the design of the neural network focuses on how to acquire effective spatial and motion information quickly. This paper proposes a deep network, called MDFs (the multidimensional motion features of a deep feature map net), that obtains both spatial and motion information for video classification. The method extracts spatial and motion information from videos by importing only image frame data into the network. MDFs originate from the definition of 3D convolution: multiple 3D convolution kernels with different information focuses act on depth feature maps to obtain effective motion information in both the spatial and temporal dimensions. We also split the 3D convolution along the spatial and temporal dimensions; because the spatial network's feature maps already have lower dimensionality than the original frame images, this reduces the computational cost of the multichannel grouped 3D convolutional network. To differentiate the weights of spatial regions, a spatial feature weighted pooling layer guided by spatio-temporal motion information is introduced, directing attention to highly discriminative information. A multilevel LSTM fuses global semantic information with depth features at different levels, so that fully connected layers rich in classification information can provide a frame attention mechanism for the spatial information layers. MDFs act only on RGB images. Experiments on three widely used action recognition datasets, UCF101, UCF11, and HMDB51, show that the MDF network achieves accuracy comparable to two-stream approaches (RGB and optical flow), which require importing both frame data and optical flow data for video classification tasks.

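To make the abstract's description more concrete, the following is a minimal PyTorch-style sketch of how multiple 3D convolution kernels with different information focuses could act on stacked 2D depth feature maps, including a branch that factorizes the 3D convolution into a spatial and a temporal convolution. The module name MultiDim3DConv, the particular kernel shapes, and the summation fusion are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MultiDim3DConv(nn.Module):
    """Sketch of multi-branch 3D convolution over stacked 2D depth feature maps.

    Input: (batch, channels, frames, height, width) feature maps produced by a
    2D CNN backbone. Each branch uses a kernel with a different spatial/temporal
    focus; one branch factorizes the 3D kernel into a spatial convolution
    followed by a temporal convolution to reduce computation.
    """

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Branch 1: temporally focused kernel (motion across frames).
        self.temporal = nn.Conv3d(in_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0))
        # Branch 2: spatially focused kernel (appearance within a frame).
        self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        # Branch 3: factorized 3D convolution = spatial conv then temporal conv.
        self.factorized = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1)),
            nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0)),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Fuse the branches by summation so the output keeps shape (B, C, T, H, W).
        return self.relu(self.temporal(x) + self.spatial(x) + self.factorized(x))


# Usage: 8 frames of 56x56 feature maps with 64 channels from a 2D backbone.
feats = torch.randn(2, 64, 8, 56, 56)
motion = MultiDim3DConv(64, 64)(feats)   # -> (2, 64, 8, 56, 56)
```
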
Highlights

  • Video-based action recognition technology has been developed for decades

  • It is a technology for understanding and classifying video content with computers. This paper uses deep learning for action recognition. The convolutional neural network (CNN) has a certain degree of translation invariance and scale invariance, and its computing mode is very similar to the visual system of mammals, so CNN has made great achievements in the field of image classification [1,2,3,4,5]

  • MD3DCNN is used for local motion information extraction based on spatial depth features

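The abstract further describes a spatial feature weighted pooling layer guided by spatio-temporal motion information and an LSTM head that turns per-frame features into classification scores. The sketch below shows one way those pieces could fit together; MotionGuidedPool, FrameSequenceHead, and the single-layer LSTM (standing in for the paper's multilevel LSTM fusion) are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class MotionGuidedPool(nn.Module):
    """Weighted spatial pooling: a motion response map decides how much each
    spatial location contributes to the per-frame feature vector."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 conv turns the motion features into a single-channel weight map.
        self.weight_map = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, appearance: torch.Tensor, motion: torch.Tensor) -> torch.Tensor:
        # appearance, motion: (batch*frames, channels, H, W)
        w = torch.softmax(self.weight_map(motion).flatten(2), dim=-1)  # (B*T, 1, H*W)
        feats = appearance.flatten(2)                                  # (B*T, C, H*W)
        return (feats * w).sum(dim=-1)                                 # (B*T, C)


class FrameSequenceHead(nn.Module):
    """LSTM over per-frame pooled features, followed by a classifier."""

    def __init__(self, channels: int, hidden: int, num_classes: int):
        super().__init__()
        self.pool = MotionGuidedPool(channels)
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, appearance: torch.Tensor, motion: torch.Tensor) -> torch.Tensor:
        # appearance, motion: (batch, channels, frames, H, W)
        b, c, t, h, w = appearance.shape
        app = appearance.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)
        mot = motion.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)
        pooled = self.pool(app, mot).view(b, t, c)   # per-frame feature vectors
        out, _ = self.lstm(pooled)                   # (b, t, hidden)
        return self.fc(out[:, -1])                   # class scores from the last step


# Usage with feature maps shaped like the previous sketch's output (51 = HMDB51 classes).
appearance = torch.randn(2, 64, 8, 56, 56)
motion = torch.randn(2, 64, 8, 56, 56)
logits = FrameSequenceHead(64, 128, 51)(appearance, motion)   # -> (2, 51)
```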

Summary

Introduction

Video-based action recognition technology has been developed for decades. It is a technology for understanding and classifying video content with computers. This paper uses deep learning for action recognition. Since a video consists of multiple images, CNN has diversified applications in the field of video-based action recognition [5,6,7,8]. The success of video-based action recognition mainly depends on the effective acquisition of the temporal information of videos. In deep-learning-based action recognition, the design of the neural network structure focuses on how to acquire temporal information on top of spatial information.

