Highlights

• Action recognition using a two-stream deep neural network with LSTMs.
• An LSTM fuses and models deep-feature-based spatial and temporal streams.
• We show the efficacy of LSTM-learned temporal streams in a two-stream network.
• Average prediction accuracies of 93.1%, 71.3%, and 74.6% on the UCF101, HMDB51, and Kinetics400 datasets.

Abstract

This paper investigates Long Short-Term Memory (LSTM) networks for human action recognition in videos. Despite significant progress in the field, recognizing actions in real-world videos remains a challenging task due to the spatial and temporal variations within and across video clips. We propose a novel two-stream deep network for action recognition that applies an LSTM to learn the fusion of the spatial and temporal feature streams. By design, the LSTM type of recurrent neural network possesses a unique capability to preserve long-range context in temporal streams. The proposed method capitalizes on the LSTM's memory attribute to fuse the input streams in a high-dimensional space, exploiting spatial and temporal correlations. The temporal stream input is defined on LSTM-learned deep features summarizing the input frame sequence. Our approach of combining the convolutional-feature-based spatial stream and the deep-feature-based temporal stream in an LSTM network efficiently captures long-range temporal dependencies in video streams. We evaluate the proposed approach on the UCF101, HMDB51, and Kinetics400 datasets, achieving competitive recognition accuracies of 93.1%, 71.3%, and 74.6%, respectively.
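The core idea of LSTM-based stream fusion can be sketched with a toy LSTM cell in plain Python. This is only an illustrative sketch: the feature dimensions, random initialization, and the simple per-frame concatenation of the two streams are assumptions for demonstration, not the paper's actual architecture or training setup.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class LSTMCell:
    """Minimal LSTM cell on plain Python lists (illustrative only)."""
    def __init__(self, input_size, hidden_size):
        self.hidden_size = hidden_size
        n = input_size + hidden_size
        # One weight matrix and bias vector per gate:
        # i = input, f = forget, c = candidate cell, o = output.
        self.W = {g: [[random.uniform(-0.1, 0.1) for _ in range(n)]
                      for _ in range(hidden_size)] for g in "ifco"}
        self.b = {g: [0.0] * hidden_size for g in "ifco"}

    def step(self, x, h_prev, c_prev):
        z = x + h_prev  # concatenate input with previous hidden state
        def gate(g, act):
            return [act(sum(w * v for w, v in zip(row, z)) + b)
                    for row, b in zip(self.W[g], self.b[g])]
        i = gate("i", sigmoid)      # how much new information to admit
        f = gate("f", sigmoid)      # how much old cell state to keep
        g = gate("c", math.tanh)    # candidate cell state
        o = gate("o", sigmoid)      # how much cell state to expose
        c = [fv * cv + iv * gv for fv, cv, iv, gv in zip(f, c_prev, i, g)]
        h = [ov * math.tanh(cv) for ov, cv in zip(o, c)]
        return h, c

# Hypothetical dimensions: spatial (CNN) and temporal feature vectors
# are concatenated per frame and fed to one fusion LSTM.
SPATIAL_DIM, TEMPORAL_DIM, HIDDEN = 8, 8, 16
cell = LSTMCell(SPATIAL_DIM + TEMPORAL_DIM, HIDDEN)

# Stand-in features for a 5-frame clip (random, for illustration).
num_frames = 5
spatial = [[random.random() for _ in range(SPATIAL_DIM)]
           for _ in range(num_frames)]
temporal = [[random.random() for _ in range(TEMPORAL_DIM)]
            for _ in range(num_frames)]

h = [0.0] * HIDDEN
c = [0.0] * HIDDEN
for s, t in zip(spatial, temporal):
    h, c = cell.step(s + t, h, c)

# The final hidden state h summarizes the whole clip; a classifier
# head over h would produce the action-class scores.
print(len(h))
```

Because the cell state carries information across every frame, the final hidden state reflects correlations between the two streams over the entire sequence, which is the long-range context a frame-wise fusion (e.g., averaging per-frame scores) cannot capture.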
