Abstract

This paper addresses human action recognition using convolutional long short-term memory networks (Conv-LSTM) and fully-connected LSTM (FC-LSTM) equipped with different attention mechanisms. To this end, a spatial-temporal dual-attention network (STDAN) is designed, composed mainly of feature extraction, attention, and fusion modules. Unlike previous work, which mostly uses features from the high-level fully-connected layer, STDAN extracts features from both the convolutional and fully-connected layers of a convolutional neural network (CNN), which enriches the initial video representation. In addition, the Conv-LSTM and FC-LSTM are employed to handle long-duration sequential features carrying different temporal context information. To reinforce the spatial-temporal attention ability, a temporal attention module (TAM) and a joint spatial-temporal attention module (JSTAM) are implemented. Through principal component analysis (PCA) and feature fusion, the potential of STDAN is effectively explored and weighted. Finally, the experimental results show that the proposed STDAN outperforms existing state-of-the-art methods.
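
To make the temporal-attention idea concrete, the following is a minimal sketch, assuming PyTorch, of how a TAM-style module might weight per-frame features produced by an FC-LSTM. The module name, tensor shapes, and dimensions are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Sketch of a TAM-style module: learns a scalar relevance score
    per time step and attention-pools the sequence into one vector."""
    def __init__(self, feat_dim):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)  # one relevance score per frame

    def forward(self, h):                    # h: (batch, time, feat_dim)
        alpha = torch.softmax(self.score(h), dim=1)  # weights over time steps
        return (alpha * h).sum(dim=1)        # attention-pooled descriptor

# Illustrative usage: FC-LSTM over per-frame CNN features, then temporal attention.
batch, time, cnn_feat, hidden = 2, 16, 512, 256
frames = torch.randn(batch, time, cnn_feat)  # stand-in for fc-layer CNN features
lstm = nn.LSTM(cnn_feat, hidden, batch_first=True)
attn = TemporalAttention(hidden)
h, _ = lstm(frames)
video_vec = attn(h)                          # (batch, hidden) video-level feature
```

The same pooling idea extends to the joint spatial-temporal case by scoring spatial locations of Conv-LSTM feature maps as well as time steps; that extension is omitted here for brevity.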
