Abstract

The vulnerability of RGB-based human action recognition in complex environments and varying scenes can be compensated by the skeleton modality. Therefore, action recognition methods that fuse RGB and skeleton modalities have received increasing attention. However, the recognition performance of existing methods is still unsatisfactory due to insufficiently optimized sampling, modeling, and fusion strategies, and their computational cost is heavy. In this paper, we propose a Dense-Sparse Complementary Network (DSCNet), which leverages the complementary information of the RGB and skeleton modalities at a light computational cost to achieve competitive action recognition performance. Specifically, we first adopt dense and sparse sampling strategies tailored to the strengths of the RGB and skeleton modalities, respectively. Then, we use the skeleton as guiding information to crop the key active regions of persons in the RGB frames, which largely eliminates background interference. Moreover, a Short-Term Motion Extraction Module (STMEM) is proposed to compress the densely sampled RGB frames into fewer frames before feeding them into the backbone network, which avoids a surge in computational cost. In addition, a Sparse Multi-Scale Spatial–Temporal convolutional neural Network (Sparse-MSSTNet) is designed to model the sparse skeleton data. Extensive experiments show that our method effectively combines the complementary information of the RGB and skeleton modalities to improve recognition accuracy. DSCNet achieves competitive performance on the NTU RGB+D 60, NTU RGB+D 120, PKU-MMD, UAV-Human, IKEA ASM, and Northwestern-UCLA datasets with much less computational cost than existing methods. The code is available at https://github.com/Maxchengqin/DSCNet.
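To make the skeleton-guided cropping step concrete, the sketch below shows one plausible way to use 2D joint coordinates to bound the active person region of an RGB frame before dense sampling. This is a minimal illustration of the idea stated in the abstract, not the authors' implementation: the function name, the `margin` parameter, and the joint format (an N x 2 array of pixel coordinates) are assumptions for illustration.

```python
import numpy as np

def skeleton_guided_crop(frame, joints, margin=0.15):
    """Crop the active person region of an RGB frame using 2D skeleton joints.

    Hypothetical sketch of skeleton-guided cropping; `joints` is assumed to be
    an (N, 2) array of (x, y) pixel coordinates for one person's joints.
    """
    h, w = frame.shape[:2]
    # Tight bounding box around all joints.
    x_min, y_min = joints.min(axis=0)
    x_max, y_max = joints.max(axis=0)
    # Expand the box by a relative margin so limbs are not clipped,
    # then clamp to the frame boundaries.
    dx = (x_max - x_min) * margin
    dy = (y_max - y_min) * margin
    x0 = max(int(x_min - dx), 0)
    y0 = max(int(y_min - dy), 0)
    x1 = min(int(x_max + dx), w)
    y1 = min(int(y_max + dy), h)
    return frame[y0:y1, x0:x1]
```

Cropping each densely sampled frame this way keeps only the person-centered region, which is consistent with the abstract's claim that skeleton guidance largely eliminates background interference before the RGB frames reach the STMEM and backbone.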
