Abstract

Compared with traditional methods, action recognition models based on 3D convolutional neural networks capture spatio-temporal features more accurately and therefore achieve higher accuracy. However, the large parameter counts and computational requirements of 3D models make them difficult to deploy on mobile devices with limited computing power. To obtain an efficient video action recognition model, we analyzed and compared the design principles of classic lightweight networks and propose the 3D-ShuffleViT network. By deeply integrating self-attention with convolution, we introduce an efficient ACISA module that further improves the model's performance. The result is strong performance on both context-sensitive and context-independent action recognition at a lower deployment cost. Notably, 3D-ShuffleViT reaches 98% of the Top-1 accuracy of SlowFast-ResNet101 on the EgoGesture dataset at only 6% of its computational cost, and runs 2.5 times faster on the same CPU (an Intel i5-8300H). Moreover, when deployed on edge devices, the proposed network achieves the best balance between accuracy and speed among lightweight networks of the same order.
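To make the core idea concrete, the sketch below shows one common way to "deeply integrate self-attention with convolution" in a 3D video backbone: a lightweight depthwise 3D-convolution branch for local features fused with a multi-head self-attention branch for global spatio-temporal context. This is a minimal illustration of the general technique only; the block structure, layer choices, and fusion strategy are assumptions, not the actual ACISA design from the paper.

```python
# Hypothetical conv + self-attention hybrid block for video features,
# in the spirit of (but not identical to) the ACISA module described
# in the abstract. All names and design choices here are illustrative.
import torch
import torch.nn as nn


class ConvAttnBlock3D(nn.Module):
    """Fuses a local 3D-convolution branch with a global self-attention
    branch over spatio-temporal tokens (illustrative sketch only)."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Local branch: depthwise 3D conv keeps the parameter count low,
        # a standard trick in lightweight networks such as ShuffleNet.
        self.local = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1,
                      groups=channels, bias=False),
            nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True),
        )
        # Global branch: multi-head self-attention over the flattened
        # (T*H*W) token sequence captures long-range spatio-temporal
        # context that small conv kernels miss.
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads,
                                          batch_first=True)
        # A 1x1x1 conv fuses the two branches back to `channels`.
        self.fuse = nn.Conv3d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, T, H, W)
        b, c, t, h, w = x.shape
        local = self.local(x)
        tokens = x.flatten(2).transpose(1, 2)        # (B, T*H*W, C)
        tokens = self.norm(tokens)
        attn, _ = self.attn(tokens, tokens, tokens)  # global context
        attn = attn.transpose(1, 2).reshape(b, c, t, h, w)
        # Concatenate local and global features, fuse, add residual.
        return self.fuse(torch.cat([local, attn], dim=1)) + x


if __name__ == "__main__":
    block = ConvAttnBlock3D(channels=32)
    clip = torch.randn(2, 32, 8, 14, 14)  # (batch, C, T, H, W)
    print(block(clip).shape)              # torch.Size([2, 32, 8, 14, 14])
```

Under this kind of design, the depthwise convolution and 1x1 fusion dominate neither parameters nor FLOPs, which is consistent with the abstract's emphasis on keeping 3D models cheap enough for mobile and edge deployment.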
