Abstract

Two-stream CNNs have proven highly successful for video-based action recognition. However, classical two-stream CNNs are computationally expensive, mainly due to the bottleneck of calculating optical flow. In this paper, we propose a two-stream based real-time action recognition approach that uses motion vectors in place of optical flow. Motion vectors are already encoded in the compressed video stream and can be extracted directly without extra computation. However, directly training a CNN on motion vectors degrades accuracy severely, owing to their noise and lack of fine detail. To mitigate this problem, we propose four training strategies that leverage the knowledge learned by an optical flow CNN to enhance the accuracy of the motion vector CNN. Our insight is that motion vectors and optical flow share inherently similar structures, which allows knowledge to be transferred from one domain to the other. To fully exploit the knowledge learned in the optical flow domain, we develop a deeply transferred motion vector CNN. Experimental results on various datasets show the effectiveness of our training strategies. Our approach is significantly faster than optical flow based approaches, achieving a processing speed of 390.7 frames per second and surpassing the real-time requirement. We release our model and code to facilitate further research.
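The abstract summarizes the transfer strategies without implementation detail. Below is a minimal PyTorch sketch of one plausible instance of such a strategy: a teacher-student supervision-transfer step in which a CNN pre-trained on optical flow (the teacher) guides the training of the motion vector CNN (the student) by matching softened class distributions. The `student` and `teacher` networks, `transfer_step`, `temperature`, and `alpha` are hypothetical names for illustration, not the paper's actual code or API.

```python
# A hedged sketch of optical-flow-to-motion-vector knowledge transfer.
# Assumes `student` and `teacher` are classification CNNs with the same
# number of output classes; all names here are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

def transfer_step(student, teacher, mv_batch, flow_batch, labels,
                  optimizer, temperature=2.0, alpha=0.5):
    """One training step: cross-entropy on ground-truth labels plus a
    distillation term matching the student's softened logits to the
    teacher's softened logits."""
    teacher.eval()
    with torch.no_grad():
        # The teacher sees optical flow for the same video clips.
        teacher_logits = teacher(flow_batch)

    # The student sees the motion vectors extracted from the stream.
    student_logits = student(mv_batch)

    ce_loss = F.cross_entropy(student_logits, labels)
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2  # Standard rescaling of distillation gradients.

    loss = alpha * ce_loss + (1 - alpha) * kd_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A related strategy in the same spirit would initialize the student directly from the teacher's weights before fine-tuning on motion vectors; the abstract does not specify which combination the four proposed strategies use.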
