Abstract

This paper focuses on human action recognition in video sequences. A method based on optical flow estimation is presented, in which critical points of the flow field are extracted. Multi-scale trajectories are generated from these points and are characterized in the frequency domain. Finally, a sequence is described by fusing this frequency information with motion orientation and shape information. Experiments show that the method achieves recognition rates among the highest reported on the KTH dataset. In contrast to recent dense sampling strategies, the proposed method only requires the critical points of the motion flow field, permitting lower computation time and a better sequence description. Results and perspectives are then discussed.
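To make the first step of the pipeline concrete, the following is a minimal sketch, not the authors' implementation: it estimates a dense optical flow field with OpenCV's Farneback method (assumed here as a stand-in for the paper's flow estimator) and selects candidate critical points as pixels where the flow vector nearly vanishes while the surrounding region still carries motion. The function name `critical_points` and the thresholds `mag_eps` and `motion_thresh` are illustrative choices, not quantities from the paper.

```python
# Illustrative sketch: candidate critical points of a dense optical flow field.
# Assumptions: Farneback flow as the flow estimator; a near-zero-magnitude
# heuristic as a cheap proxy for sources/sinks/saddles of the field.
import cv2
import numpy as np

def critical_points(prev_gray, next_gray, mag_eps=0.1, motion_thresh=1.0):
    # Dense flow field: flow[y, x] = (dx, dy) displacement per pixel.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    # Keep pixels whose own flow is near zero but whose neighbourhood moves,
    # so static background regions are not flagged as critical points.
    local_motion = cv2.blur(mag, (15, 15))
    candidates = (mag < mag_eps) & (local_motion > motion_thresh)
    ys, xs = np.nonzero(candidates)
    return np.stack([xs, ys], axis=1)  # N x 2 array of (x, y) locations
```

Tracking such points across frames and over several spatial scales would then yield the multi-scale trajectories that the paper characterizes in the frequency domain.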
