This paper proposes a weighted codebook vector representation and an action graph model for view-invariant human action recognition. A video is represented as a weighted codebook vector combining dynamic interest points and static shapes. This combined representation is robust to noise and achieves high classification performance on static actions. Several 3D key poses are extracted from motion capture data or point cloud data, and a set of primitive motion segments is generated. A directed graph called the Essential Graph is built from these segments using self-links, forward-links, and back-links. The action graph is then generated by projecting the essential graph from a wide range of viewpoints. Naïve Bayes is used to train a statistical model for each node. Given an unlabeled video, the Viterbi algorithm computes the match score between the video and the action graph, and the video is labeled according to the maximum score. Finally, the algorithm is evaluated on the IXMAS dataset and the CMU motion capture library. The experimental results demonstrate that the algorithm recognizes actions in a view-invariant manner and achieves high recognition rates.
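As a rough illustration of the matching step only, the sketch below shows a generic max-product Viterbi decode over a small action graph, where each node scores an observed frame descriptor with a per-node log-likelihood (standing in for the paper's per-node Naïve Bayes models). The names here (`log_A`, `log_pi`, `obs_loglik`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def viterbi_score(obs_loglik, log_A, log_pi):
    """Best log-score of a node path through one action graph.

    obs_loglik : (T, N) per-frame log-likelihoods of each graph node
                 (stand-in for the per-node Naive Bayes models)
    log_A      : (N, N) log transition weights of the action graph
                 (self-, forward-, and back-links; -inf where no edge)
    log_pi     : (N,)   log prior over starting nodes
    """
    T, N = obs_loglik.shape
    delta = log_pi + obs_loglik[0]  # best score ending at each node, t = 0
    for t in range(1, T):
        # best predecessor for each node, then add the frame likelihood
        delta = np.max(delta[:, None] + log_A, axis=0) + obs_loglik[t]
    return delta.max()

# Hypothetical decision rule matching the abstract: score the unlabeled
# video against every action graph and pick the maximum.
# scores = {a: viterbi_score(loglik(video, g), g.log_A, g.log_pi)
#           for a, g in action_graphs.items()}
# label = max(scores, key=scores.get)
```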