Abstract

Depth motion maps (DMMs), which encode both appearance and motion, are obtained by accumulating the absolute differences between consecutive frames of a depth video sequence. In this paper, each depth frame is first projected onto three orthogonal planes (front, side, and top), and DMMf, DMMs, and DMMt are generated for the three projection views, respectively. To describe the DMMs both locally and globally, three descriptors are computed: histogram of oriented gradients (HOG), local binary patterns (LBP), and a local Gist feature based on a dense grid. Exploiting the advantages of feature fusion and of using information entropy to quantitatively evaluate principal component analysis (PCA), the three descriptors are weighted and fused via an information-entropy-improved PCA to represent the depth video. For action recognition, a collaborative classifier is employed that combines $l_1$-norm and $l_2$-norm representations with adaptively weighted reconstruction errors, where the adaptive weights are determined by the entropy method. Experimental results on the MSR Action3D dataset show that the proposed approach is robust, discriminative, and stable.
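The DMM construction described above can be sketched as follows. This is a minimal illustration, not the authors' exact pipeline: the projection here voxelizes each depth frame into a hypothetical number of depth slices (`depth_bins`) and collapses the binary grid along each axis, then sums absolute frame-to-frame differences of each projected view.

```python
import numpy as np

def project_views(depth, depth_bins=32):
    """Project one depth frame onto the front, side, and top planes.

    Simplified sketch: depth values are discretized into `depth_bins`
    slices to form a binary voxel grid, then collapsed along each axis.
    `depth_bins` is a hypothetical parameter; the paper's projection
    may differ in detail.
    """
    h, w = depth.shape
    scale = (depth_bins - 1) / (depth.max() + 1e-6)
    d = (depth * scale).astype(int)          # depth slice index per pixel
    vox = np.zeros((h, w, depth_bins), dtype=np.float32)
    ii, jj = np.nonzero(depth > 0)           # occupied pixels only
    vox[ii, jj, d[ii, jj]] = 1.0
    # front (H x W), side (H x depth_bins), top (W x depth_bins)
    return vox.max(axis=2), vox.max(axis=1), vox.max(axis=0)

def depth_motion_maps(frames, depth_bins=32):
    """DMM_v = sum_t |proj_v(frame_{t+1}) - proj_v(frame_t)| per view v."""
    projs = [project_views(f, depth_bins) for f in frames]
    return [np.abs(np.diff(np.stack([p[v] for p in projs]), axis=0)).sum(axis=0)
            for v in range(3)]               # [DMM_f, DMM_s, DMM_t]
```

HOG, LBP, and Gist descriptors would then be extracted from each of the three accumulated maps.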
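The entropy method mentioned for the adaptive weights has a standard formulation, sketched below under the assumption that the paper follows it: criteria (columns) whose values vary more across samples carry more information, so they receive larger weights.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method over a nonnegative (n_samples, n_criteria)
    score matrix X. Returns one weight per criterion, summing to 1.

    Standard formulation; assumed, not taken verbatim from the paper.
    """
    n = X.shape[0]
    P = X / (X.sum(axis=0, keepdims=True) + 1e-12)   # column-normalize
    logP = np.where(P > 0, np.log(P + 1e-12), 0.0)   # avoid log(0)
    e = -(P * logP).sum(axis=0) / np.log(n)          # entropy in [0, 1]
    d = 1.0 - e                                      # degree of diversification
    return d / d.sum()
```

A constant column has maximal entropy and therefore receives (near-)zero weight, while a strongly varying column dominates the weighting.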
