Abstract

Based on depth information, this letter introduces a new local depth map feature that describes the local spatiotemporal details of human motion, together with a collaborative representation classifier with regularized least squares. By extracting a multilayered depth motion feature and then applying a multiscale Histogram of Oriented Gradients (HOG) descriptor to it, the proposed feature characterizes both the local temporal change of human motion and the local spatial structure (appearance) of an action. Instead of using class-specific dictionaries, the test action sample is represented collaboratively over a common shared dictionary. Moreover, we present an analytical solution of the collaborative representation that is independent of the query and can be precalculated as a projection matrix, leading to low computational cost at recognition time. Evaluations on the MSRAction3D and MSRGesture3D datasets demonstrate its effectiveness.
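The query-independent analytical solution mentioned above follows from the regularized least-squares (ridge) form of collaborative representation: the projection matrix P = (XᵀX + λI)⁻¹Xᵀ depends only on the pooled training dictionary X and can be precomputed once. The sketch below illustrates this idea on synthetic data; the dictionary, λ value, and the plain class-residual rule are illustrative assumptions, not the paper's actual features or tuning.

```python
import numpy as np

# Hedged sketch of collaborative representation classification with
# regularized least squares (CRC-RLS). X pools training samples of all
# classes as columns; its contents and lambda here are toy assumptions.

def crc_projection(X, lam):
    """Precompute the query-independent projection matrix
    P = (X^T X + lam*I)^{-1} X^T (closed-form ridge solution)."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T)

def crc_classify(P, X, labels, y):
    """Classify query y by the class with the smallest reconstruction
    residual using only that class's coding coefficients."""
    alpha = P @ y  # collaborative coding coefficients, one per atom
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(y - X[:, mask] @ alpha[mask])
    return min(residuals, key=residuals.get)

# Toy usage: two well-separated classes in a 4-D feature space.
rng = np.random.default_rng(0)
X = np.hstack([
    rng.normal(0.0, 0.1, (4, 5)) + np.array([[1.0, 0, 0, 0]]).T,  # class 0
    rng.normal(0.0, 0.1, (4, 5)) + np.array([[0, 0, 0, 1.0]]).T,  # class 1
])
labels = np.array([0] * 5 + [1] * 5)
P = crc_projection(X, lam=0.01)   # done once, offline
print(crc_classify(P, X, labels, np.array([1.0, 0.0, 0.0, 0.0])))
```

Because P is fixed, classifying a new query costs only one matrix-vector product plus per-class residuals, which is the source of the low recognition-time cost claimed in the abstract.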
