Abstract

The advent of depth sensors has facilitated a variety of visual recognition tasks, including human activity understanding. This paper presents a novel feature representation for recognizing human activities from video sequences captured by a depth camera. We assemble local neighboring hypersurface normals from a depth sequence to form the polynormal, which jointly encodes local motion and shape cues. The Fisher vector is then employed to aggregate the low-level polynormals into the Polynormal Fisher Vector. To capture the global spatial layout and temporal order, we employ a spatio-temporal pyramid to subdivide a depth sequence into a set of space-time cells; the Polynormal Fisher Vectors from these cells are combined into the final representation of a depth video. Experimental results demonstrate that our method achieves state-of-the-art results on two public benchmark datasets, MSRAction3D and MSRGesture3D.
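For concreteness, below is a minimal sketch (not the authors' implementation) of the pipeline the abstract describes, in Python with NumPy and scikit-learn. The function names, the neighborhood size `k`, the sampling stride, and the GMM size are all illustrative assumptions; the spatio-temporal pyramid step would apply `fisher_vector` once per space-time cell and concatenate the resulting vectors.

```python
import numpy as np
from sklearn.mixture import GaussianMixture


def surface_normals(depth_seq):
    """Unit hypersurface normals of a depth sequence of shape (T, H, W).

    Treats depth as a function d(x, y, t); the unnormalized normal at each
    point is (-dd/dx, -dd/dy, -dd/dt, 1).
    """
    dt, dy, dx = np.gradient(depth_seq)
    n = np.stack([-dx, -dy, -dt, np.ones_like(depth_seq)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)


def polynormals(normals, k=1, stride=4):
    """Concatenate normals over a (2k+1)^3 space-time neighborhood per point.

    k and stride are illustrative choices, not values from the paper.
    """
    T, H, W, _ = normals.shape
    feats = [normals[t - k:t + k + 1, y - k:y + k + 1, x - k:x + k + 1].ravel()
             for t in range(k, T - k)
             for y in range(k, H - k, stride)
             for x in range(k, W - k, stride)]
    return np.array(feats)


def fisher_vector(feats, gmm):
    """First- and second-order Fisher vector statistics w.r.t. a diagonal GMM."""
    q = gmm.predict_proba(feats)                       # (N, K) soft assignments
    mu, sigma, pi = gmm.means_, np.sqrt(gmm.covariances_), gmm.weights_
    N = feats.shape[0]
    d = (feats[:, None, :] - mu) / sigma               # (N, K, D) whitened diffs
    g_mu = (q[..., None] * d).sum(0) / (N * np.sqrt(pi)[:, None])
    g_sig = (q[..., None] * (d ** 2 - 1)).sum(0) / (N * np.sqrt(2 * pi)[:, None])
    fv = np.concatenate([g_mu.ravel(), g_sig.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))             # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)           # L2 normalization


# Illustrative usage: fit the GMM on training polynormals, then encode a clip.
# gmm = GaussianMixture(n_components=8, covariance_type="diag").fit(train_feats)
# fv = fisher_vector(polynormals(surface_normals(depth_clip)), gmm)
```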
