Abstract

Human action recognition from RGBD videos has recently attracted much attention in computer vision. Mainstream methods focus on designing highly discriminative features, which suffer from high dimensionality. In human experience, discriminative parts, such as hands or legs, play an important role in identifying human actions. Motivated by this observation, we propose a Random Forest (RF) Out-of-Bag (OB) estimation based approach to extract discriminative parts for each action. First, the features of each joint-based part are separately fed into an RF classifier, and the OB estimation of each part is used to evaluate the discriminative power of the joints in that part. Second, joints with high discriminative power over the whole dataset are selected to design the feature, so the feature dimension is reduced effectively. Experiments conducted on the MSR Action 3D and MSR Daily Activity 3D datasets show that our proposed approach outperforms state-of-the-art methods in accuracy with lower feature dimensions.

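As an illustration of the part-selection idea described in the abstract, the sketch below ranks joint-based parts by the out-of-bag (OOB) accuracy of a per-part random forest and keeps only the most discriminative parts. It is not the authors' code; the inputs `part_features`, `labels`, and the selection `threshold` are hypothetical placeholders, and scikit-learn's RandomForestClassifier stands in for whatever RF implementation the paper uses.

```python
# Minimal sketch of out-of-bag (OOB) based part selection.
# Assumptions: part_features maps a part name to an (n_samples, n_features)
# array of that part's joint features; labels holds the action class per sample.
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def rank_parts_by_oob(part_features, labels, n_estimators=200, seed=0):
    """Fit one random forest per joint-based part and return its OOB accuracy."""
    scores = {}
    for part, X in part_features.items():
        clf = RandomForestClassifier(
            n_estimators=n_estimators,
            bootstrap=True,    # OOB estimation requires bootstrap sampling
            oob_score=True,    # enable out-of-bag accuracy
            random_state=seed,
        )
        clf.fit(X, labels)
        scores[part] = clf.oob_score_  # OOB accuracy as the part's discrimination
    return scores


def select_discriminative_parts(scores, threshold=0.6):
    """Keep the parts whose OOB accuracy exceeds a chosen threshold."""
    return [part for part, score in scores.items() if score >= threshold]
```

Features from the retained parts would then be concatenated to form the final, lower-dimensional descriptor fed to the action classifier.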