Abstract

In this paper, we study the problem of human action recognition from multiple feature modalities. We propose bimodal hybrid centroid canonical correlation analysis (BHCCCA) and multimodal hybrid centroid canonical correlation analysis (MHCCCA) to learn a discriminative and informative shared space by modeling the correlations among different classes across two modalities (BHCCCA) and across three or more modalities (MHCCCA). We then introduce a new human action recognition framework that uses BHCCCA/MHCCCA to fuse different modalities (RGB, depth, skeleton, and accelerometer data). Performance evaluation on four publicly available datasets (MSR Action3D, UTD-MHAD, UTD-MHAD-Kinect V2, and Berkeley MHAD) demonstrates the effectiveness of the proposed framework.
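To make the fusion idea concrete, the following is a minimal sketch of classical two-view CCA fusion, not the proposed BHCCCA (which additionally exploits class-centroid correlations): two modality feature matrices are projected into a maximally correlated shared space, the projections are concatenated, and a classifier is trained on the fused representation. The feature dimensions, random data, and classifier choice below are illustrative assumptions, using scikit-learn's CCA.

```python
# Minimal sketch of CCA-based bimodal feature fusion -- a classical baseline,
# NOT the paper's BHCCCA. Data, dimensions, and classifier are hypothetical.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, d_depth, d_skel = 200, 110, 60          # assumed feature sizes
X_depth = rng.standard_normal((n_samples, d_depth))  # e.g., depth features
X_skel = rng.standard_normal((n_samples, d_skel))    # e.g., skeleton features
y = rng.integers(0, 10, n_samples)                   # e.g., 10 action classes

# Learn a shared space that maximizes correlation between the two views.
cca = CCA(n_components=20)
Z_depth, Z_skel = cca.fit_transform(X_depth, X_skel)

# Fuse by concatenating the projected views, then train a classifier.
Z = np.hstack([Z_depth, Z_skel])
clf = SVC(kernel="linear").fit(Z, y)
print("train accuracy:", clf.score(Z, y))
```

At test time, unseen samples would be mapped with `cca.transform` before concatenation; BHCCCA/MHCCCA replace the plain correlation objective with one that also separates class centroids, and MHCCCA extends the projection to three or more views.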
