Abstract

In this paper, we present action descriptors capable of performing both single-person and two-person action recognition. To exploit the shape information of action silhouettes, we detect junction points and geometric patterns along the silhouette boundary, while motion information is captured through optical flow points. We compute centroid distance signatures to construct the junction point-based and optical flow-based action descriptors. Taking advantage of distinctive poses, we extract key frames and construct a geometric pattern action descriptor based on histograms of geometric pattern classes obtained by a distance-based classification method. To exploit shape and motion information simultaneously, we follow an information fusion strategy and construct a joint action descriptor by combining the geometric pattern and optical flow descriptors. We evaluate these descriptors on two widely used action datasets: the Weizmann dataset (single-person actions) and the SBU Kinect Interaction dataset, in both its clean and noisy versions (two-person actions). The experimental results show that the individual descriptors achieve satisfactory performance on average, while the joint action descriptor performs best among the proposed descriptors owing to its high discriminative power and also outperforms state-of-the-art approaches.
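
The centroid distance signature mentioned above is a standard boundary shape descriptor; the minimal sketch below illustrates one plausible way to compute it for a set of silhouette boundary points (e.g., junction or optical flow points). The function name, resampling length, and normalization are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np

def centroid_distance_signature(contour, num_samples=128):
    """Centroid distance signature of a closed silhouette boundary.

    contour: (N, 2) array of (x, y) boundary points.
    num_samples: hypothetical fixed resampling length for comparability.
    """
    contour = np.asarray(contour, dtype=float)
    centroid = contour.mean(axis=0)                       # silhouette centroid
    distances = np.linalg.norm(contour - centroid, axis=1)
    # Resample to a fixed length so signatures of different silhouettes align.
    idx = np.linspace(0, len(distances) - 1, num_samples)
    signature = np.interp(idx, np.arange(len(distances)), distances)
    # Scale normalization (a common choice; the paper's exact scheme may differ).
    return signature / (signature.max() + 1e-12)
```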
