Abstract

In this paper, we propose a new temporal template approach for action recognition and person identification based on motion sequence information in masked depth video streams obtained from RGB-D data. This representation defines a membership function that models the change in motion based on the correlation between frames during motion flow. The energy images created with this function emphasize intervals of motion with greater change, while intervals with less change are suppressed. To extract distinctive features, the energy images obtained with the proposed function are given as input to convolutional neural networks and to different handcrafted-feature classifiers. The proposed method was evaluated on the BodyLogin, NATOPS, and SBU Kinect datasets and compared with existing temporal templates and recent methods. The results indicate that the proposed method provides both higher performance and better motion representation.
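For intuition only, the sketch below illustrates the general idea of a change-weighted temporal template over masked depth frames: inter-frame change drives a per-frame weight, and the weighted motion maps are accumulated into a single energy image. The abstract does not specify the paper's membership function, so the correlation-based weighting used here is an illustrative assumption, not the authors' formulation.

```python
import numpy as np

def change_weighted_energy_image(depth_frames, eps=1e-8):
    """Sketch of a change-weighted temporal template.

    depth_frames: array-like of shape (T, H, W) holding person-masked
    depth maps. Returns one 2-D energy image in which frames with larger
    inter-frame change contribute more (assumed weighting, see lead-in).
    """
    frames = np.asarray(depth_frames, dtype=np.float32)
    T = frames.shape[0]

    # Per-frame change score: 1 - normalized correlation between
    # consecutive frames (a stand-in for the paper's membership function).
    weights = np.zeros(T, dtype=np.float32)
    for t in range(1, T):
        a = frames[t - 1].ravel() - frames[t - 1].mean()
        b = frames[t].ravel() - frames[t].mean()
        corr = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
        weights[t] = 1.0 - corr  # low correlation -> large change

    # Normalize so high-change intervals are emphasized and
    # low-change intervals are suppressed.
    weights /= weights.sum() + eps

    # Accumulate weighted per-frame motion maps into one energy image.
    energy = np.zeros_like(frames[0])
    for t in range(1, T):
        energy += weights[t] * np.abs(frames[t] - frames[t - 1])

    # Scale to [0, 1] so the template can be fed to a CNN or a
    # handcrafted-feature classifier as an ordinary image.
    return energy / (energy.max() + eps)
```

A template built this way can be used directly as a single-channel input image for classification, which is the role the energy images play in the abstract.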
