Abstract
In recent years, human activity recognition from video has received considerable attention from computer vision researchers owing to its applications in fields such as surveillance, human-computer interaction, and smart home healthcare. For instance, activity recognition can be used in a surveillance setting to alert the relevant authority to potentially dangerous behavior. Similarly, activity recognition can improve human-computer interaction (HCI) in entertainment environments, for example by automatically recognizing a player's actions in a game so that an avatar can play on the player's behalf. Furthermore, activity recognition can support patient rehabilitation in healthcare systems, where recognizing a patient's actions helps facilitate the rehabilitation process. A video-based activity recognition system serves several goals, one of which is to provide information about people's behavior so that the system can proactively assist them with their tasks. A novel approach is proposed here for depth-video-based human activity recognition, using joint-based spatiotemporal features of depth body shapes and hidden Markov models. From the depth video, the different body parts are first segmented using a trained random forest. Spatial features, consisting of the 3-D joint pair angles, the mean and variance of the depth values, and the area of each segmented body part, are combined with motion features, representing the magnitude and direction of each joint in the next frame, to build the spatiotemporal features of a frame. The activity features are then enhanced using generalized discriminant analysis, which discriminates them nonlinearly and converts them into more robust features. Finally, the features are used to train one hidden Markov model per activity, and these models are later used for recognition. The proposed approach shows superior recognition performance compared with conventional activity recognition approaches.
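For illustration only, the sketch below shows one way the per-frame spatiotemporal feature vector and the per-activity hidden Markov models described above might be assembled. It is a minimal interpretation of the abstract, not the authors' implementation: the helper names, the chain ordering of joints, the treatment of the random-forest segmentation as a given input, the omission of the generalized discriminant analysis step, and the use of hmmlearn's GaussianHMM are all assumptions.

```python
# Hypothetical sketch of the pipeline described in the abstract.
# All names, dimensions, and library choices are assumptions.
import numpy as np
from hmmlearn import hmm  # assumed HMM library; the paper does not name one


def joint_pair_angles(joints):
    """3-D angles (radians) between consecutive bone vectors.

    joints: (J, 3) array of joint positions in one frame, assumed to be
    ordered along a kinematic chain (a simplification for this sketch).
    """
    vectors = np.diff(joints, axis=0)                 # (J-1, 3) bone vectors
    a, b = vectors[:-1], vectors[1:]                  # consecutive pairs
    cos = np.einsum("ij,ij->i", a, b) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-8)
    return np.arccos(np.clip(cos, -1.0, 1.0))


def frame_features(depth, part_masks, joints, next_joints):
    """Spatiotemporal feature vector for a single depth frame.

    depth:       (H, W) depth image
    part_masks:  (P, H, W) boolean masks of segmented body parts
                 (e.g. produced by a trained random forest, as in the paper)
    joints:      (J, 3) joint positions in this frame
    next_joints: (J, 3) joint positions in the next frame
    """
    spatial = [joint_pair_angles(joints)]
    for mask in part_masks:                           # per-part depth statistics
        values = depth[mask]
        spatial.append([values.mean(), values.var(), mask.sum()])
    motion = next_joints - joints                     # per-joint displacement
    magnitude = np.linalg.norm(motion, axis=1)
    direction = motion / (magnitude[:, None] + 1e-8)  # unit direction vectors
    return np.concatenate([np.concatenate(spatial), magnitude, direction.ravel()])


def train_activity_hmms(sequences_per_activity, n_states=5):
    """Train one Gaussian HMM per activity from lists of frame-feature sequences."""
    models = {}
    for activity, sequences in sequences_per_activity.items():
        X = np.vstack(sequences)                      # stacked per-frame features
        lengths = [len(s) for s in sequences]         # frame counts per sequence
        model = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        models[activity] = model
    return models


def recognize(models, sequence):
    """Assign a test sequence to the activity model with the highest log-likelihood."""
    return max(models, key=lambda a: models[a].score(sequence))
```

In this reading, recognition reduces to a maximum-likelihood decision over the per-activity models; the feature-enhancement step via generalized discriminant analysis would be applied to the frame features before HMM training and scoring.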