Abstract

Geometric dynamic configurations of body joints play an essential role in distinguishing human activities. However, many existing human activity recognition approaches cannot automatically learn these configurations from sequences of joints in four-dimensional (spatial and temporal) space. In this paper, the authors propose an automatic joint-configuration learning method based on dictionary learning and sparse representation. The proposed method has the following features: 1) it automatically learns, in a simple way, the dynamic spatio-temporal geometric configurations of the body joints involved in activities; 2) it dispenses with hand-crafted feature design and provides a new way to organize joint-coordinate data as fixed-length column vectors suitable for dictionary learning; 3) it replaces the conventional bag-of-words model with sparse coding: words in the learned dictionary capture sub-activity features, and the frequencies with which different words appear in different activities characterize the global activity category; 4) it is robust to time misalignment and can classify video sequences of any length (online classification) in real time; 5) it is easy to combine with other forms of data for better performance, owing to its data-driven nature and flexible framework. The proposed method is tested on three public human activity recognition benchmarks; its results are better than previously reported results on the CAD-60 dataset, and comparable to them on both the MSR Action 3D and MSR Daily Activity datasets (source code is publicly available at https://github.com/jinqijinqi/SparseCodingDictionaryLearningHumanActivityRecognition ).
