Abstract
Preserving, maintaining, and teaching traditional martial arts are important activities in social life: they help preserve national culture and provide exercise and self-defense skills for practitioners. However, traditional martial arts involve many different postures, and the movements of the body and its parts are diverse. Estimating the actions of the human body still presents many challenges, such as accuracy and occlusion. In this paper, we survey several strong studies from recent years on 3-D human pose estimation. We compile statistical tables by year and summarize the typical results of these studies on the Human 3.6M dataset. We also present a comparative study of 3-D human pose estimation based on methods that use a single image. These methods use a Convolutional Neural Network (CNN) for 2-D pose estimation and then a 3-D pose library for mapping the 2-D results into 3-D space. The CNN models are trained on benchmark datasets such as the MSCOCO Keypoints Challenge dataset [1], Human 3.6M [2], the MPII dataset [3], LSP [4], [5], etc. Finally, we publish a dataset of Vietnamese traditional martial arts in Binh Dinh province for evaluating 3-D human pose estimation. Quantitative results are presented and evaluated.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Highlights
Estimating and predicting the actions of the human body is a well-studied problem in the robotics and computer vision communities. 3-D human pose estimation is applied in many other areas, such as sports analysis, performance evaluation, games with 3-D graphics, and health care and protection
The second method, which uses a sequence of images, combines 2-D human pose estimation with geometric transformations/mapping to build the skeleton of the person in 3-D space [7]
The main contributions are: (1) we survey recent 3-D human pose estimation techniques; (2) we propose a comparative study of 3-D human pose estimation based on methods that use a single image, captured with an MS Kinect sensor v1; (3) we propose evaluation measures and publish a dataset of Vietnamese traditional martial arts in Binh Dinh province
Summary
Estimating and predicting the actions of the human body is a well-studied problem in the robotics and computer vision communities. 3-D human pose estimation is applied in many other areas, such as sports analysis, performance evaluation, games with 3-D graphics, and health care and protection. Estimation in 3-D space is very difficult because the feature vectors are hard to extract and train: 3-D data is much more complex than data in 2-D (image) space, multiple people may need to be estimated in outdoor environments, and the data can be noisy (missing parts of the human body). There are two approaches: the first recovers the 3-D human pose from a single image; the second recovers it from a sequence of images [6]. The first method, 3-D human pose estimation from a single image, usually performs 2-D human pose estimation and then maps the result into 3-D space. The second method, which uses a sequence of images, combines 2-D human pose estimation with geometric transformations (affine transformations)/mapping to build the skeleton of the person in 3-D space [7]
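The single-image pipeline described above (2-D keypoint estimation followed by a lookup in a 3-D pose library) can be sketched with a simple exemplar-matching step: each library pose is projected orthographically to 2-D, and the pose whose normalized projection is closest to the detected 2-D keypoints is returned. This is a minimal illustrative sketch, not the authors' implementation; the function name, the toy pose library, and the normalization scheme are all assumptions.

```python
import numpy as np

def lift_2d_to_3d(pose_2d, pose_library_3d):
    """Return the library 3-D pose (J x 3) whose orthographic (x, y)
    projection best matches the 2-D keypoints (J x 2), after removing
    translation and scale from both point sets."""
    best_pose, best_err = None, np.inf
    for pose_3d in pose_library_3d:
        proj = pose_3d[:, :2]  # orthographic projection: drop the z axis
        # Center and scale-normalize both point sets before comparing.
        a = proj - proj.mean(axis=0)
        a /= np.linalg.norm(a)
        b = pose_2d - pose_2d.mean(axis=0)
        b /= np.linalg.norm(b)
        err = np.sum((a - b) ** 2)
        if err < best_err:
            best_pose, best_err = pose_3d, err
    return best_pose

# Toy library of two 3-joint "poses" (hypothetical data, J x 3 arrays).
library = [
    np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.5], [1.0, 1.0, 0.2]]),
    np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.1], [1.0, 1.0, 0.8]]),
]
# A 2-D detection that coincides with the first pose's projection.
obs_2d = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
match = lift_2d_to_3d(obs_2d, library)
```

In a real system the 2-D keypoints would come from a CNN detector and the library from a motion-capture dataset such as Human 3.6M; the matching would also account for camera parameters rather than assuming orthographic projection.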