Abstract
Among current human pose estimation methods, two-dimensional (2D) estimation is the most mature. However, 2D information cannot fully reflect the real posture of the human body in space, while existing three-dimensional (3D) estimation algorithms remain imperfect and suffer from limited accuracy. In this paper, a new 3D human pose estimation method is proposed based on the fusion of binocular stereo vision and a convolutional neural network (CNN). To verify its accuracy, we captured action images with both the proposed method and Microsoft Kinect V2 and reconstructed the 3D pose of the human body from each. The results show that, within a working distance of 4450-4700 mm, the minimum root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) of the actions performed by the testers reached 20.469, 16.408, and 4.508, respectively. The joint-length accuracy of the human skeleton restored by the proposed method was higher than that of Kinect V2, and its reconstruction of human actions in 3D space was more faithful. The experimental results suggest that the proposed method may reduce the cost and limitations of current human motion capture, and could be applied to stroke patient rehabilitation and community rehabilitation in the future.
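The sketch below is not the authors' implementation; it is a minimal illustration, under standard assumptions, of the two ideas the abstract combines: recovering 3D joint positions from CNN-detected 2D keypoints in a rectified binocular pair via stereo triangulation, and scoring the result with the reported RMSE, MAE, and MAPE metrics. All function names and parameters (`triangulate_joints`, `focal_px`, `baseline_mm`, etc.) are hypothetical.

```python
# Hypothetical sketch: stereo triangulation of CNN keypoints + error metrics.
import numpy as np

def triangulate_joints(left_kpts, right_kpts, focal_px, baseline_mm, cx, cy):
    """Recover 3D joint coordinates (mm) from matched 2D keypoints.

    left_kpts, right_kpts: (J, 2) pixel coordinates from a 2D pose CNN,
    assumed to come from rectified images so disparity is purely horizontal.
    """
    disparity = left_kpts[:, 0] - right_kpts[:, 0]               # (J,)
    depth = focal_px * baseline_mm / np.maximum(disparity, 1e-6)  # Z = f*B/d
    x = (left_kpts[:, 0] - cx) * depth / focal_px                 # pinhole back-projection
    y = (left_kpts[:, 1] - cy) * depth / focal_px
    return np.stack([x, y, depth], axis=1)                        # (J, 3)

def rmse(pred, ref):
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def mae(pred, ref):
    return float(np.mean(np.abs(pred - ref)))

def mape(pred, ref):
    return float(np.mean(np.abs((pred - ref) / ref)) * 100.0)
```

In this reading, the CNN supplies the 2D joint locations in each view, and the stereo geometry lifts them to 3D; the three metrics then compare the reconstructed joints against a reference capture such as the Kinect V2 baseline described in the abstract.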