Abstract
This research introduces a deep-learning-based technique for remote rehabilitative analysis of human movements and postures captured in images. We present a polynomial Pareto-optimized deep-learning architecture that applies inverse kinematics to sort and rearrange the human skeleton joints produced by RGB-based two-dimensional (2D) skeleton recognition algorithms, yielding a full three-dimensional (3D) model as the final result. The proposed method extracts the complete humanoid character motion curve, which is then bound to a 3D mesh for real-time preview. Our method maintains high joint-mapping accuracy with smooth motion frames while ensuring anthropometric regularity, achieving a mean average precision (mAP) of 0.950 for predicting the joint positions of a single subject. Furthermore, the proposed system, trained on the MoVi dataset, enables seamless posture evaluation in a 3D environment, allowing participants to be examined from numerous perspectives using a single recorded camera feed. We present evaluation results on our own self-collected dataset of human posture videos, together with cross-validation on the benchmark MPII and KIMORE datasets.
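The pipeline the abstract describes (2D joints from an RGB detector, lifted to 3D and smoothed across frames before being bound to a mesh) can be sketched roughly as follows. This is a minimal illustration, not the paper's architecture: the `depth_model` stand-in, the 17-joint layout, and the exponential smoothing are all assumptions for demonstration.

```python
import numpy as np

def lift_to_3d(joints_2d, depth_model):
    """Attach a predicted depth to each (x, y) joint to form (x, y, z).

    In the paper's method a learned, Pareto-optimized network plays the
    role of depth_model; here it is an arbitrary callable (assumption).
    """
    depths = depth_model(joints_2d)                 # shape (num_joints,)
    return np.concatenate([joints_2d, depths[:, None]], axis=1)

def smooth_motion(frames, alpha=0.8):
    """Exponential smoothing over a sequence of 3D skeletons,
    approximating the 'smooth motion frames' property."""
    smoothed = [frames[0]]
    for frame in frames[1:]:
        smoothed.append(alpha * smoothed[-1] + (1 - alpha) * frame)
    return np.stack(smoothed)

# Toy usage: 17 COCO-style joints (assumed layout) with a constant-depth
# stand-in for the learned lifting model.
joints_2d = np.random.rand(17, 2)
joints_3d = lift_to_3d(joints_2d, lambda j: np.ones(len(j)))
sequence = smooth_motion(np.stack([joints_3d] * 5))
```

The resulting per-frame `(x, y, z)` joint arrays are what would drive a rigged 3D mesh for the real-time preview described above.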