Abstract

Marker-less skeleton tracking methods are widely used for applications such as computer animation, human action recognition, human-robot collaboration, and humanoid robot motion control. For robot motion control in particular, vision-based tracking that uses the humanoid's 3D camera together with a robust and accurate tracking algorithm is an attractive solution. In this paper we quantitatively evaluate two vision-based marker-less skeleton tracking algorithms (Igalia's Skeltrack skeleton tracking, and an adaptable and customizable method that combines color and depth information from the Kinect) and perform a comparative analysis of their upper-body tracking results. We have generated a common dataset of human motions by synchronizing an XSENS 3D Motion Capture System, used as ground truth, with video recordings from a 3D sensor device. The dataset could also be used to evaluate other full-body skeleton tracking algorithms. In addition, a set of evaluation metrics is presented.
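
To make the comparison concrete, the sketch below shows one common way such an evaluation can be set up: the mean per-joint Euclidean position error between the tracked skeleton and the motion-capture ground truth, computed after the two streams have been time-synchronized and expressed in the same coordinate frame. The joint set, array shapes, and function names here are illustrative assumptions and are not taken from the paper, which does not specify its metrics in the abstract.

```python
# Minimal sketch of a per-joint position error metric (assumed, not the
# paper's own definition). Inputs are assumed time-synchronized and
# registered to a common coordinate frame (e.g. XSENS data transformed
# into the 3D sensor's frame).
import numpy as np

# Assumed upper-body joint set for illustration only.
UPPER_BODY_JOINTS = ["head", "neck", "l_shoulder", "r_shoulder",
                     "l_elbow", "r_elbow", "l_hand", "r_hand"]

def mean_per_joint_error(tracked, ground_truth):
    """Mean Euclidean distance per joint over all frames.

    tracked, ground_truth: arrays of shape (n_frames, n_joints, 3).
    Returns an array of shape (n_joints,) with the average error
    of each joint in the same units as the input (e.g. metres).
    """
    errors = np.linalg.norm(tracked - ground_truth, axis=2)  # (n_frames, n_joints)
    return errors.mean(axis=0)

if __name__ == "__main__":
    # Synthetic data stands in for real recordings in this sketch.
    rng = np.random.default_rng(0)
    gt = rng.normal(size=(100, len(UPPER_BODY_JOINTS), 3))
    est = gt + rng.normal(scale=0.02, size=gt.shape)  # simulated tracker noise
    for name, err in zip(UPPER_BODY_JOINTS, mean_per_joint_error(est, gt)):
        print(f"{name}: {err:.3f} m")
```

In practice such a metric would be computed per motion sequence and per algorithm on the synchronized dataset, so the two trackers can be ranked joint by joint against the XSENS ground truth.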
