Abstract

Tracking and identifying articulated objects has received growing attention in computer vision over the past decade. In marker-based optical motion capture (MoCap) systems, the articulated movement of near-rigid segments is represented as a sequence of moving dots with known 3D coordinates, corresponding to the captured marker positions. We propose a segment-based articulated model-fitting algorithm that addresses self-initializing identification and pose estimation from a single frame of data in such point-feature tracking systems; this step is crucial for recovering the complete motion sequence. Experimental results on synthetic poses and real-world human motion capture data demonstrate the performance of the algorithm.
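A core building block of segment-based model fitting is aligning a rigid segment's model markers to their observed 3D positions. The sketch below illustrates this with a standard least-squares rigid alignment (the Kabsch/Procrustes method); it is a generic illustration under that assumption, not the paper's specific algorithm, and the function name is hypothetical.

```python
import numpy as np

def fit_rigid_segment(model_pts, observed_pts):
    """Least-squares rigid fit (Kabsch): find rotation R and translation t
    mapping model marker positions onto observed 3D marker positions.
    Generic illustration of fitting one near-rigid segment to marker data.

    model_pts, observed_pts: (N, 3) arrays of corresponding points.
    Returns (R, t) with observed ~= model @ R.T + t.
    """
    mc = model_pts.mean(axis=0)          # model centroid
    oc = observed_pts.mean(axis=0)       # observed centroid
    # Cross-covariance of the centered point sets
    H = (model_pts - mc).T @ (observed_pts - oc)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = oc - R @ mc
    return R, t
```

In a full articulated fit, a routine like this would be applied per segment, with joint constraints tying neighboring segments together.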
