Abstract

Successful tracking of articulated hand motion is the first step in many computer vision applications such as gesture recognition. However, the nonrigidity of the hand, complex background scenes, and occlusion make tracking a challenging task. We divide and conquer the tracking problem by decomposing complex motion into nonrigid motion and rigid motion. A learning-based algorithm for analyzing nonrigid motion is presented. In this method, appearance-based models are learned from image data, and the underlying motion patterns are explored using a generative model. Nonlinear dynamics of the articulation, such as fast appearance deformation, can thus be analyzed without resorting to a complex kinematic model. We approximate the rigid motion as planar motion, which can be handled by a filtering method. We unify our treatments of nonrigid motion and rigid motion into a single, robust Bayesian framework and demonstrate the efficacy of this method by performing successful tracking in the presence of significant occlusion and clutter.
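To make the filtering component of the framework concrete, the following is a minimal sketch of one predict-resample-update cycle of a particle filter over a planar rigid pose (x, y, theta), the kind of filtering method the abstract alludes to for rigid motion. This is an illustrative assumption, not the authors' implementation: the state parameterization, the random-walk dynamics, and the observe_likelihood callable (which would score a learned appearance model placed at the hypothesized pose) are all hypothetical.

```python
import numpy as np

def particle_filter_step(particles, weights, observe_likelihood,
                         motion_noise=(2.0, 2.0, 0.05)):
    """One Bayesian filtering cycle for planar rigid motion.

    particles: (N, 3) array of [x, y, theta] pose hypotheses.
    weights:   (N,) normalized importance weights.
    observe_likelihood: callable mapping a pose to p(image | pose);
        in this setting it would evaluate the learned appearance
        model at the hypothesized planar pose (hypothetical hook).
    """
    n = len(particles)

    # Resample proportionally to the current weights
    # (systematic resampling).
    positions = (np.arange(n) + np.random.rand()) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    particles = particles[idx]

    # Predict: random-walk dynamics on the planar pose
    # (an assumed, deliberately simple motion model).
    particles = particles + np.random.randn(n, 3) * motion_noise

    # Update: re-weight each hypothesis by its image likelihood.
    weights = np.array([observe_likelihood(s) for s in particles])
    weights = weights / (weights.sum() + 1e-12)

    # Posterior mean serves as the pose estimate for this frame.
    estimate = (particles * weights[:, None]).sum(axis=0)
    return particles, weights, estimate
```

Because the update step only requires evaluating a likelihood per hypothesis, the same cycle accommodates heavy occlusion: hypotheses over occluded regions simply receive low likelihood, and the posterior concentrates on poses consistent with the visible appearance.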
