Abstract

Hand shapes vary with viewpoint and hand rotation. In addition, the high degree of freedom of hand configurations makes it difficult to track hand shape variations. This paper presents a new manifold embedding method that models hand shape variations across different hand configurations and across different views due to hand rotation. Instead of traditional silhouette images, hand shapes are modeled using depth map images, which provide rich shape information that is invariant to illumination changes. Like shape silhouettes, these depth map images vary with the viewing direction. Sample data along view circles are collected for all hand configuration variations. A new manifold embedding method is proposed that uses a torus in a 4D embedding space to model low-dimensional hand configuration and hand rotation as the product of three circular manifolds. After learning a nonlinear mapping from the proposed embedding space to depth map images, arbitrary shape variations with hand rotation can be tracked using a particle filter on the embedding manifold. Experimental results on both synthetic and real data show accurate estimation of hand rotation through the estimated view parameters, and of hand configuration from key hand poses and hand configuration phases.

Keywords: Torus Manifold, Hand Shape, Hand Configuration, Hand Rotation, Closed Hand
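The abstract does not give implementation details, but the core construction can be illustrated with a minimal sketch. Assuming the three circular factors are two view angles for hand rotation and one cyclic hand-configuration phase (the names, radii, and noise model below are hypothetical illustrations, not the authors' implementation), the product of three circles can be embedded as a torus in a 4D space, and a particle filter can propagate angular states on that manifold:

```python
import numpy as np

def torus_embedding(theta, phi, psi, R1=4.0, R2=2.0, r=1.0):
    """Embed a point of the 3-torus (product of three circles) in R^4.

    theta, phi : view-circle angles for hand rotation (radians)
    psi        : cyclic hand-configuration phase (radians)
    The radii must satisfy R1 > R2 + r and R2 > r so the map is injective.
    """
    w = R2 + r * np.cos(psi)      # radius contributed by the configuration circle
    u = R1 + w * np.cos(phi)      # radius after adding the second view circle
    return np.array([
        u * np.cos(theta),
        u * np.sin(theta),
        w * np.sin(phi),
        r * np.sin(psi),
    ])

def propagate_particles(angles, sigma=0.1, rng=np.random.default_rng()):
    """One particle-filter prediction step on the torus: perturb each angular
    coordinate with Gaussian noise and wrap back to [0, 2*pi)."""
    noisy = angles + rng.normal(scale=sigma, size=angles.shape)
    return np.mod(noisy, 2.0 * np.pi)

# Example: 100 particles, each a (theta, phi, psi) triple on the 3-torus.
particles = np.random.default_rng(0).uniform(0.0, 2.0 * np.pi, size=(100, 3))
particles = propagate_particles(particles)
points_4d = np.array([torus_embedding(*p) for p in particles])  # shape (100, 4)
```

In a full tracker, each propagated particle would be pushed through the learned nonlinear mapping from the embedding space to a depth map image and weighted by its agreement with the observed depth map; the sketch above only covers the torus parameterization and the prediction step.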

