Abstract

In this paper, we present a hand-motion-based method for simultaneous articulation-model estimation and segmentation of objects in RGB-D images. The hand-motion information is first used to compute an initial guess of the articulation model (prismatic or revolute joint) of the target object. Subsequently, the hand trajectory is used as a constraint to optimize the articulation parameters during the ICP-based alignment of the sequential point clouds of the object extracted from the RGB-D images. Finally, the target regions are selected from the cluster of aligned point clouds that move consistently with the detected articulation model. The experimental results demonstrate the robustness of the proposed method for various types of objects.
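The first step of the pipeline, distinguishing a prismatic from a revolute joint from the hand trajectory, could be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `classify_joint`, the line-fit criterion, and the residual threshold are all assumptions introduced here for clarity.

```python
import numpy as np

def classify_joint(hand_traj, tol=0.01):
    """Hypothetical sketch of the initial joint-type guess.

    Fits a line to the 3-D hand trajectory; if the off-line residual
    is small the motion is translational (prismatic joint), otherwise
    it is rotational (revolute joint).
    """
    pts = np.asarray(hand_traj, dtype=float)
    centered = pts - pts.mean(axis=0)
    # Singular values give the energy along the principal directions;
    # a collinear trajectory has almost all energy in the first one.
    _, s, _ = np.linalg.svd(centered, full_matrices=False)
    off_line_residual = np.sqrt(np.sum(s[1:] ** 2) / len(pts))
    return "prismatic" if off_line_residual < tol else "revolute"

# Straight drawer-pull motion -> prismatic
drawer = [[0.1 * t, 0.0, 0.0] for t in range(10)]
# Door-opening arc (quarter circle, radius 0.5 m) -> revolute
door = [[0.5 * np.cos(a), 0.5 * np.sin(a), 0.0]
        for a in np.linspace(0.0, np.pi / 2, 10)]
print(classify_joint(drawer))  # prismatic
print(classify_joint(door))    # revolute
```

In the paper's pipeline this initial guess would then seed the constrained ICP optimization of the full articulation parameters; the sketch above covers only the classification step.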
