Abstract

One challenge in computer vision is the joint reconstruction of deforming objects from colour and depth videos. So far, much research has focused on deformation reconstruction based on colour images only, but as range cameras like the recently released Kinect become more and more common, the incorporation of depth information becomes feasible. In this article, a new method is introduced to track object deformation in depth and colour image data. The tracking is done by translating, rotating, and deforming a prototype of an object such that it best fits the depth and colour data. The prototype can either be cut out from the first depth/colour frame of the input sequence, or an already known textured geometry can be used. A NURBS-based [2] deformation function decouples the geometric complexity of the object from the complexity of the deformation itself, providing a relatively low-dimensional space in which to describe arbitrary 'realistic' deformations. This is done by first approximating the object surface with a standard NURBS function N and then registering every object vertex to that surface, as depicted in Figure 1.
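The abstract only outlines the surface-approximation and vertex-registration step. As a rough illustration, the sketch below shows one possible way to evaluate a NURBS surface via Cox-de Boor basis functions and to register mesh vertices to their nearest surface samples over a parameter grid. All names (bspline_basis, evaluate_surface, register_vertices), the clamped-knot assumption, and the grid-search registration are illustrative assumptions, not the method described in the paper.

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    # Cox-de Boor recursion for the i-th B-spline basis function of degree p.
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    denom = knots[i + p] - knots[i]
    if denom > 0:
        left = (u - knots[i]) / denom * bspline_basis(i, p - 1, u, knots)
    denom = knots[i + p + 1] - knots[i + 1]
    if denom > 0:
        right = (knots[i + p + 1] - u) / denom * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def evaluate_surface(u, v, ctrl, weights, knots_u, knots_v, p=3, q=3):
    # Rational (NURBS) surface point N(u, v) from a grid of weighted control points.
    nu, nv, _ = ctrl.shape
    num, den = np.zeros(3), 0.0
    for i in range(nu):
        bi = bspline_basis(i, p, u, knots_u)
        if bi == 0.0:
            continue
        for j in range(nv):
            w = bi * bspline_basis(j, q, v, knots_v) * weights[i, j]
            num += w * ctrl[i, j]
            den += w
    return num / den if den > 0 else num

def register_vertices(vertices, ctrl, weights, knots_u, knots_v, samples=50):
    # Assign each mesh vertex the (u, v) parameters of its closest surface sample
    # and keep the residual offset, so the vertex can follow the deforming surface.
    us = np.linspace(knots_u[0], knots_u[-1] - 1e-9, samples)
    vs = np.linspace(knots_v[0], knots_v[-1] - 1e-9, samples)
    grid = np.array([[evaluate_surface(u, v, ctrl, weights, knots_u, knots_v)
                      for v in vs] for u in us])              # samples x samples x 3
    params, offsets = [], []
    for x in vertices:
        d = np.linalg.norm(grid - x, axis=2)
        iu, iv = np.unravel_index(np.argmin(d), d.shape)
        params.append((us[iu], vs[iv]))
        offsets.append(x - grid[iu, iv])
    return np.array(params), np.array(offsets)
```

Because the deformation is expressed through the (comparatively few) NURBS control points while the registered vertices simply follow their (u, v) anchors, the dimensionality of the deformation space stays independent of the mesh resolution, which is the decoupling the abstract refers to.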
