Abstract

This paper presents a new technique that exploits the deformation and motion properties of animated meshes to find spatial correspondences between them. Given a pair of animated meshes exhibiting semantically similar motion, we extract a sparse set of feature points on each mesh and establish spatial correspondences among them so that points with similar motion behavior are matched. At the core of our technique is a new, dynamic feature descriptor named AnimHOG, which encodes local deformation characteristics. AnimHOG is obtained by computing the gradient of a scalar field inside the spatiotemporal neighborhood of a point of interest, where the scalar values are derived from the deformation characteristic associated with each vertex at each frame. The final matching is formulated as a discrete optimization problem that assigns each feature point on the source mesh to a feature point on the target mesh so that the descriptor similarity between corresponding feature pairs, as well as the compatibility and consistency measured across pairs of correspondences, is maximized. Consequently, reliable correspondences can be found even between meshes of very different shapes, as long as their motions are similar. We demonstrate the performance of our technique through the high quality of the matching results obtained on a number of animated mesh pairs.
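The abstract does not include the authors' implementation; the sketch below only illustrates the general idea of a HOG-style spatio-temporal descriptor and a descriptor-similarity matching step. The patch representation (a grid-sampled local parameterization), the choice of scalar (a strain-like measure), the bin count, and the greedy nearest-neighbor matching are all assumptions introduced for illustration, not the paper's AnimHOG formulation.

```python
import numpy as np

def animhog_like_descriptor(scalar_patch, n_bins=9, eps=1e-8):
    """HOG-style descriptor from a spatio-temporal scalar patch (illustrative only).

    scalar_patch : (T, H, W) array of a per-vertex deformation scalar
                   (e.g., a strain-like measure; an assumption here) sampled on a
                   local surface parameterization around a feature point over T frames.
    Returns an L2-normalized histogram of oriented gradients of length n_bins.
    """
    # Spatial gradients of the time-averaged scalar field over the local patch
    gy, gx = np.gradient(scalar_patch.mean(axis=0))
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)          # angles in [-pi, pi]

    # Vote gradient magnitudes into orientation bins
    bins = ((orientation + np.pi) / (2.0 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=magnitude.ravel(), minlength=n_bins)

    # Normalize so descriptors from differently shaped meshes stay comparable
    return hist / (np.linalg.norm(hist) + eps)

def match_by_descriptor(src_descs, tgt_descs):
    """Greedy nearest-neighbor matching on descriptor distance alone.

    The paper's discrete optimization additionally maximizes pairwise compatibility
    and consistency across correspondences, which this unary-only sketch omits.
    """
    dists = np.linalg.norm(src_descs[:, None, :] - tgt_descs[None, :, :], axis=2)
    return dists.argmin(axis=1)               # best target index per source point

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic patches: 5 frames, 16x16 local samples per feature point
    src = np.stack([animhog_like_descriptor(rng.random((5, 16, 16))) for _ in range(4)])
    tgt = np.stack([animhog_like_descriptor(rng.random((5, 16, 16))) for _ in range(6)])
    print("matches:", match_by_descriptor(src, tgt))
```

In particular, replacing the greedy step with an optimization that also scores pairs of correspondences is what allows the method described above to remain reliable across meshes of very different shapes.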
