Abstract

Geometry-centric shape animation, usually represented as a dynamic mesh with fixed connectivity and time-deforming geometry, is becoming ubiquitous in digital entertainment and related graphics applications. However, digital animation with fine details, which requires richly textured mesh geometry, consumes a significant amount of storage space, and compactly storing and efficiently transmitting such meshes remains technically challenging. In this paper, we propose a novel key-frame-based dynamic mesh compression method that decomposes the meshes into low-frequency and high-frequency parts, applying piecewise manifold harmonic bases to reduce the spatial-temporal redundancy of the primary poses and deformation transfer to recover the high-frequency details. First, we partition the animated meshes into clusters of similar poses, so that the primary pose of each mesh in a cluster can be expressed as a linear combination of manifold harmonic bases derived from that cluster's key-frame. Second, we recover the geometric details of each primary pose via deformation transfer, which reconstructs the details from the key-frames. Consequently, compressing a time-varying mesh requires storing only a small number of key-frames and a few harmonic coefficients, which saves considerable storage compared with traditional methods that store the bases explicitly. Finally, we compress the key-frames with a state-of-the-art static mesh compression method and apply second-order linear prediction coding to the harmonic coefficients to further reduce spatial-temporal redundancy.
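The core decomposition can be illustrated with a simplified sketch. The paper's manifold harmonic bases are eigenfunctions of a discrete Laplace operator on the key-frame mesh; the sketch below substitutes a plain combinatorial graph Laplacian for the cotangent-weighted one and shows how a primary (low-frequency) pose is represented by a few coefficients. Function names and the toy tetrahedron are illustrative assumptions, not the authors' code.

```python
import numpy as np

def combinatorial_laplacian(n_verts, edges):
    """Build the combinatorial graph Laplacian L = D - A of a mesh's
    edge graph. (A stand-in for the cotangent Laplacian whose
    eigenvectors give the manifold harmonic bases.)"""
    L = np.zeros((n_verts, n_verts))
    for i, j in edges:
        L[i, j] -= 1.0
        L[j, i] -= 1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    return L

def low_frequency_pose(verts, L, k):
    """Project vertex positions (V, 3) onto the k lowest-frequency
    eigenvectors of L and reconstruct the smooth primary pose.
    Only the (k, 3) coefficient matrix needs to be stored per frame;
    the basis itself is recomputed from the cluster's key-frame."""
    _, vecs = np.linalg.eigh(L)     # eigenvectors sorted by eigenvalue
    H = vecs[:, :k]                 # (V, k) harmonic basis
    coeffs = H.T @ verts            # (k, 3) spectral coefficients
    return H @ coeffs, coeffs

# Toy example: a tetrahedron (4 vertices, complete edge graph).
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
L = combinatorial_laplacian(4, edges)
smooth, coeffs = low_frequency_pose(verts, L, k=2)
```

With k equal to the vertex count the reconstruction is exact; truncating to small k keeps only the smooth primary pose, and the residual high-frequency detail is what the paper recovers separately via deformation transfer.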
Our comprehensive experiments and thorough evaluations on various datasets demonstrate that our method achieves a high compression ratio while simultaneously preserving high-fidelity geometric details and keeping perceptible distortion low, as quantitatively characterized by the popular Karni–Gotsman error and our newly devised local rigidity error metric.
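For concreteness, a common formulation of the Karni–Gotsman error reports the Frobenius distance between the original and reconstructed animations as a percentage, normalized by the deviation of the original from its per-frame centroids. The sketch below follows that formulation; the function name and array layout are assumptions for illustration, not the paper's evaluation code.

```python
import numpy as np

def kg_error(orig, recon):
    """Karni-Gotsman error (percent) between two animations given as
    (T, V, 3) arrays of vertex positions over T frames.
    100 * ||A - A~||_F / ||A - E(A)||_F, where E(A) replaces every
    vertex in a frame by that frame's centroid."""
    T = orig.shape[0]
    A = orig.reshape(T, -1)
    B = recon.reshape(T, -1)
    centroids = orig.mean(axis=1, keepdims=True)          # (T, 1, 3)
    E = np.broadcast_to(centroids, orig.shape).reshape(T, -1)
    return 100.0 * np.linalg.norm(A - B) / np.linalg.norm(A - E)
```

A perfect reconstruction scores 0, and the centroid normalization makes the metric invariant to the animation's overall scale, which is why it is widely used to compare dynamic mesh codecs.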
