Abstract

We propose a 3D+time framework for modeling dynamic sequences of 3D facial shapes, representing realistic non-rigid motion during a performance. Our work extends neural 3D morphable models by learning a motion manifold using a transformer architecture. More specifically, we derive a novel transformer-based autoencoder that can model and synthesize 3D geometry sequences of arbitrary length. This transformer naturally determines frame-to-frame correlations required to represent the motion manifold, via the internal self-attention mechanism. Furthermore, our method disentangles the constant facial identity from the time-varying facial expressions in a performance, using two separate codes to represent neutral identity and the performance itself within separate latent subspaces. Thus, the model represents identity-agnostic performances that can be paired with an arbitrary new identity code and fed through our new identity-modulated performance decoder; the result is a sequence of 3D meshes for the performance with the desired identity and temporal length. We demonstrate how our disentangled motion model has natural applications in performance synthesis, performance retargeting, key-frame interpolation and completion of missing data, performance denoising and retiming, and other potential applications that include full 3D body modeling.
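To make the architecture described above concrete, the following is a minimal sketch (not the authors' implementation) of a transformer autoencoder that disentangles a pooled identity code from a per-frame performance code and recombines them in an identity-modulated decoder. It assumes PyTorch; the class name `PerformanceAutoencoder`, the per-frame feature dimension (a flattened vertex array), and all layer sizes are hypothetical choices for illustration.

```python
# Hedged sketch of a disentangled performance autoencoder, assuming PyTorch.
# Names and dimensions are illustrative, not taken from the paper.
import torch
import torch.nn as nn


class PerformanceAutoencoder(nn.Module):
    def __init__(self, feat_dim=3 * 5023, d_model=256, id_dim=64, perf_dim=128,
                 nhead=4, num_layers=4):
        super().__init__()
        self.embed = nn.Linear(feat_dim, d_model)
        # Transformer encoder: self-attention captures frame-to-frame correlations.
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)
        # Two heads project encoder features into separate latent subspaces.
        self.to_identity = nn.Linear(d_model, id_dim)       # constant identity code
        self.to_performance = nn.Linear(d_model, perf_dim)  # time-varying performance code
        # Identity-modulated decoder: every frame is conditioned on the identity code.
        self.decoder_in = nn.Linear(perf_dim + id_dim, d_model)
        dec_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, num_layers)
        self.to_mesh = nn.Linear(d_model, feat_dim)

    def encode(self, frames):                        # frames: (B, T, feat_dim)
        h = self.encoder(self.embed(frames))         # (B, T, d_model)
        identity = self.to_identity(h.mean(dim=1))   # (B, id_dim), pooled over time
        performance = self.to_performance(h)         # (B, T, perf_dim), identity-agnostic
        return identity, performance

    def decode(self, identity, performance):
        # Pair a performance of arbitrary length T with any identity code.
        T = performance.shape[1]
        ident = identity.unsqueeze(1).expand(-1, T, -1)
        h = self.decoder(self.decoder_in(torch.cat([performance, ident], dim=-1)))
        return self.to_mesh(h)                       # (B, T, feat_dim) mesh sequence

    def forward(self, frames):
        identity, performance = self.encode(frames)
        return self.decode(identity, performance)


# Retargeting example: transfer actor A's performance onto actor B's identity.
model = PerformanceAutoencoder()
seq_a = torch.randn(1, 30, 3 * 5023)     # 30-frame performance from actor A
seq_b = torch.randn(1, 10, 3 * 5023)     # any clip of actor B, used only for its identity
_, perf_a = model.encode(seq_a)
id_b, _ = model.encode(seq_b)
retargeted = model.decode(id_b, perf_a)  # B's identity driven by A's motion
```

The retargeting example at the end illustrates why the disentanglement matters: because the performance code is identity-agnostic, it can be decoded with a different identity code to drive a new face with the same motion.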
