Abstract

The traditional garment animation workflow relies on professional cloth simulators and requires manual editing by artists or animators, a process that is time-consuming and laborious. Synthesizing garment dynamics from high-level input parameters in a semi-automatic way not only helps bridge the gap between creative intent and technical implementation, but also lets artists focus on authoring the animated content. To that end, a variational auto-encoder-based garment animation synthesis method is presented. First, a set of motion sequences composed of different poses is sampled to generate a human body dataset. Second, a variational auto-encoder network is constructed to learn the probabilistic distribution of clothing deformation from garment motions under varying poses. In addition, a mesh Laplacian term is introduced into the loss function to preserve the wrinkle details of the synthesized garments. Constraints are then imposed on the latent space to control the garment shape to be generated. Finally, a refinement step resolves penetration between the body surface and the garment mesh, yielding realistic clothing deformations. The proposed method is evaluated qualitatively and quantitatively on the AMASS dataset from two aspects: body motion/shape-driven garment synthesis and garment animation authoring. The experimental results demonstrate that the proposed workflow produces visually realistic garments without noticeable artifacts, and that it generates temporally consistent garment dynamics under shape and pose variations, assisting artists in authoring the desired clothing deformations.
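The training objective described above combines the standard VAE reconstruction and KL terms with a mesh Laplacian term. The following PyTorch sketch is a minimal illustration of that combination, not the authors' implementation; it assumes single-mesh vertex tensors and a fixed-degree neighbor index table, and the function names and weights `w_kl`/`w_lap` are hypothetical:

```python
import torch
import torch.nn.functional as F

def laplacian(verts, neighbors):
    """Uniform mesh Laplacian: each vertex minus the mean of its neighbors.

    verts:     (V, 3) vertex positions
    neighbors: (V, K) neighbor indices (assumed padded to equal degree)
    """
    return verts - verts[neighbors].mean(dim=1)

def vae_loss(pred_verts, gt_verts, mu, logvar, neighbors,
             w_kl=1e-3, w_lap=1.0):
    # Per-vertex reconstruction error on the garment mesh.
    rec = F.mse_loss(pred_verts, gt_verts)
    # KL divergence of the approximate posterior from a standard normal.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Laplacian term: match differential coordinates to preserve wrinkles.
    lap = F.mse_loss(laplacian(pred_verts, neighbors),
                     laplacian(gt_verts, neighbors))
    return rec + w_kl * kl + w_lap * lap
```

Matching Laplacian (differential) coordinates in addition to absolute positions penalizes over-smoothed reconstructions, which is why such a term helps retain high-frequency wrinkle detail.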
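The penetration refinement step can likewise be sketched as projecting penetrating garment vertices a small offset `eps` outside the body surface along the nearest body normal. This brute-force nearest-vertex version is a hypothetical illustration under those assumptions, not the paper's actual refinement procedure:

```python
import torch

def resolve_penetration(garment_verts, body_verts, body_normals, eps=2e-3):
    # Nearest body vertex for every garment vertex (brute force for clarity).
    d = torch.cdist(garment_verts, body_verts)      # (Vg, Vb) distances
    idx = d.argmin(dim=1)
    nearest, n = body_verts[idx], body_normals[idx]
    # Signed offset along the body normal; negative means inside the body.
    s = ((garment_verts - nearest) * n).sum(dim=1)
    # Push only vertices closer than eps, up to exactly eps outside.
    push = torch.clamp(eps - s, min=0.0)
    return garment_verts + push.unsqueeze(1) * n
```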
