Abstract

Simulating temporal three-dimensional (3D) deformations of clothing worn by the human body is a core technology in computer graphics and plays a vital role in fields such as computer games, animation, and movies. Physics-based simulation and data-driven methods are the two mainstream technologies used to generate clothing deformation. However, when clothing animations must be generated quickly for different body shapes and motions, existing methods cannot balance efficiency and effectiveness. In this paper, we present a learning-based method that, given human body shape and motion, automatically synthesizes temporal 3D clothing deformations in real time. A temporal framework based on the transformer is designed to capture the correlation between clothing deformation and the shape features of the moving human body, as well as frame-level dependencies. A novel feature fusion strategy is used to combine the shape and motion features. We also post-process penetrations between the clothing and the human body, producing collision-free cloth deformation sequences. To evaluate the method, we build a human motion dataset based on the large-scale public human body dataset AMASS and further develop a clothing deformation dataset. We demonstrate qualitatively and quantitatively that our approach outperforms existing methods in generating temporal clothing deformation for varying body shapes and motions, and that it produces realistic deformations at interactive rates.
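
The abstract outlines a pipeline in which per-frame body shape and motion features are fused and passed to a transformer that models frame-level dependencies before decoding per-vertex garment displacements. The sketch below only illustrates that general structure; it is not the authors' implementation, and every name, dimension, and layer choice (e.g., GarmentDeformationNet, a 10-D shape code, a 72-D pose vector, concatenation-based fusion) is an illustrative assumption.

```python
# Minimal sketch (assumed, not the paper's released code) of a transformer-based
# temporal model: fuse shape and motion features per frame, model frame-level
# dependencies with a transformer encoder, decode per-vertex 3D displacements.
import torch
import torch.nn as nn


class GarmentDeformationNet(nn.Module):
    def __init__(self, shape_dim=10, pose_dim=72, d_model=256,
                 num_layers=4, num_heads=8, num_verts=4424):
        super().__init__()
        # Separate encoders for the two feature streams (shape and motion).
        self.shape_mlp = nn.Sequential(nn.Linear(shape_dim, d_model), nn.ReLU())
        self.pose_mlp = nn.Sequential(nn.Linear(pose_dim, d_model), nn.ReLU())
        # Simple concatenate-and-project fusion; stands in for the paper's
        # fusion strategy, whose exact form the abstract does not specify.
        self.fuse = nn.Linear(2 * d_model, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=num_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        # Map each frame's token to per-vertex 3D garment displacements.
        self.decoder = nn.Linear(d_model, num_verts * 3)
        self.num_verts = num_verts

    def forward(self, shape, pose):
        # shape: (B, T, shape_dim), pose: (B, T, pose_dim)
        f = self.fuse(torch.cat([self.shape_mlp(shape),
                                 self.pose_mlp(pose)], dim=-1))
        h = self.temporal(f)            # captures frame-level dependencies
        disp = self.decoder(h)          # (B, T, num_verts * 3)
        return disp.view(*disp.shape[:2], self.num_verts, 3)


if __name__ == "__main__":
    model = GarmentDeformationNet()
    beta = torch.randn(2, 30, 10)    # per-frame body shape parameters
    theta = torch.randn(2, 30, 72)   # per-frame pose / motion parameters
    offsets = model(beta, theta)     # (2, 30, 4424, 3) garment vertex offsets
    print(offsets.shape)
```

In such a design, the predicted offsets would be added to a template garment mesh, with a separate collision post-processing step (as the abstract describes) pushing any penetrating vertices outside the body surface.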
