Abstract

Segmentation and classification of clothes in real 3D data are particularly challenging due to the extreme variation in their shapes, even within the same garment category, induced by the underlying human subject. Several data-driven methods try to cope with this problem; however, they must contend with the scarcity of available data needed to generalize to diverse real-world instances. For this reason, we present GIM3D plus (Garments In Motion 3D plus), a synthetic dataset of clothed 3D human characters in different poses. The over 5000 3D models in this dataset are generated through physical simulation of clothes with different fabrics, sizes, and tightness, draped on animated human avatars representing different subjects in diverse poses. Our dataset comprises single meshes created to simulate 3D scans, with labels for the separate clothes and the visible body parts. We also evaluate the use of GIM3D plus as a training set for garment segmentation and classification tasks using state-of-the-art data-driven methods for both meshes and point clouds.
