3D Clothed Human Reconstruction From One In-the-Wild RGB Image

  • Abstract
  • Literature Map
  • Similar Papers
Abstract

In recent years, much progress has been made in the field of 3D clothed human reconstruction. However, most existing methods perform poorly when reconstructing from in-the-wild images due to the domain gap between the synthetic images of training datasets and in-the-wild images. In this study, a modular model, consisting of a cloth encoder, a body encoder, and a cloth generator, is proposed to perform 3D clothed human reconstruction from a single in-the-wild RGB image. In particular, we introduce the adaptive aggregation of convolution and multi-head attention into the cloth encoder and adjust the segmentation at the preprocessing stage. According to experiments on the MSCOCO and 3DPW datasets, the proposed method achieves state-of-the-art performance on 3D clothed human reconstruction from in-the-wild images compared with previous methods.
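The abstract does not specify how the convolutional and attention branches are combined inside the cloth encoder. As an illustrative sketch only (all function names, shapes, and the gating scheme below are assumptions, not the paper's actual architecture), one common form of "adaptive aggregation" blends a convolutional branch and a self-attention branch with a learned, input-dependent gate:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def conv_branch(x, w):
    # 1x1 convolution over N flattened positions = per-position linear map
    return x @ w

def attention_branch(x, wq, wk, wv):
    # single-head self-attention over the N positions (multi-head in practice)
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return scores @ v

def adaptive_aggregate(x, params):
    # hypothetical adaptive aggregation: a sigmoid gate in [0, 1],
    # computed from the input, blends the two branches per position
    conv_out = conv_branch(x, params["w_conv"])
    attn_out = attention_branch(x, params["wq"], params["wk"], params["wv"])
    gate = 1.0 / (1.0 + np.exp(-(x @ params["w_gate"])))  # shape (N, 1)
    return gate * conv_out + (1.0 - gate) * attn_out

N, D = 16, 8  # assumed: N flattened spatial positions, D channels
x = rng.standard_normal((N, D))
params = {
    "w_conv": rng.standard_normal((D, D)) * 0.1,
    "wq": rng.standard_normal((D, D)) * 0.1,
    "wk": rng.standard_normal((D, D)) * 0.1,
    "wv": rng.standard_normal((D, D)) * 0.1,
    "w_gate": rng.standard_normal((D, 1)) * 0.1,
}
y = adaptive_aggregate(x, params)
print(y.shape)
```

The gate lets each position lean on local convolutional features or global attention context as needed; the paper's actual aggregation rule may differ.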
