Abstract

Personalized online clothing customization must be designed around the user's body shape parameters, yet traditional anthropometry obtains these parameters slowly and with high error. To tackle this issue, a rapid 3D human body reconstruction method based on multi-perspective silhouettes is proposed for personalized clothing customization. A multilevel dilated convolution semantic network (MDS-Net) extracts global and local features from the human silhouettes to perform semantic segmentation. A torso parameter extraction network (TPE-Net) extracts shape and pose parameters from the multi-perspective human body segmentation maps. Principal component analysis (PCA) extracts semantic features in the latent space of the 3D human body model, and these features are mapped to the shape and posture parameters output by TPE-Net; thus, the 3D human body model is reconstructed. The method was verified in a PyTorch environment on four datasets. The experiments demonstrate that MDS-Net achieves a mIoU of 0.881 on the test set, providing both overall segmentation and local detail preservation. TPE-Net achieves an accuracy of 0.74 in shape parameter prediction on the test set, and the predicted joint offset distance is proportional to the joint's index in the kinematic tree. The entire 3D human body reconstruction method is verified on real cases.
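The PCA-based latent-space reconstruction described above can be sketched as follows. This is a minimal illustration with synthetic vertex data; the mesh size, number of components, and all variable names are assumptions for demonstration, not details from the paper.

```python
import numpy as np

# Synthetic "training" meshes: 50 bodies, each with 100 vertices in 3D,
# flattened to 300-dimensional vectors (real body models use thousands).
rng = np.random.default_rng(0)
meshes = rng.normal(size=(50, 300))

# PCA via SVD of the mean-centered data: the principal components span
# the latent shape space of the 3D human body model.
mean_shape = meshes.mean(axis=0)
centered = meshes - mean_shape
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
components = Vt[:10]  # keep the 10 leading shape components

# In the paper's pipeline, a network such as TPE-Net would predict these
# coefficients from segmentation maps; here we simply project a mesh
# onto the components to obtain them.
coeffs = components @ (meshes[0] - mean_shape)

# Reconstruct the full mesh from the low-dimensional shape parameters.
reconstructed = mean_shape + components.T @ coeffs
print(reconstructed.shape)  # (300,)
```

The key point is that once the component basis is fixed, a body mesh is fully determined by a short coefficient vector, which is what makes regressing those coefficients from 2D silhouettes tractable.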
