Abstract

Three-dimensional human shape reconstruction is important in many applications, such as virtual or augmented reality (VR/AR), virtual clothes fitting, and healthcare. In this paper, we propose a learning-based method for reconstructing a whole-body point cloud from a single front-view human-depth image. Because actual depth images typically suffer from noise and missing data, an accurate point cloud cannot reasonably be obtained by simply predicting a back-view depth image. To solve this problem, we propose to use convolutional neural networks that not only predict a back-view depth image but also refine the input front-view depth image. To train the networks, we propose a carefully designed method for generating synthetic but realistic human-depth images with noise and missing data. Experiments show that the proposed method is effective for obtaining seamless whole-body point clouds. In addition, the experiments show that the networks trained on the synthetic depth images can be applied directly to actual depth images.
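The abstract outlines a pipeline in which a refined front-view depth image and a predicted back-view depth image are fused into a whole-body point cloud. The paper's exact fusion step is not reproduced here; as a minimal sketch of the general idea, assuming both depth maps are expressed in the same camera frame with known pinhole intrinsics, one could back-project each map and concatenate the resulting points. All function and variable names below (depth_to_points, refined_front, predicted_back) are illustrative placeholders, not the authors' implementation.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (H x W, in metres) into camera-space 3D points.

    Pixels with zero depth (missing data) are discarded.
    """
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]          # pixel coordinates (row, column)
    valid = depth > 0                  # keep only pixels with measured depth
    z = depth[valid]
    x = (u[valid] - cx) * z / fx       # pinhole back-projection
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=-1)

# Hypothetical usage: `refined_front` and `predicted_back` stand in for the
# refined front-view and predicted back-view depth images produced by the
# networks; fx, fy, cx, cy are the intrinsics of the capturing RGB-D camera.
# front_pts = depth_to_points(refined_front, fx, fy, cx, cy)
# back_pts  = depth_to_points(predicted_back, fx, fy, cx, cy)
# whole_body = np.concatenate([front_pts, back_pts], axis=0)
```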

Highlights

  • 3D human shape reconstruction plays a central role in many applications, such as virtual or augmented reality (VR/AR), virtual clothes fitting, and healthcare

  • Our experiments show that the networks trained with our realistic training data are more effective for obtaining accurate whole-body point clouds from actual depth images

Summary

Introduction

3D human shape reconstruction plays a central role in many applications, such as VR/AR, virtual clothes fitting, and healthcare. To acquire a 3D human shape model, one can use an active 3D scanner [1], a multi-camera system [2], several RGB-depth (RGB-D) cameras [3]–[6], or a single color or RGB-D camera [7]–[36]. Among these options, a single RGB-D camera has the advantage of no depth ambiguity, which is a fundamental problem when using a single color camera. Moreover, an RGB-D camera can be installed in narrow places as easily as a color camera.
