Abstract

Object 3D reconstruction from a single-view image is an ill-posed problem: inferring the self-occluded parts of an object makes 3D reconstruction a challenging and ambiguous task. In this paper, we propose a novel neural network, named 3D-ReConstnet, that generates a 3D point cloud model of an object from a single-view image in an end-to-end fashion. 3D-ReConstnet uses a residual network to extract the features of a 2D input image into a feature vector. To deal with the uncertainty of the self-occluded parts of an object, 3D-ReConstnet predicts the point cloud from a Gaussian probability distribution learned from the feature vector. It can generate a determined 3D output for a 2D image that carries sufficient information, and it can also generate semantically different 3D reconstructions for the self-occluded or ambiguous parts of an object. We evaluated 3D-ReConstnet on the ShapeNet and Pix3D datasets and obtained improved results.
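The pipeline described in the abstract (encoder to feature vector, feature vector to Gaussian, sample decoded to a point cloud) can be sketched roughly as follows. This is an illustrative NumPy mock-up under our own assumptions — random stand-in weights instead of a trained residual network, a 128-dimensional feature, 1024 output points — not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(image, feat_dim=128):
    """Stand-in for the residual-network encoder: image -> feature vector."""
    w = rng.standard_normal((image.size, feat_dim)) / np.sqrt(image.size)
    return image.reshape(-1) @ w

def gaussian_params(feature):
    """Predict mean and log-variance of the latent Gaussian from the feature."""
    d = feature.shape[0]
    mu = feature @ (rng.standard_normal((d, d)) / np.sqrt(d))
    log_var = feature @ (rng.standard_normal((d, d)) / np.sqrt(d))
    return mu, log_var

def decode(z, n_points=1024):
    """Stand-in decoder: latent sample -> N x 3 point cloud."""
    w = rng.standard_normal((z.shape[0], n_points * 3)) / np.sqrt(z.shape[0])
    return (z @ w).reshape(n_points, 3)

def reconstruct(image, deterministic=False):
    feature = encode(image)
    mu, log_var = gaussian_params(feature)
    if deterministic:
        z = mu  # determined output for an unambiguous view: take the mean
    else:
        # Reparameterization: sample z = mu + sigma * eps, so different
        # samples yield different plausible shapes for ambiguous views.
        eps = rng.standard_normal(mu.shape)
        z = mu + np.exp(0.5 * log_var) * eps
    return decode(z)

image = rng.random((32, 32, 3))
pts = reconstruct(image)
print(pts.shape)  # (1024, 3)
```

In the real network the encoder and decoder are trained jointly, so sampling the latent Gaussian at test time produces semantically different but plausible completions of the occluded parts; with random weights the sketch only demonstrates the data flow and output shape.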

Highlights

  • Reconstructing the shape of 3D objects from a single-view image is a fundamental task in robot navigation and grasping, CAD, virtual reality, and other applications

  • Voxel representation suffers from two problems: sparse information and high computational complexity, especially in high-resolution 3D object processing

  • The experimental results show that 3D-ReConstnet outperforms state-of-the-art methods in the task of single-view 3D reconstruction



Introduction

Reconstructing the shape of 3D objects from a single-view image is a fundamental task in robot navigation and grasping, CAD, virtual reality, and other applications, and data-driven 3D object reconstruction has attracted increasing attention. There are two common 3D object representations: voxels and point clouds. Voxel-based neural networks [1]–[3] reconstruct 3D objects by generating voxelized three-dimensional occupancy grids, but voxel representation suffers from two problems: sparse information and high computational complexity, especially in high-resolution 3D object processing. To make up for these deficiencies, Fan et al. [4] proposed point cloud-based 3D object reconstruction, a deep learning method for point cloud generation. The 3D point cloud of an object is composed of three-dimensional points uniformly sampled from the surface of the object. The point cloud model offers scalability and flexibility, so we use point clouds as our 3D representation.
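The uniform surface sampling mentioned above is usually done by area-weighted triangle sampling: pick a face with probability proportional to its area, then draw uniform barycentric coordinates inside it. A minimal sketch, assuming a toy tetrahedron mesh of our own (not from the paper's datasets):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_surface(vertices, faces, n_points):
    """Sample n_points uniformly from the surface of a triangle mesh."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    # Triangle areas via the cross product; choose faces proportionally to area.
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Uniform barycentric coordinates inside each chosen triangle;
    # fold samples with u + v > 1 back into the triangle.
    u, v = rng.random(n_points), rng.random(n_points)
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    return v0[idx] + u[:, None] * (v1[idx] - v0[idx]) + v[:, None] * (v2[idx] - v0[idx])

# Toy mesh: a unit tetrahedron.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
cloud = sample_surface(verts, faces, 1024)
print(cloud.shape)  # (1024, 3)
```

Without the area weighting, small faces would be oversampled relative to large ones, so the resulting point cloud would not be uniform over the surface.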

