Abstract

Although deep network-based methods outperform traditional 3D reconstruction methods, which require multi-view images or class labels to recover the full 3D geometry, they may produce incomplete or unfaithful reconstructions when parts of the 3D object are occluded. To address these issues, we propose the Depth-preserving Latent Generative Adversarial Network (DLGAN), which consists of a 3D Encoder-Decoder based GAN (EDGAN, serving as generator and discriminator) and an Extreme Learning Machine (ELM, serving as classifier), for 3D reconstruction from a monocular depth image of an object. First, EDGAN encodes the 2.5D voxel grid representation of the input image into a latent vector and generates an initial 3D occupancy grid under standard GAN losses, a latent vector loss, and a depth loss. For the latent vector loss, we design a 3D deep AutoEncoder (AE) to learn a target latent vector from the ground truth 3D voxel grid and use this vector to penalize the latent vector encoded from the input 2.5D data. For the depth loss, we use the input 2.5D data to penalize the initial 3D voxel grid from the corresponding 2.5D views. Afterwards, the ELM transforms the floating-point values of the initial 3D voxel grid into binary values under a binary reconstruction loss. Experimental results show that DLGAN not only outperforms several state-of-the-art methods by a large margin on both a synthetic dataset and a real-world dataset, but also accurately predicts more occluded parts of 3D objects without requiring class labels.
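To make the loss composition concrete, the following is a minimal PyTorch-style sketch of how the generator objective described above could be assembled. All names (latent_vector_loss, depth_loss, generator_loss, the weighting factors, and the max-projection used as the 2.5D view) are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the DLGAN generator objective: adversarial term
# plus the latent vector loss and the depth loss described in the abstract.
import torch
import torch.nn.functional as F


def latent_vector_loss(z_pred, z_target):
    # Penalize the latent vector encoded from the 2.5D input against the
    # target latent vector learned by the 3D AE from the ground truth grid.
    return F.mse_loss(z_pred, z_target)


def depth_loss(voxels_pred, voxels_25d):
    # Penalize the initial 3D voxel grid from a 2.5D view; here a simple
    # max-projection along the last spatial axis stands in for that view.
    proj_pred = voxels_pred.amax(dim=-1)   # (B, D, H) projected occupancy
    proj_input = voxels_25d.amax(dim=-1)
    return F.mse_loss(proj_pred, proj_input)


def generator_loss(d_fake_logits, z_pred, z_target, voxels_pred, voxels_25d,
                   w_latent=1.0, w_depth=1.0):
    # Standard non-saturating GAN loss plus the two auxiliary penalties;
    # w_latent and w_depth are assumed weighting hyperparameters.
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    return (adv
            + w_latent * latent_vector_loss(z_pred, z_target)
            + w_depth * depth_loss(voxels_pred, voxels_25d))
```

The final ELM binarization step, which maps the floating-point occupancy grid to binary values under a binary reconstruction loss, would be applied after this generator stage.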
