Abstract

Learning-based approaches to the 3D reconstruction problem have attracted researchers, owing to the excellent performance such methods have shown in image segmentation and image classification. Interest in learning-based 3D reconstruction has also grown because of publicly shared 3D datasets such as ShapeNet and ModelNet. Several deep learning methods rely on voxel representations; however, voxel-based methods are memory-inefficient and struggle to produce high-resolution 3D results. An alternative is the point cloud representation, an unstructured set of 3D points on the object's surface. Learning such irregular structures is challenging, however, because point clouds are unordered. This paper proposes a new framework for 3D reconstruction from 2D images that introduces a 3D template-based point generation network. Given an input image, the network infers a 3D template and generates a 3D point cloud representing the reconstructed object. The network takes two inputs, the encoded 2D image and the encoded 3D point template, produced by an image classification module and a 3D template generation module, respectively. Experiments on the ShapeNet dataset show better performance than existing methods in terms of the Chamfer distance between the 3D ground truth and the reconstructed point clouds.
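For readers who want a concrete picture of the two-input design, the sketch below is a minimal PyTorch rendering of what the abstract describes, not the authors' implementation: the class name TemplateBasedPointGenerator, the flattened encoders, the 64x64 input size, and all layer widths are assumptions; the abstract specifies only that an encoded 2D image and an encoded 3D point template are fused to generate the output point cloud.

    import torch
    import torch.nn as nn

    class TemplateBasedPointGenerator(nn.Module):
        """Illustrative two-input generator: fuses an image code and a
        template code, then decodes a point cloud. All sizes here are
        assumptions, not the paper's configuration."""

        def __init__(self, feat_dim=512, tmpl_points=1024, out_points=1024):
            super().__init__()
            # Stand-in image encoder; a real system would use a CNN backbone.
            self.image_encoder = nn.Sequential(
                nn.Flatten(), nn.Linear(3 * 64 * 64, feat_dim), nn.ReLU())
            # Stand-in template encoder over flattened template coordinates.
            self.template_encoder = nn.Sequential(
                nn.Flatten(), nn.Linear(tmpl_points * 3, feat_dim), nn.ReLU())
            # Decoder maps the fused code to out_points 3D coordinates.
            self.decoder = nn.Sequential(
                nn.Linear(2 * feat_dim, 1024), nn.ReLU(),
                nn.Linear(1024, out_points * 3))
            self.out_points = out_points

        def forward(self, image, template):
            # image: (B, 3, 64, 64); template: (B, tmpl_points, 3)
            code = torch.cat([self.image_encoder(image),
                              self.template_encoder(template)], dim=1)
            return self.decoder(code).view(-1, self.out_points, 3)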
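The evaluation metric, Chamfer distance, is a standard point-set measure. A minimal NumPy sketch, assuming the common mean-of-squared-nearest-neighbor form (the paper may use a sum or unsquared variant):

    import numpy as np

    def chamfer_distance(p1, p2):
        """Symmetric Chamfer distance between point sets (N, 3) and (M, 3)."""
        # Pairwise squared Euclidean distances, shape (N, M).
        d2 = np.sum((p1[:, None, :] - p2[None, :, :]) ** 2, axis=-1)
        # Average each point's squared distance to its nearest neighbor
        # in the other set, in both directions.
        return d2.min(axis=1).mean() + d2.min(axis=0).mean()

For example, chamfer_distance(pred, gt) on two (1024, 3) arrays returns a single scalar; lower values indicate a closer match between the reconstruction and the ground truth.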
