Abstract

3D object reconstruction from arbitrary-view intensity images is a challenging but meaningful research topic in computer vision. The main limitations of existing approaches are that they lack complete and efficient prior information and may fail under severe occlusion or partial observation of 3D objects, which can produce incomplete and unreliable reconstructions. To reconstruct structure and recover missing or unseen parts of objects, category priors and intrinsic geometry relations are particularly useful and necessary during the 3D reconstruction process. In this paper, we propose the Category-and-Intrinsic-Geometry Guided Network (CIGNet) for coarse-to-fine 3D reconstruction from arbitrary-view intensity images, leveraging category priors and intrinsic geometry relations. CIGNet combines a category-prior-guided reconstruction module with an intrinsic-geometry-relation-guided refinement module. In the first module, we leverage semantic class context by adding a supervision term over object categories to output coarse reconstructed results. In the second module, we model the coarse 3D volumetric data as 2D slices and exploit the intrinsic geometry relations between them to design graph structures over the coarse 3D volumes, enabling graph-based refinement. CIGNet accomplishes high-quality 3D reconstruction by exploring the intra-category characteristics of objects as well as the intrinsic geometry relations of each object, both of which serve as useful complements to the visual information of images, in a coarse-to-fine fashion. Extensive quantitative and qualitative experiments on the synthetic dataset ShapeNet and the real-world datasets Pix3D, Statue Model Repository, and BlendedMVS indicate that CIGNet outperforms several state-of-the-art methods in terms of accuracy and detail recovery.
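To make the slice-and-graph idea in the refinement module concrete, the following is a minimal illustrative sketch (not the paper's actual implementation): it assumes a coarse voxel volume is split into 2D depth slices, each slice becomes a graph node with flattened-slice features, and adjacent slices are linked by a simple chain-graph adjacency. The slicing axis, node features, and adjacency design here are all assumptions for illustration.

```python
import numpy as np

def volume_to_slice_graph(volume):
    """Model a coarse voxel volume (D, H, W) as a sequence of 2D slices
    and build a chain-graph adjacency linking neighbouring slices.

    Illustrative only: the real CIGNet graph construction may differ.
    """
    depth = volume.shape[0]
    # Each depth slice becomes one graph node; features = flattened slice.
    nodes = volume.reshape(depth, -1)            # shape (D, H*W)
    # Chain adjacency: connect each slice to its immediate neighbours.
    adj = np.zeros((depth, depth))
    idx = np.arange(depth - 1)
    adj[idx, idx + 1] = 1.0
    adj[idx + 1, idx] = 1.0
    # Add self-loops and row-normalise, as is common in graph convolutions.
    adj_hat = adj + np.eye(depth)
    adj_norm = adj_hat / adj_hat.sum(axis=1, keepdims=True)
    return nodes, adj_norm

vol = np.random.rand(8, 32, 32)                  # toy coarse volume
nodes, adj = volume_to_slice_graph(vol)
print(nodes.shape, adj.shape)                    # (8, 1024) (8, 8)
```

A graph network operating on `(nodes, adj)` could then propagate information between neighbouring slices to refine the coarse volume.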
