Abstract

Deep learning-based single-view 3D reconstruction is an active topic in computer vision, but predicting a realistic 3D point cloud from a single image is an ill-posed problem. Most recent single-view point cloud prediction methods use a straight-through structure, which loses part of the feature information and local detail of the resulting point clouds and leads to unsatisfactory visual quality of the reconstruction. In this paper, a Feature-Enhanced 3D point cloud generation Network (3D-FENet) from a single image is proposed. An edge extraction module is adopted to enhance the feature information of the RGB image. In the point cloud generation stage, a point cloud pyramid is designed that fuses low-resolution and high-resolution point clouds to enhance the local details of the generated point clouds. In the fine-tuning stage, a differentiable projection module fine-tunes the whole network using 2D projections of the reconstructed point clouds. Experimental results show that the proposed method outperforms state-of-the-art approaches.
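The differentiable projection idea mentioned above can be illustrated with a minimal sketch: render the predicted point cloud as a 2D silhouette so it can be compared against a mask of the input view. This is a hypothetical NumPy illustration of the general technique, not the paper's actual module; the projection here is a simple orthographic scatter onto a pixel grid.

```python
import numpy as np

def project_points(points, img_size=64):
    """Orthographic projection of an (N, 3) point cloud onto a 2D grid.

    Hypothetical sketch: each point deposits mass into the pixel it
    falls in, producing a soft silhouette that a 2D loss can compare
    against a ground-truth mask. (The paper's projection module may
    differ; this only conveys the general idea.)
    """
    # Normalize x, y coordinates to [0, 1], then scale to pixel indices.
    xy = points[:, :2]
    xy = (xy - xy.min(axis=0)) / (np.ptp(xy, axis=0) + 1e-8)
    idx = np.clip((xy * (img_size - 1)).astype(int), 0, img_size - 1)

    # Accumulate point mass per pixel to form the silhouette image.
    img = np.zeros((img_size, img_size))
    np.add.at(img, (idx[:, 1], idx[:, 0]), 1.0)
    return np.clip(img, 0.0, 1.0)

# Example: project a random unit-cube point cloud.
pts = np.random.rand(1024, 3)
silhouette = project_points(pts)
print(silhouette.shape)  # (64, 64)
```

In a full pipeline, a pixel-wise loss between this silhouette and the input-view mask would be backpropagated to fine-tune the generator end to end; a practical implementation would use a soft (e.g. Gaussian) splat rather than hard pixel assignment to keep the projection differentiable.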
