Abstract

The goal of Multi-View Stereo (MVS) is to reconstruct a 3D point cloud model of a scene from multiple views. With the development of deep learning, learning-based methods have achieved remarkable results. However, existing methods ignore fine-grained, low-level features, which degrades the quality of the reconstructed model, especially its completeness. In addition, current methods still consume large amounts of memory because of their use of 3D convolutions. To this end, this paper proposes MG-MVSNet, a Multiple Granularities Feature Fusion Network for Multi-View Stereo: an end-to-end depth estimation network that combines global and local features through fine-grained multi-feature fusion. First, we propose a dense feature adaptive connection module that adaptively fuses the global and local features of the scene, providing a more complete and effective feature map for inferring a more detailed depth map and making the final model more complete. Second, to further improve the accuracy and completeness of the reconstructed point cloud, we introduce normal and edge losses instead of relying solely on a depth loss as in existing methods, which makes the network more sensitive to small depth structures. Finally, we propose a distributed 3D convolution to replace traditional 3D convolution, which reduces memory consumption. Experimental results on the DTU and Tanks & Temples datasets demonstrate that the proposed method achieves state-of-the-art performance, confirming the accuracy and effectiveness of MG-MVSNet.
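The abstract does not give the exact formulation of the combined objective, but the idea of supplementing a depth loss with normal and edge terms can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names and weights are hypothetical, depth gradients stand in as a proxy for surface normals, and gradient magnitude differences stand in for the edge term.

```python
import numpy as np

def depth_loss(pred, gt):
    # Plain L1 loss on the predicted depth map.
    return np.abs(pred - gt).mean()

def normal_loss(pred, gt):
    # Proxy for a surface-normal term: penalize mismatched depth
    # gradients, which makes the network sensitive to local orientation.
    pgy, pgx = np.gradient(pred)
    ggy, ggx = np.gradient(gt)
    return (np.abs(pgx - ggx) + np.abs(pgy - ggy)).mean()

def edge_loss(pred, gt):
    # Proxy for an edge term: compare gradient magnitudes, which
    # emphasizes depth discontinuities and small depth structures.
    pgy, pgx = np.gradient(pred)
    ggy, ggx = np.gradient(gt)
    return np.abs(np.hypot(pgx, pgy) - np.hypot(ggx, ggy)).mean()

def total_loss(pred, gt, w_normal=1.0, w_edge=1.0):
    # Combined objective: depth + weighted normal and edge terms.
    # The weights here are illustrative placeholders.
    return (depth_loss(pred, gt)
            + w_normal * normal_loss(pred, gt)
            + w_edge * edge_loss(pred, gt))
```

In a training setting these terms would be computed on predicted and ground-truth depth maps per batch; the extra terms are zero only when both the depths and their spatial derivatives agree, which is what makes the supervision stricter around fine structures.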
