Abstract

Multi-view-based 3D reconstruction aims to recover the 3D structure of objects in space from two-dimensional images. In this paper, we propose a new multi-view stereo network that reconstructs scenes robustly. To enhance the feature representation ability of Point-MVSNet, we introduce a pyramid attention module. Specifically, we apply an attention mechanism to the multi-scale feature pyramid to capture larger receptive fields and richer information. Instead of constructing a feature pyramid as the input, the outputs of the pyramid attention module at each scale are fed directly to the next layer. The network then produces a high-quality depth estimate for 3D reconstruction, refined from sparse to dense by an iterative refinement scheme. We evaluate 3D reconstruction quality against existing state-of-the-art methods on the DTU dataset. The experimental results show that our method achieves the best overall quality among the compared methods, demonstrating its effectiveness. Finally, we use data collected by mobile devices to perform 3D reconstruction with a combination of traditional and learning-based methods, offering a direction for 3D reconstruction technology on mobile devices.
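The pyramid attention idea described above can be sketched roughly as follows. This is an illustrative NumPy sketch, not the authors' implementation: the `channel_attention` helper, the squeeze-and-excitation-style gating, and the tensor shapes are all assumptions made for the example. It shows attention applied independently at each scale of a feature pyramid, with each attended map passed on directly rather than merged back into a single pyramid input.

```python
import numpy as np

def channel_attention(feat):
    """Channel-wise attention gate (illustrative, assumed form).

    feat: (C, H, W) feature map. Global-average-pool each channel,
    squash the statistics through a sigmoid, and rescale the channels.
    """
    pooled = feat.mean(axis=(1, 2))          # (C,) per-channel statistic
    weights = 1.0 / (1.0 + np.exp(-pooled))  # sigmoid gate in (0, 1)
    return feat * weights[:, None, None]     # reweight each channel map

def pyramid_attention(pyramid):
    """Apply attention at every pyramid scale; each attended map feeds
    the next layer directly instead of rebuilding an input pyramid."""
    return [channel_attention(level) for level in pyramid]

# Toy 3-level pyramid: 8 channels, spatial size halved at each level.
pyramid = [np.random.randn(8, 32 // (2 ** i), 32 // (2 ** i))
           for i in range(3)]
attended = pyramid_attention(pyramid)
print([a.shape for a in attended])  # shapes are preserved per scale
```

In a real network the gate would be learned (e.g. small fully connected layers before the sigmoid) and the features would come from a CNN backbone; the sketch only illustrates the per-scale attention-then-forward data flow.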
