Abstract

In this paper, we propose a practical three-dimensional (3D) real-scene reconstruction framework named Deep3D, paired with a deep learning based multi-view stereo (MVS) matching model named adaptive multi-view aggregation matching (Ada-MVS), to obtain a 3D textured mesh model from multi-view oblique aerial images. Deep3D is the first deep learning based framework for 3D scene reconstruction, in which aerial triangulation and view selection are first performed on the input images, and the depth map of each image is then inferred using the pretrained Ada-MVS model. All the inferred depth maps are then fused into a dense point cloud after filtering the outliers. Finally, the 3D textured mesh is extracted from the dense 3D points as the final product. In the Ada-MVS model, a novel adaptive inter-view aggregation module is proposed to address the inconsistent information among oblique views and to fuse the multi-view costs into a robust cost volume. A lightweight recurrent regularization module is also designed for high-efficiency processing of high-capacity aerial images with large depth variations. Moreover, as oblique aerial image datasets are currently scarce, we built a large-scale synthetic multi-view oblique aerial image dataset (the WHU-OMVS dataset) for deep learning based model training and methodology evaluation for the task of 3D scene reconstruction. The experimental results show that, firstly, the proposed Ada-MVS model has clear advantages over several relevant learning-based MVS methods when applied to high-capacity oblique aerial images. Secondly, a comprehensive comparison with popular commercial software packages and open-source solutions shows that the proposed Deep3D framework outperforms all the other solutions in terms of reconstruction quality, and outperforms all the open-source solutions and some of the commercial packages in terms of efficiency on the WHU-OMVS dataset. Thirdly, the Deep3D framework shows stable generalization ability and excellent performance when applied to other oblique or nadir aerial images, without any further fine-tuning. The dataset and code will be available at http://gpcv.whu.edu.cn/data.
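
The following is a minimal sketch, in Python, of the pipeline steps named in the abstract (view selection, per-view depth inference with the pretrained Ada-MVS model, outlier filtering and fusion, and mesh extraction). All class and function names below (View, select_views, fuse_depth_maps, extract_textured_mesh, ada_mvs.infer) are illustrative placeholders assumed for this sketch, not the authors' actual code or API; the real components are described in the paper itself.

```python
# Hypothetical sketch of the Deep3D pipeline outlined in the abstract.
# Every name here is a placeholder, not the authors' implementation.

from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class View:
    """One oblique aerial image with its orientation from aerial triangulation."""
    image: np.ndarray       # H x W x 3 image array
    intrinsics: np.ndarray  # 3 x 3 camera matrix
    pose: np.ndarray        # 4 x 4 camera-to-world transform


def select_views(ref: View, views: List[View], k: int = 4) -> List[View]:
    """Placeholder: choose k neighbouring source views for a reference view."""
    raise NotImplementedError


def fuse_depth_maps(depth_maps: List[np.ndarray], views: List[View]) -> np.ndarray:
    """Placeholder: filter outliers and merge per-view depths into an N x 3 point cloud."""
    raise NotImplementedError


def extract_textured_mesh(points: np.ndarray, views: List[View]):
    """Placeholder: build the final textured mesh from the dense points."""
    raise NotImplementedError


def deep3d_reconstruct(views: List[View], ada_mvs):
    """Run the steps outlined in the abstract, assuming aerial triangulation
    has already recovered the camera poses stored in each View."""
    # 1. View selection: pick suitable source views for each reference image.
    groups = [(ref, select_views(ref, views)) for ref in views]

    # 2. Depth inference: the pretrained Ada-MVS model predicts one depth map
    #    per reference view from its neighbouring source views.
    depth_maps = [ada_mvs.infer(ref, srcs) for ref, srcs in groups]

    # 3. Fusion: filter inconsistent depths and merge the rest into a dense point cloud.
    points = fuse_depth_maps(depth_maps, views)

    # 4. Surface extraction: produce the 3D textured mesh as the final product.
    return extract_textured_mesh(points, views)
```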
