Recent developments in deep learning have boosted the performance of dense stereo reconstruction. However, state-of-the-art deep learning-based stereo matching methods are mainly trained on close-range synthetic images, so their application to aerial photogrammetry and remote sensing is currently far from straightforward. In this paper, we propose a new disparity estimation network for stereo matching and investigate how well it generalizes to aerial images. First, we propose an end-to-end deep learning network for stereo matching, regularized by disparity gradients, whose refinement module combines a residual cost volume with a reconstruction error volume and is trained with multiple losses. We present a comprehensive analysis of the influence of these losses. Second, building on this network trained with synthetic close-range data, we propose a new pipeline for matching high-resolution aerial imagery. The experimental results show that the proposed network improves the disparity accuracy by up to 40%, in terms of errors larger than 1 px, compared with results obtained without the refinement network, especially in areas containing small, detailed objects. In addition, qualitative and quantitative experiments show that our model, pre-trained on a synthetic stereo dataset, achieves very competitive sub-pixel geometric accuracy on aerial images. These results confirm that the domain gap between synthetic close-range and real aerial images can be satisfactorily bridged with the proposed deep learning method for dense image matching.
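To make the idea of disparity-gradient regularization concrete, the following is a minimal sketch of a supervised loss that combines a per-pixel disparity term with a penalty on gradient differences between the predicted and ground-truth disparity maps. It is not the paper's exact formulation: the smooth-L1 choice and the balancing factor `grad_weight` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def disparity_gradient_loss(pred_disp, gt_disp, grad_weight=0.5):
    """Per-pixel disparity loss plus a penalty on disparity-gradient errors.

    pred_disp, gt_disp: (B, 1, H, W) tensors.
    grad_weight: hypothetical balancing factor, not taken from the paper.
    """
    # Standard per-pixel disparity term.
    data_term = F.smooth_l1_loss(pred_disp, gt_disp)

    # Finite-difference gradients in x and y.
    def gradients(d):
        dx = d[:, :, :, 1:] - d[:, :, :, :-1]
        dy = d[:, :, 1:, :] - d[:, :, :-1, :]
        return dx, dy

    pdx, pdy = gradients(pred_disp)
    gdx, gdy = gradients(gt_disp)

    # Penalize deviations between predicted and true disparity gradients,
    # encouraging sharp, consistent depth edges around small objects.
    grad_term = F.smooth_l1_loss(pdx, gdx) + F.smooth_l1_loss(pdy, gdy)

    return data_term + grad_weight * grad_term
```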