Abstract

Building extraction from remote sensing images is of great importance in urban planning. Yet it remains a long-standing problem due to many complicating factors, such as varying scales and complex backgrounds. This paper proposes a novel supervised building extraction method via deep deconvolution neural networks (DeconvNet). Our method consists of three steps. First, we preprocess the multi-source remote sensing images provided by the IEEE GRSS Data Fusion Contest. A high-quality Vancouver building dataset is created from pansharpened images whose ground truth is obtained from the OpenStreetMap project. Then, we pretrain a deep deconvolution network on the public large-scale Massachusetts building dataset, which is further fine-tuned on two band combinations (RGB and NRG) of our dataset, respectively. Finally, the output saliency maps of the two fine-tuned models are fused to produce the final building extraction result. Extensive experiments on our Vancouver building dataset demonstrate the effectiveness and efficiency of the proposed method. To the best of our knowledge, this is the first work to use deconvolution networks for building extraction from remote sensing images.
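
The abstract does not specify how the two models' outputs are combined; the following is a minimal sketch, assuming pixel-wise averaging of the RGB and NRG saliency maps followed by thresholding (the function name, threshold value, and NumPy usage are illustrative assumptions, not the authors' implementation):

import numpy as np

def fuse_saliency_maps(saliency_rgb: np.ndarray,
                       saliency_nrg: np.ndarray,
                       threshold: float = 0.5) -> np.ndarray:
    """Fuse two per-pixel building saliency maps into a binary building mask.

    Both inputs are assumed to have the same shape, with values in [0, 1].
    """
    fused = (saliency_rgb + saliency_nrg) / 2.0   # pixel-wise average of the two models
    return (fused >= threshold).astype(np.uint8)  # binarize to a building / non-building mask

# Example with random maps standing in for the two fine-tuned models' outputs.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mask = fuse_saliency_maps(rng.random((256, 256)), rng.random((256, 256)))
    print(mask.shape, mask.dtype, mask.mean())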
