Abstract

Extracting buildings automatically from high spatial resolution remote sensing imagery is considered an important task in many applications. The large differences in the appearance and spatial distribution of man-made buildings make it a challenging problem. In recent years, convolutional neural networks (CNNs) have made remarkable progress in computer vision, and many published papers have successfully applied deep CNNs to remote sensing. However, most contributions require complex structures and a large number of parameters, which lead to redundant computation and limit the applicability of the models. To address these issues, we propose a deep residual learning serial segmentation network called SSNet, an end-to-end semantic segmentation network, to extract buildings from high spatial resolution remote sensing imagery. SSNet reduces network complexity and computation by drawing on the advantages of U-Net and ResNet, and improves detection accuracy. SSNet is extensively evaluated on two large remote sensing datasets covering a wide range of urban settlement appearances. The comparison of SSNet with state-of-the-art algorithms demonstrates the effectiveness and superiority of the proposed model for building extraction.
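The abstract names the two ingredients SSNet combines: ResNet-style residual blocks (the input is added back to the transformed features) and U-Net-style skip connections (encoder features are concatenated onto decoder features). The sketch below illustrates only these two mechanisms in a dependency-free way; the layer shapes, the stand-in "convolution", and the two-stage forward pass are illustrative assumptions, not the actual SSNet architecture, which the abstract does not specify.

```python
import numpy as np

def conv_stand_in(x, w):
    """Stand-in for a convolution: a per-channel linear map so the sketch
    stays dependency-free. x: (C_in, H, W), w: (C_out, C_in)."""
    return np.tensordot(w, x, axes=([1], [0]))  # -> (C_out, H, W)

def residual_block(x, w1, w2):
    """ResNet-style block: add the input back to the transformed features
    (identity shortcut), which eases gradient flow through deep networks."""
    out = np.maximum(conv_stand_in(x, w1), 0)   # transform + ReLU
    out = conv_stand_in(out, w2)
    return np.maximum(out + x, 0)               # shortcut addition, then ReLU

def unet_skip(decoder_feat, encoder_feat):
    """U-Net-style skip: concatenate encoder features with decoder features
    along the channel axis to recover fine spatial detail for segmentation."""
    return np.concatenate([encoder_feat, decoder_feat], axis=0)

# Toy forward pass on a 4-channel 8x8 feature map (shapes are illustrative).
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
w1 = rng.standard_normal((4, 4)) * 0.1
w2 = rng.standard_normal((4, 4)) * 0.1

enc = residual_block(x, w1, w2)     # encoder stage with residual learning
dec = residual_block(enc, w1, w2)   # decoder stage (up/downsampling omitted)
fused = unet_skip(dec, enc)         # skip connection doubles the channels
print(fused.shape)                  # (8, 8, 8)
```

In a real implementation the stand-in linear maps would be learned 2D convolutions with pooling in the encoder and upsampling in the decoder; the point here is only how the residual addition and the skip concatenation compose.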

