Abstract

The automated detection of buildings in remote sensing images enables understanding of building distribution, which is indispensable for many geographic and social applications such as urban planning, change monitoring and population estimation. The performance of deep learning on images often depends on a large number of manually labeled samples, which are time-consuming and expensive to produce. This study therefore focuses on reducing the number of labeled samples required and proposes a semi-supervised deep learning approach based on an edge detection network (SDLED), which is the first to introduce semi-supervised learning into an edge detection neural network for extracting building roof boundaries from high-resolution remote sensing images. The approach jointly trains on a small number of labeled samples and abundant unlabeled images. An expert-level semantic edge segmentation model is first trained on the labeled samples and then used to generate pseudo-labels for the unlabeled images automatically. These imperfect pseudo-label sets and the manually labeled samples are then used together to update the semantic edge model. In particular, we modified the semantic segmentation network D-LinkNet to obtain high-quality pseudo-labels: the main network architecture of D-LinkNet is retained, while multi-scale fusion is added in its second half to improve its performance on edge detection. SDLED was tested on high-spatial-resolution remote sensing images taken from Google Earth. Results show that SDLED performs better than the fully supervised method. Moreover, when the trained models were used to predict buildings in neighboring counties, our approach was superior to the supervised one, with a line IoU improvement of at least 6.47% and an F1 score improvement of at least 7.49%.
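
The training strategy described above follows a standard self-training pattern: a model trained on the labeled tiles produces pseudo-labels for the unlabeled tiles, and both sets are then used to update the model. Below is a minimal sketch of that cycle in PyTorch, assuming an edge-detection model that outputs one logit map per image and DataLoaders `labeled_loader` (image, edge label) and `unlabeled_loader` (image only) prepared elsewhere; all names and the 0.5 threshold are illustrative, not taken from the paper.

```python
# Minimal sketch of an SDLED-style self-training cycle (illustrative, not the
# authors' implementation): predict pseudo-labels on unlabeled tiles, then
# retrain jointly on manually labeled and pseudo-labeled tiles.
import torch
import torch.nn as nn


def generate_pseudo_labels(model, unlabeled_loader, device, threshold=0.5):
    """Predict edge maps on unlabeled tiles and binarise them into pseudo-labels."""
    model.eval()
    pseudo_pairs = []
    with torch.no_grad():
        for images in unlabeled_loader:
            images = images.to(device)
            probs = torch.sigmoid(model(images))        # per-pixel edge probabilities
            labels = (probs > threshold).float()        # hard pseudo edge labels
            pseudo_pairs.append((images.cpu(), labels.cpu()))
    return pseudo_pairs


def joint_update(model, labeled_loader, pseudo_pairs, device, lr=1e-4):
    """One training pass over manually labeled tiles and pseudo-labeled tiles together."""
    criterion = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for images, labels in list(labeled_loader) + pseudo_pairs:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    return model
```

In the paper the pseudo-labels come from the modified D-LinkNet edge model; in this sketch any segmentation network with a per-pixel sigmoid output could be dropped into the loop.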

Highlights

  • In recent years, machine learning and image processing techniques have used remote sensing images to mine abundant information for urban planning, change detection, disaster assessment and other fields

  • We explore the effect of increasing the number of unlabeled samples on model performance

  • The semi-supervised method effectively improves the accuracy of the edge detection model when only a small number of manually labeled samples are available

Introduction

Machine learning and image processing techniques have been applied to remote sensing images to mine abundant information for urban planning, change detection, disaster assessment and other fields. Taking earthquake monitoring as an example, soft computing techniques support seismic vulnerability assessments of existing buildings, which helps mitigate post-quake effects [1]. Many approaches have been developed for rapid damage assessment [2,3,4], which plays an important role in rescue and recovery missions. Pixel-based change detection is widely used to accurately analyze changes in destroyed buildings [5]. These computer-based methods are effective: they can extract building damage information in a very short time and greatly reduce labor costs.
