Abstract

Building extraction is a fundamental area of research in the field of remote sensing. In this paper, we propose an efficient model called residual U-Net (RU-Net) to extract buildings. It combines the advantages of U-Net, residual learning, atrous spatial pyramid pooling, and focal loss. The U-Net backbone, built on modified residual learning, reduces the number of parameters and mitigates network degradation; atrous spatial pyramid pooling captures multiscale features and contextual information from the remote sensing images; and focal loss addresses the class imbalance in classification. We evaluated RU-Net on the WHU aerial image dataset and the Inria aerial image labeling dataset, comparing its results with those of U-Net, FastFCN, DeepLabV3+, Web-Net, and SegNet. Experimental results show that on the WHU dataset the proposed RU-Net outperforms the other models on all metrics. On the Inria dataset, RU-Net is better on most evaluation metrics and better preserves sharp edges, boundaries, and multiscale details. Compared with FastFCN and DeepLabV3+, our method is three to four times more efficient.
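The focal loss mentioned in the abstract is the standard formulation of Lin et al. (2017), FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t), which down-weights easy background pixels so that training focuses on the rarer building pixels. The PyTorch sketch below is an illustration of that idea only; the alpha and gamma values shown are the common defaults from the focal loss paper and are not necessarily the settings used by the authors.

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Focal loss for binary (building vs. background) segmentation.

    Implements FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).
    `logits` and `targets` are tensors of the same shape; `targets`
    holds 0/1 labels per pixel.
    """
    # Per-pixel binary cross-entropy, i.e. -log(p_t), without reduction.
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    # p_t: predicted probability assigned to the true class of each pixel.
    p_t = targets * p + (1 - targets) * (1 - p)
    # alpha_t: class-balancing weight for positive vs. negative pixels.
    alpha_t = targets * alpha + (1 - targets) * (1 - alpha)
    # The (1 - p_t)**gamma factor suppresses well-classified pixels.
    loss = alpha_t * (1 - p_t) ** gamma * bce
    return loss.mean()
```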
