Abstract

Urban land-cover information is essential for resource allocation and sustainable urban development. Recently, deep learning algorithms have shown promising results in land-cover mapping with high spatial resolution (HSR) imagery. However, limited annotations and the divergence between multi-sensor images challenge the transferability of deep learning models, hindering city-level or national-level mapping. In this paper, we propose a scheme that leverages small-scale labeled airborne images (source) to classify unlabeled large-scale spaceborne images (target). Considering the sensor characteristics, a Cross-Sensor Land-cOVEr framework, called LoveCS, is introduced to address the difficulties of spatial resolution inconsistency and spectral differences. For the structural design, cross-sensor normalization is proposed to automatically learn sensor-specific normalization weights, thereby narrowing the spectral differences hierarchically. Furthermore, a dense multi-scale decoder is proposed to effectively fuse the multi-scale features from different sensors. For the model optimization, self-training domain adaptation is adopted, and multi-scale pseudo-labeling is proposed to reduce the scale divergence caused by the spatial resolution inconsistency. The effectiveness of LoveCS was tested on data from three cities in China: Nanjing, Changzhou, and Wuhan. The comprehensive results show that LoveCS outperforms existing domain adaptation methods in cross-sensor tasks and generalizes well. Compared with existing land-cover products, the obtained results have the highest accuracy and spatial resolution (1.0 m). Overall, LoveCS provides a new perspective for large-scale land-cover mapping based on limited HSR images.
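The core idea of cross-sensor normalization is that each sensor's features are standardized with its own statistics and affine weights, so spectral differences between sensors are absorbed at every normalization layer. As a minimal sketch of this idea (not the paper's actual implementation), the snippet below keeps a separate set of normalization statistics and learnable scale/shift parameters per sensor; the function name, dictionary layout, and `eps` value are illustrative assumptions:

```python
import numpy as np

def cross_sensor_norm(x, sensor_id, stats, gamma, beta, eps=1e-5):
    """Normalize a feature batch with sensor-specific statistics and weights.

    x         : (N, C) feature batch coming from a single sensor
    sensor_id : key identifying the sensor (e.g. "airborne" or "spaceborne")
    stats     : dict storing the per-sensor (mean, var) statistics
    gamma/beta: dicts of per-sensor learnable scale and shift parameters
    (hypothetical layout for illustration, not the paper's exact API)
    """
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    stats[sensor_id] = (mean, var)          # per-sensor statistics are kept separate
    x_hat = (x - mean) / np.sqrt(var + eps)  # standardize within this sensor only
    return gamma[sensor_id] * x_hat + beta[sensor_id]
```

Because airborne and spaceborne batches never share statistics or affine weights, each sensor's features are mapped to a comparable standardized range, which is the hierarchical narrowing of spectral differences described above.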
