ABSTRACT Accurate extraction of built-up areas supports urban development and map updating. Nighttime light (NTL) data capture the lighting signals of ground objects. However, most built-up area extraction has been conducted on publicly available NTL images with coarse spatial resolution. The Sustainable Development Science Satellite-1 (SDGSAT-1) provides 10 m spatial resolution panchromatic NTL images, making it possible to map detailed urban lighting structures. In urban extraction, the boundaries of urban areas are easily confused with background objects because of their similar spectral and textural features. To address this problem, we propose a multi-task deep learning model, CG-CFPANet, to extract illuminated built-up areas by fusing SDGSAT-1 NTL data with optical remote sensing images. CG-CFPANet comprises a convolutional feature pyramid attention (CFPA) module for better contextual recognition and a concatenation group (CG) module that merges the two types of remote sensing imagery. CG-CFPANet achieved 1.3% higher precision in built-up area extraction than ten recently proposed network structures: UNet, UNet++, PSPNet, DeeplabV3, FCN, ExtremeC3Net, SegNet, BiseNet, Res2-UNet, and CBRNet, and shows higher applicability for large-scale built-up area extraction.
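The abstract does not detail the internals of the CG or CFPA modules, so the sketch below is only a hedged illustration of the two general ideas it names: channel-wise concatenation of NTL and optical inputs, and multi-scale (pyramid) attention for wider context. The class names `ConcatFusion` and `PyramidAttention`, the channel sizes, and the layer choices are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: layer choices and module internals are assumed,
# not taken from the CG-CFPANet paper.
import torch
import torch.nn as nn


class ConcatFusion(nn.Module):
    """Hypothetical concatenation-based fusion of NTL and optical inputs."""

    def __init__(self, ntl_channels: int, opt_channels: int, out_channels: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(ntl_channels + opt_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, ntl: torch.Tensor, opt: torch.Tensor) -> torch.Tensor:
        # Stack the two sources along the channel axis, then mix with a 3x3 conv.
        return self.fuse(torch.cat([ntl, opt], dim=1))


class PyramidAttention(nn.Module):
    """Hypothetical multi-scale (pyramid) attention for contextual recognition."""

    def __init__(self, channels: int, pool_sizes=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.AdaptiveAvgPool2d(size),
                nn.Conv2d(channels, channels, kernel_size=1),
                nn.Sigmoid(),
            )
            for size in pool_sizes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        # Average attention maps computed at several pooled resolutions,
        # then reweight the input features with the combined map.
        att = sum(
            nn.functional.interpolate(
                branch(x), size=(h, w), mode="bilinear", align_corners=False
            )
            for branch in self.branches
        ) / len(self.branches)
        return x * att


if __name__ == "__main__":
    ntl = torch.randn(1, 1, 256, 256)   # 10 m panchromatic NTL patch (1 band)
    opt = torch.randn(1, 3, 256, 256)   # co-registered optical patch (RGB)
    feats = ConcatFusion(1, 3, 32)(ntl, opt)
    out = PyramidAttention(32)(feats)
    print(out.shape)  # torch.Size([1, 32, 256, 256])
```

A segmentation head on top of such fused, attention-weighted features would then predict the illuminated built-up mask; the actual CG-CFPANet architecture should be taken from the full paper.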