Abstract

Monitoring the spatiotemporal distribution of urban impervious surface is an essential indicator of the urbanization process. Optical and synthetic aperture radar (SAR) images are key data sources for urban impervious surface extraction. Because cities are highly heterogeneous scenes, extracting urban impervious surface from a single data source hits an accuracy bottleneck imposed by single-modal feature representation, a limitation that fusing the two data sources can help overcome. However, existing studies have mostly fused the data directly by layer stacking, without accounting for the modal differences between optical and SAR (optical-SAR) images, and therefore fail to fully exploit the complementarity between the two. This study therefore proposes a cross-modal multi-scale features fusion segmentation network (CMFFNet) for urban impervious surface extraction from optical-SAR images. A cross-modal features fusion (CMFF) module is designed in the proposed CMFFNet to fully exploit the complementary information of optical-SAR images. In addition, we propose a multi-scale features fusion (MSFF) module that fuses multi-scale features of optical-SAR images, accounting for the multi-scale characteristics of urban impervious surface. Experimental results demonstrate that the proposed CMFFNet outperforms current mainstream impervious surface extraction methods.
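Since the abstract does not disclose the internals of the CMFF and MSFF modules, the following is only a minimal PyTorch sketch of the general idea it describes: per-scale cross-modal fusion that goes beyond plain layer stacking (here via mutual channel attention), followed by multi-scale aggregation. All class names, the gating design, and the hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossModalFusion(nn.Module):
    """Hypothetical stand-in for a CMFF-style block: each modality's
    feature map is re-weighted by a channel-attention gate computed
    from the other modality, so complementary cues steer the fusion
    instead of simple layer stacking."""

    def __init__(self, channels: int):
        super().__init__()
        # Squeeze-and-excitation style gates, one per modality (assumed design).
        self.gate_opt = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1), nn.Sigmoid()
        )
        self.gate_sar = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1), nn.Sigmoid()
        )
        self.merge = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, f_opt: torch.Tensor, f_sar: torch.Tensor) -> torch.Tensor:
        a_opt = self.gate_opt(f_opt)  # channel attention from the optical branch
        a_sar = self.gate_sar(f_sar)  # channel attention from the SAR branch
        fused = torch.cat(
            [f_opt * a_sar,  # optical features re-weighted by SAR cues
             f_sar * a_opt],  # SAR features re-weighted by optical cues
            dim=1,
        )
        return self.merge(fused)


class MultiScaleFusion(nn.Module):
    """Hypothetical stand-in for an MSFF-style block: upsamples fused maps
    from several encoder stages to the finest resolution and projects them
    to a single feature map for the segmentation head."""

    def __init__(self, channels: int, num_scales: int):
        super().__init__()
        self.project = nn.Conv2d(num_scales * channels, channels, 1)

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        target = feats[0].shape[-2:]  # finest spatial resolution
        up = [
            F.interpolate(f, size=target, mode="bilinear", align_corners=False)
            for f in feats
        ]
        return self.project(torch.cat(up, dim=1))


# Toy usage: fuse optical/SAR features at each scale, then merge across scales.
cmf = CrossModalFusion(64)
msf = MultiScaleFusion(64, num_scales=3)
opt = [torch.randn(1, 64, s, s) for s in (64, 32, 16)]
sar = [torch.randn(1, 64, s, s) for s in (64, 32, 16)]
fused = msf([cmf(o, s) for o, s in zip(opt, sar)])
print(fused.shape)  # torch.Size([1, 64, 64, 64])
```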
