Remote sensing image change detection is crucial for natural disaster monitoring and land-use change analysis. As image resolution increases, the scenes covered by remote sensing images become more complex, and traditional methods struggle to extract detailed information. The development of deep learning has brought new opportunities to change detection. However, existing algorithms focus mainly on analyzing differences between bi-temporal images while ignoring their shared semantic information, so global and local information cannot interact effectively. In this paper, we introduce a Transformer-based multilevel attention network (MATNet) that extracts multilevel global and local features, enables their interaction and fusion, and thus models the global context more effectively. Specifically, a Transformer encoder extracts multilevel semantic features, and a Feature Enhancement Module (FEM) sums and differences the bi-temporal features at each level to better capture local detail and thus detect changes in small regions. In addition, a multilevel attention decoder (MAD) attends over the spatial and spectral dimensions, effectively fusing global and local information. In experiments, our method performs strongly on the CDD, DSIFN-CD, LEVIR-CD, and SYSU-CD datasets, reaching F1 scores of 95.67%/87.75%/90.94%/86.82% and OA of 98.95%/95.93%/99.11%/90.53%, respectively.
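To make the FEM's role concrete, the following is a minimal PyTorch sketch, assuming the module fuses bi-temporal features by element-wise summing and absolute differencing followed by a fusion convolution; the class name aside, all layer choices, channel sizes, and parameter names are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class FEM(nn.Module):
    """Hypothetical Feature Enhancement Module: combines bi-temporal
    features via element-wise sum and absolute difference, as described
    in the abstract. The fusion convolution is an assumption."""

    def __init__(self, channels: int):
        super().__init__()
        # Project the concatenated sum/difference maps back to `channels`.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, f_t1: torch.Tensor, f_t2: torch.Tensor) -> torch.Tensor:
        # Sum emphasizes shared context; absolute difference emphasizes change.
        summed = f_t1 + f_t2
        diffed = torch.abs(f_t1 - f_t2)
        return self.fuse(torch.cat([summed, diffed], dim=1))

# Usage on one level of the multilevel feature pyramid (shapes assumed):
f1 = torch.randn(1, 64, 64, 64)   # features of the image at time t1
f2 = torch.randn(1, 64, 64, 64)   # features of the image at time t2
enhanced = FEM(64)(f1, f2)        # -> shape (1, 64, 64, 64)
```

In this reading, the sum branch preserves context shared by both acquisitions while the difference branch isolates changed regions, which is consistent with the abstract's claim that the FEM helps detect changes in small regions.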