Understanding surface changes requires the ability to identify them in high-resolution remote sensing images. However, current deep learning-based change detection algorithms cannot accurately discriminate between changed and unchanged areas, which leads to uncertain edges and missed small targets during detection. To identify changes in high-resolution remote sensing images, this research proposes a novel attention-guided Siamese network (SAGNet). In this network, a fully convolutional dual-stream architecture extracts highly representative deep semantic features from the bitemporal images, and the Global Semantic Aggregation Module (GSAM) then derives semantic change information from these features. In the feature decoding stage, the extracted features are refined layer by layer through the Attention Fusion Module (AFM) to reconstruct the change map. In addition, we propose two auxiliary modules, the Cross-scale Fusion Module (CFM) and the Bilateral Feature Fusion Module (BFFM), which enable the network to suppress background noise while improving the recognition accuracy of changed-object boundaries and small changed targets in the output change map. SAGNet is evaluated on the public LEVIR-CD dataset and on BICD, a challenging dataset of bitemporal Google Earth images covering various regions of China. Experimental results show that our approach outperforms current state-of-the-art change detection methods.
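For concreteness, the sketch below illustrates the described pipeline in PyTorch: a weight-shared (Siamese) dual-stream encoder, a stand-in for GSAM over the deepest features, and toy attention-gated fusion standing in for AFM during decoding. This is a minimal sketch under assumptions: the backbone, channel widths, and all module internals are invented for illustration, since the abstract does not specify them, and CFM/BFFM are omitted.

```python
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """Conv-BN-ReLU downsampling block standing in for an encoder stage."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)


class SAGNetSketch(nn.Module):
    """Toy stand-in for the SAGNet pipeline (internals are assumptions)."""
    def __init__(self, channels=(32, 64, 128)):
        super().__init__()
        # Siamese dual-stream encoder: one set of weights processes both
        # temporal images, so the two streams share parameters.
        stages, c_prev = [], 3
        for c in channels:
            stages.append(ConvBlock(c_prev, c))
            c_prev = c
        self.encoder = nn.ModuleList(stages)

        # GSAM stand-in: aggregate the deepest bitemporal features into
        # one semantic-change representation with a 1x1 convolution.
        self.gsam = nn.Conv2d(channels[-1] * 2, channels[-1], 1)

        # AFM stand-ins: per decoding level, fuse upsampled deep features
        # with bitemporal skip differences, then gate them (toy attention).
        self.afm = nn.ModuleList()
        cs = list(channels)
        for i in range(len(cs) - 1, 0, -1):
            self.afm.append(nn.Conv2d(cs[i] + cs[i - 1], cs[i - 1], 1))

        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.head = nn.Conv2d(channels[0], 1, 1)  # change-map logits

    def forward(self, t1, t2):
        feats1, feats2 = [], []
        x1, x2 = t1, t2
        for stage in self.encoder:          # shared weights -> Siamese
            x1, x2 = stage(x1), stage(x2)
            feats1.append(x1)
            feats2.append(x2)

        # Deep semantic change aggregation (GSAM stand-in).
        x = self.gsam(torch.cat([feats1[-1], feats2[-1]], dim=1))

        # Layer-by-layer refinement during decoding (AFM stand-ins).
        for i, afm in enumerate(self.afm):
            skip_idx = len(self.encoder) - 2 - i
            skip = torch.abs(feats1[skip_idx] - feats2[skip_idx])
            x = self.up(x)
            fused = afm(torch.cat([x, skip], dim=1))    # reduce channels
            x = torch.sigmoid(fused) * skip + fused     # attention gate

        return self.head(self.up(x))  # logits at the input resolution


# Usage: bitemporal 256x256 RGB tiles -> per-pixel change logits.
t1 = torch.randn(2, 3, 256, 256)
t2 = torch.randn(2, 3, 256, 256)
logits = SAGNetSketch()(t1, t2)   # shape: (2, 1, 256, 256)
```

The weight sharing in the encoder loop is the essential Siamese property: both temporal images pass through identical parameters, so differences in their features reflect scene change rather than stream-specific bias.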