Abstract

Deep learning has revolutionized change detection (CD) in remote sensing. However, deep-learning-based CD on high-resolution imagery still struggles with the semantic complexity of bi-temporal images, especially under varying weather and lighting conditions. CNN-based approaches refine network structures to capture richer contextual information, while recent attention-based methods improve accuracy at the cost of higher computational demands. This study introduces the Channel-Spatial Attention Network (CSANet) to extract multi-scale and semantic information from bi-temporal images. Evaluated on the LEVIR-CD and DSIFN-CD datasets, CSANet outperforms several state-of-the-art methods, demonstrating its potential for change detection in high-resolution remote sensing.
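For readers unfamiliar with channel-spatial attention, the sketch below shows a minimal CBAM-style block in PyTorch that weights feature channels first and spatial locations second. It is a generic illustration only, not the authors' CSANet; the module name, reduction ratio, and kernel size are assumptions.

```python
# Minimal CBAM-style channel-spatial attention block (illustrative only;
# NOT the CSANet implementation from the paper). All hyperparameters here
# (reduction=16, 7x7 spatial kernel) are assumptions.
import torch
import torch.nn as nn


class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, spatial_kernel: int = 7):
        super().__init__()
        # Channel attention: squeeze spatial dims, re-weight each channel.
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: pool across channels, re-weight each location.
        self.spatial_conv = nn.Conv2d(
            2, 1, kernel_size=spatial_kernel, padding=spatial_kernel // 2
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled descriptors.
        avg = self.channel_mlp(x.mean(dim=(2, 3)))
        mx = self.channel_mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise average and max maps.
        spatial = torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1
        )
        return x * torch.sigmoid(self.spatial_conv(spatial))


if __name__ == "__main__":
    # Example: refine a bi-temporal difference feature map with 64 channels.
    feats = torch.randn(2, 64, 32, 32)
    out = ChannelSpatialAttention(64)(feats)
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```

In a CD pipeline, such a block is typically applied to the fused or differenced bi-temporal features so the network emphasizes changed regions while suppressing irrelevant channels and background clutter.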
