Abstract

Detecting changes from bitemporal remote sensing imagery is vital for understanding the dynamics of the land surface. Existing deep-learning-based change detection models suffer from scale variation and pseudo-changes owing to insufficient multilevel feature aggregation and inadequate feature representation, which limits their accuracy. This study proposes a densely attentive refinement network (DARNet) to improve change detection on bitemporal very-high-resolution remote sensing images. DARNet is based on a U-shaped encoder–decoder architecture with a Siamese network as the feature extractor. A dense skip connection module (DSCM) between the encoder and the decoder aggregates multilevel feature maps. A hybrid attention module (HAM) exploits contextual information to generate discriminative features. A recurrent refinement module (RRM) progressively refines the predicted change maps during decoding. Model performance was evaluated on three benchmark datasets: the season-varying change detection (SVCD) dataset, the Sun Yat-sen University change detection (SYSU-CD) dataset, and the Learning Vision and Remote Sensing Laboratory building change detection (LEVIR-CD) dataset. The experimental results demonstrate that DARNet outperforms state-of-the-art models, with kappa coefficients of 96.58%, 75.35%, and 90.69% on the SVCD, SYSU-CD, and LEVIR-CD datasets, respectively.
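The abstract does not reproduce DARNet's architecture, but the core Siamese idea it names, the same encoder weights applied to both temporal images, followed by feature differencing to localize change, can be illustrated with a toy NumPy sketch. Everything here (the single-layer `encode` projection, the simulated images, the mean threshold) is a hypothetical simplification for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(image, weights):
    # Shared-weight "encoder": a per-pixel linear projection plus ReLU.
    # Both temporal images pass through the SAME weights (Siamese design).
    return np.maximum(image @ weights, 0.0)

# Two bitemporal images, H x W x C (here 8 x 8 pixels, 3 bands).
t1 = rng.random((8, 8, 3))
t2 = t1.copy()
t2[2:5, 2:5] += 0.8  # simulate a changed 3 x 3 region

# Shared projection to a 16-dimensional feature space (hypothetical size).
weights = rng.standard_normal((3, 16))
f1, f2 = encode(t1, weights), encode(t2, weights)

# Feature differencing plus a crude mean threshold yields a binary change map;
# a real network would instead decode the difference features with learned layers.
diff = np.linalg.norm(f1 - f2, axis=-1)
change_map = diff > diff.mean()
print(change_map.sum())  # number of pixels flagged as changed
```

Because the unchanged pixels are byte-identical between the two dates, their feature difference is exactly zero, so only the perturbed region is flagged; in real imagery, radiometric noise between acquisitions is what produces the pseudo-changes that modules like HAM and RRM are designed to suppress.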
