Abstract

Convolutional neural networks currently achieve good performance in remote sensing image change detection. However, owing to the locality of convolution, these methods struggle to capture the global context relationships among features at different levels. To alleviate this issue, we propose a context and difference enhancement network (CDENet) for change detection, which models global context relationships and enhances the change difference. Specifically, our backbone is a dual TransUNet, a U-Net-based architecture equipped with transformer blocks in the encoder, which extracts bi-temporal features. These features are then encoded as an input sequence, which facilitates modeling of the global context. Moreover, we design a content difference enhancement module to process the dual features at each encoder layer; it increases the spatial attention on difference regions to enhance the change difference features. In the decoder, we adopt a simple cross-layer feature fusion that combines the upsampled features with the high-resolution features to generate more accurate results. Finally, we adopt a novel loss that supervises the accuracy of the results at both the region and pixel levels. Experiments on two public change detection datasets demonstrate that CDENet is strongly competitive and outperforms state-of-the-art methods.
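To make the difference enhancement idea concrete, the sketch below shows one plausible way to weight bi-temporal encoder features with a spatial attention map computed from their absolute difference. This is a minimal illustration under assumptions: the class name `DifferenceEnhancement`, the pooled-statistics-plus-convolution attention design, and the residual re-weighting are hypothetical choices for exposition, not the paper's actual module.

```python
import torch
import torch.nn as nn


class DifferenceEnhancement(nn.Module):
    """Hypothetical sketch: a spatial attention map derived from the
    absolute feature difference re-weights the bi-temporal features so
    that changed regions are emphasized (illustrative only)."""

    def __init__(self):
        super().__init__()
        # Map pooled difference statistics (2 channels) to a single
        # spatial attention map in [0, 1].
        self.attn = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, f1: torch.Tensor, f2: torch.Tensor):
        diff = torch.abs(f1 - f2)                   # change difference features
        avg = diff.mean(dim=1, keepdim=True)        # channel-average statistic
        mx, _ = diff.max(dim=1, keepdim=True)       # channel-max statistic
        a = self.attn(torch.cat([avg, mx], dim=1))  # spatial attention map
        # Emphasize likely change regions in both temporal feature maps.
        return f1 * (1 + a), f2 * (1 + a)


# Usage: enhance one encoder level of the bi-temporal features.
f1 = torch.randn(2, 64, 64, 64)  # features from the image at time t1
f2 = torch.randn(2, 64, 64, 64)  # features from the image at time t2
e1, e2 = DifferenceEnhancement()(f1, f2)
```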
