Abstract

Convolutional neural networks (CNNs) have received significant attention for change detection (CD) on multimodal remote sensing images, but they struggle to capture global cues due to the locality of convolution operations. In contrast, the transformer can learn global semantic information by dividing the input image into patches, adding position encodings, and applying the self-attention mechanism. Motivated by this, we propose mSwinUNet, a novel end-to-end multi-modal model with a Swin-Transformer-based, U-shaped Siamese network architecture for supervised CD using Sentinel-1 Synthetic Aperture Radar (SAR) and Sentinel-2 Multispectral Imager (MSI) data. mSwinUNet consists of a multi-modal encoder with difference modules, a bottleneck, and a fused decoder, all built on Swin Transformer blocks. First, tokenized multi-modal bitemporal image patches are fed into multiple Siamese encoder branches to extract multi-level multi-modal difference feature maps in parallel. Subsequently, the last-level multi-modal difference maps are fused in the bottleneck to generate the smallest-scale change map. The hierarchical decoder then incorporates patch expansion and fusion operations to merge multi-scale difference and change maps, effectively recovering the details of the change information. Finally, a last patch expansion and a linear projection produce the final change map, which has the same spatial resolution as the input image. Extensive experiments show that mSwinUNet outperforms several state-of-the-art multi-modal CD methods on the OSCD dataset and the corresponding Sentinel-1 SAR data.
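The pipeline above starts by tokenizing each bitemporal image into non-overlapping patches and then taking per-token differences between the two dates. As a minimal sketch of that first stage, the snippet below uses NumPy to partition an image into Swin-style patch tokens and computes a per-token absolute difference as a stand-in for the difference module; the paper's exact difference operator and learned patch embedding are not specified in the abstract, so both are assumptions here.

```python
import numpy as np

def patch_partition(img, patch=4):
    """Split an HxWxC image into non-overlapping patch tokens,
    each flattened to a vector of length patch*patch*C
    (the tokenization step used by Swin-style encoders)."""
    H, W, C = img.shape
    assert H % patch == 0 and W % patch == 0
    return (img.reshape(H // patch, patch, W // patch, patch, C)
               .transpose(0, 2, 1, 3, 4)
               .reshape(-1, patch * patch * C))

def difference_tokens(img_t1, img_t2, patch=4):
    """Tokenize both temporal images and take the per-token
    absolute difference (one common choice of difference module;
    the model's actual learned operator may differ)."""
    return np.abs(patch_partition(img_t1, patch) - patch_partition(img_t2, patch))

# Toy bitemporal pair: 8x8 images with 3 bands.
t1 = np.zeros((8, 8, 3))
t2 = np.zeros((8, 8, 3))
t2[:4, :4] = 1.0              # a "change" in the top-left quadrant
d = difference_tokens(t1, t2, patch=4)
print(d.shape)                # (4, 48): a 2x2 grid of patch tokens
print(d.sum(axis=1))          # only the changed patch's token is nonzero
```

In the full model, each modality (SAR and MSI) would pass through its own Siamese encoder branch before differencing, and the tokens would be linearly embedded rather than simply flattened; this sketch only illustrates the patch-token geometry and the differencing idea.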
