Abstract

This article presents a hybrid deep neural network that combines a transformer with a convolutional neural network (CNN) for semantic segmentation of very high resolution (VHR) remote sensing imagery. The model follows an encoder–decoder structure. The encoder uses the Swin Transformer, a recent general-purpose backbone, to extract features and better model long-range spatial dependencies. The decoder draws on effective blocks and strategies from CNN-based models for remote sensing image segmentation. Between the encoder and decoder, an atrous spatial pyramid pooling block based on depthwise separable convolution (SASPP) captures multiscale context. A U-shaped decoder gradually restores the size of the feature maps. Three skip connections are built between encoder and decoder feature maps of the same size to preserve local details and enhance the communication of multiscale features. A squeeze-and-excitation (SE) channel attention block is added before segmentation for feature augmentation, and an auxiliary boundary detection branch provides edge constraints for the semantic segmentation. Extensive ablation experiments on the International Society for Photogrammetry and Remote Sensing (ISPRS) Vaihingen and Potsdam benchmarks test the effectiveness of the network's components, and the proposed method is compared with current state-of-the-art methods on both benchmarks. The proposed hybrid network achieved the second highest overall accuracy (OA) on both the Potsdam and Vaihingen benchmarks (code and models are available at https://github.com/zq7734509/mmsegmentation-multilayer).
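
To make the SASPP component concrete, the sketch below shows one plausible PyTorch implementation of an atrous spatial pyramid pooling block whose parallel branches use depthwise separable convolutions. The dilation rates, channel widths, and layer ordering here are illustrative assumptions, not the authors' exact design; consult the linked repository for the reference implementation.

```python
# A minimal sketch of a depthwise-separable ASPP block in the spirit of SASPP.
# Dilation rates (6, 12, 18) and channel widths are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise conv (optionally dilated) followed by a 1x1 pointwise conv."""

    def __init__(self, in_ch, out_ch, dilation=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=dilation,
                                   dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        return F.relu(self.bn(self.pointwise(self.depthwise(x))))


class SASPP(nn.Module):
    """Parallel atrous branches plus image-level pooling, fused by a 1x1 projection."""

    def __init__(self, in_ch, out_ch, rates=(6, 12, 18)):
        super().__init__()
        # 1x1 branch preserves fine detail without enlarging the receptive field.
        self.branch1x1 = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        # Depthwise separable atrous branches sample context at multiple scales.
        self.atrous = nn.ModuleList(
            [DepthwiseSeparableConv(in_ch, out_ch, dilation=r) for r in rates])
        # Global average pooling supplies image-level context.
        self.image_pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        self.project = nn.Sequential(
            nn.Conv2d(out_ch * (len(rates) + 2), out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [self.branch1x1(x)] + [branch(x) for branch in self.atrous]
        pooled = F.interpolate(self.image_pool(x), size=(h, w),
                               mode="bilinear", align_corners=False)
        feats.append(pooled)
        return self.project(torch.cat(feats, dim=1))
```

Under these assumptions, feeding the block a feature map of shape (B, 768, H/32, W/32) from the last encoder stage via `SASPP(768, 256)` would yield a (B, 256, H/32, W/32) tensor for the decoder, with the depthwise separable branches keeping the parameter cost well below that of standard atrous convolutions.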

