Abstract
Since deep convolutional neural networks (DCNNs) have been successfully applied across academic and industrial fields, semantic segmentation methods based on DCNNs are increasingly explored for remote-sensing image interpretation and information extraction. The task remains highly challenging due to irregular target shapes and the inter- and intra-class similarities of objects in large-scale high-resolution satellite images. Most existing methods fuse multi-scale features in ways that often fail to produce satisfactory results. In this paper, a dual attention deep fusion semantic segmentation network for large-scale satellite remote-sensing images (DASSN_RSI) is proposed. The framework consists of a novel encoder-decoder architecture and a weight-adaptive loss function based on focal loss. To refine the high-level semantic and low-level spatial feature maps, a deep layer channel attention module (DLCAM) and a shallow layer spatial attention module (SLSAM) are designed and appended to specific blocks. DUpsampling is then incorporated to fuse feature maps in a lossless way. In particular, a weight-adaptive focal loss (W-AFL) is derived and embedded, alleviating the class-imbalance issue as much as possible. Extensive experiments are conducted on the Gaofen Image Dataset (GID) (Gaofen-2 satellite images; a coarse set with five categories and a refined set with fifteen categories). The results show that our approach achieves state-of-the-art performance compared with other typical variants of encoder-decoder networks in both numerical evaluation and visual inspection. In addition, ablation studies are carried out for a comprehensive evaluation.
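The abstract does not specify the internal designs of DLCAM and SLSAM. As a rough illustration only, the following is a minimal sketch of the two kinds of refinement such modules typically perform, assuming a squeeze-and-excitation style channel attention and a CBAM-style spatial gate; the module names, structure, and hyperparameters here are hypothetical stand-ins, not the paper's modules:

```python
import torch
import torch.nn as nn

class ChannelAttentionSketch(nn.Module):
    """Squeeze-and-excitation style channel attention, a stand-in for DLCAM
    (the paper's exact design is not given in the abstract)."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                          # squeeze: (N, C, 1, 1)
            nn.Conv2d(channels, channels // reduction, 1),    # excitation MLP
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)  # per-channel recalibration of deep features


class SpatialAttentionSketch(nn.Module):
    """CBAM-style spatial gate, a stand-in for SLSAM."""

    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Pool across channels, then learn a per-pixel attention gate.
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        gate = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * gate  # spatially reweight shallow features
```

In an encoder-decoder, the channel module would sit on deep (semantic) feature maps and the spatial module on shallow (high-resolution) ones, matching the roles the abstract describes for DLCAM and SLSAM.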
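DUpsampling (data-dependent upsampling) is a published decoder technique that replaces bilinear upsampling with a learned linear reconstruction of label space. A minimal sketch of its usual implementation, a 1x1 projection followed by pixel shuffle (parameter names here are illustrative):

```python
import torch.nn as nn
import torch.nn.functional as F

class DUpsamplingSketch(nn.Module):
    """Data-dependent upsampling: a learned 1x1 projection whose output
    is rearranged spatially, instead of fixed bilinear interpolation."""

    def __init__(self, in_channels, num_classes, scale):
        super().__init__()
        self.scale = scale
        # Each low-resolution pixel predicts a (scale x scale) patch of logits.
        self.proj = nn.Conv2d(in_channels, num_classes * scale * scale, 1)

    def forward(self, x):
        # (N, num_classes * s^2, H, W) -> (N, num_classes, H * s, W * s)
        return F.pixel_shuffle(self.proj(x), self.scale)
```

Because the reconstruction is learned rather than interpolated, low- and high-level feature maps can be fused at low resolution with less information loss, which is the "lossless" fusion role the abstract assigns to DUpsampling.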
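The abstract also does not give the W-AFL formulation. A minimal sketch, assuming the per-class weights of the standard focal loss, FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t), are adapted from per-batch class frequencies; the adaptation rule below (smoothed inverse frequency) is an assumption, not the paper's derivation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightAdaptiveFocalLossSketch(nn.Module):
    """Focal loss whose class weights alpha are re-estimated each batch
    from class pixel counts, so rare classes are up-weighted."""

    def __init__(self, num_classes, gamma=2.0):
        super().__init__()
        self.num_classes = num_classes
        self.gamma = gamma

    def forward(self, logits, target):
        # logits: (N, C, H, W); target: (N, H, W) integer class map.
        log_p = F.log_softmax(logits, dim=1)
        p = log_p.exp()

        # Adaptive weights: smoothed inverse class frequency (assumption).
        counts = torch.bincount(
            target.flatten(), minlength=self.num_classes
        ).float()
        alpha = counts.sum() / (counts + 1.0)
        alpha = alpha / alpha.max()  # scale weights into (0, 1]

        # Per-pixel p_t, log(p_t), and alpha_t for the true class.
        idx = target.unsqueeze(1)                       # (N, 1, H, W)
        p_t = p.gather(1, idx).squeeze(1)               # (N, H, W)
        log_p_t = log_p.gather(1, idx).squeeze(1)
        alpha_t = alpha[target]

        loss = -alpha_t * (1.0 - p_t) ** self.gamma * log_p_t
        return loss.mean()
```

The focal term (1 - p_t)^gamma down-weights easy pixels, while the adaptive alpha counteracts class imbalance, the issue the abstract says W-AFL is designed to alleviate.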