Abstract

For semantic segmentation of high-resolution remote-sensing images, digital surface model (DSM) information is useful for improving the accuracy and robustness of segmentation models. However, since the feature distributions of spectral and DSM images vary significantly across scenes, it is difficult to fuse them effectively in popular deep network models. To address this issue, we propose an attention-based DSM fusion network (ADF-Net) for high-resolution remote-sensing image semantic segmentation. The proposed network makes two contributions. First, we design an attention-based feature fusion module, which selectively gathers features from spectral and DSM information through a channel attention mechanism and combines them to obtain high-quality fused features. Second, we introduce a residual feature refinement module to adaptively reduce the redundant information from skip connections. We evaluate the proposed network on the ISPRS Vaihingen and Potsdam datasets; experimental results demonstrate that our model outperforms state-of-the-art methods.

Keywords: High-resolution remote-sensing images, Attention mechanism, Semantic segmentation, Data fusion
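The abstract does not include code, but the channel-attention fusion idea it describes can be illustrated with a minimal PyTorch sketch. The module below is a hypothetical example of gating concatenated spectral and DSM feature maps with per-channel weights; the class name, layer sizes, and reduction ratio are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    """Hypothetical sketch: fuse spectral and DSM feature maps with a
    squeeze-and-excitation-style channel attention gate."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # squeeze: one descriptor per channel
            nn.Conv2d(2 * channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 2 * channels, kernel_size=1),
            nn.Sigmoid(),  # per-channel weights in [0, 1]
        )
        self.project = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, spectral_feat, dsm_feat):
        # Concatenate the two modalities along the channel axis, reweight
        # each channel by its attention score, then project back to the
        # original channel width.
        x = torch.cat([spectral_feat, dsm_feat], dim=1)
        x = x * self.gate(x)
        return self.project(x)


if __name__ == "__main__":
    # Dummy encoder outputs: batch of 2, 64 channels, 128x128 maps.
    fuse = ChannelAttentionFusion(channels=64)
    spectral = torch.randn(2, 64, 128, 128)
    dsm = torch.randn(2, 64, 128, 128)
    print(fuse(spectral, dsm).shape)  # torch.Size([2, 64, 128, 128])

In this sketch the sigmoid gate lets the network emphasize whichever modality's channels are informative in a given scene, which is the motivation the abstract gives for attention-based fusion of spectral and DSM features.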


