Abstract
Semantic labeling of high-resolution remote sensing images (HRRSIs) has long been an important research field in remote sensing image analysis. However, remote sensing images contain rich low-level and high-level features, which makes them difficult to recognize. In this letter, we propose a multi-level feature fusion and attention network (MFANet) to adaptively capture and fuse multi-level features in a more effective and efficient manner. Specifically, the backbone of our network is divided into two branches: a detail branch that extracts low-level features and a semantic branch that extracts high-level features. A Deep Atrous Spatial Pyramid Pooling (DASPP) module is embedded at the end of the semantic branch to capture multi-scale features as a supplement to the high-level features. In addition, a feature alignment and fusion (FAF) module is used to align and fuse features from different stages to enhance the feature representation. Furthermore, a context attention (CA) module processes the feature maps from the two branches to establish contextual dependencies in both the spatial and channel dimensions, which helps the network focus on more meaningful features. Experiments are carried out on the ISPRS Vaihingen and Potsdam datasets, and the results show that our proposed method achieves better performance than other state-of-the-art methods.
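For readers who want a concrete picture of the two-branch design, the following is a minimal PyTorch sketch of how such an architecture could be wired together. The internals of DASPP, FAF, and CA are not described in this abstract, so the blocks below (plain convolutions, parallel dilated convolutions, bilinear upsampling, and a simple channel/spatial attention) are illustrative placeholders rather than the modules used in MFANet; all class and function names here are assumptions introduced for illustration.

# Minimal sketch of a two-branch segmentation network in the spirit of the
# abstract. DASPP/FAF/CA internals are NOT taken from the paper; they are
# generic stand-ins used only to show how the pieces connect.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_bn_relu(in_ch, out_ch, stride=1):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class SimpleAttention(nn.Module):
    """Stand-in for the context attention (CA) module: channel re-weighting via
    global average pooling followed by a 1x1 spatial attention map."""
    def __init__(self, ch):
        super().__init__()
        self.channel_fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.spatial_conv = nn.Sequential(nn.Conv2d(ch, 1, 1), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel_fc(x)     # re-weight channels
        x = x * self.spatial_conv(x)   # re-weight spatial positions
        return x


class TwoBranchSketch(nn.Module):
    def __init__(self, num_classes=6):
        super().__init__()
        # Detail branch: shallow, keeps higher resolution (low-level features).
        self.detail = nn.Sequential(
            conv_bn_relu(3, 64, stride=2), conv_bn_relu(64, 64),
            conv_bn_relu(64, 128, stride=2), conv_bn_relu(128, 128))
        # Semantic branch: deeper, strongly downsampled (high-level features).
        self.semantic = nn.Sequential(
            conv_bn_relu(3, 32, stride=2), conv_bn_relu(32, 64, stride=2),
            conv_bn_relu(64, 128, stride=2), conv_bn_relu(128, 128, stride=2))
        # Placeholder for DASPP: dilated convolutions at several rates.
        self.daspp = nn.ModuleList(
            [nn.Conv2d(128, 128, 3, padding=r, dilation=r) for r in (1, 6, 12)])
        self.daspp_proj = conv_bn_relu(128 * 3, 128)
        self.attention = SimpleAttention(128)
        # Placeholder for FAF: upsample, concatenate, and project.
        self.fuse = conv_bn_relu(256, 128)
        self.classifier = nn.Conv2d(128, num_classes, 1)

    def forward(self, x):
        low = self.detail(x)                       # 1/4 resolution
        high = self.semantic(x)                    # 1/16 resolution
        high = self.daspp_proj(torch.cat([b(high) for b in self.daspp], dim=1))
        high = self.attention(high)
        high = F.interpolate(high, size=low.shape[2:], mode="bilinear",
                             align_corners=False)
        out = self.classifier(self.fuse(torch.cat([low, high], dim=1)))
        return F.interpolate(out, size=x.shape[2:], mode="bilinear",
                             align_corners=False)


if __name__ == "__main__":
    logits = TwoBranchSketch()(torch.randn(1, 3, 256, 256))
    print(logits.shape)  # torch.Size([1, 6, 256, 256])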