Abstract

For semantic segmentation of remote sensing images, convolutional neural networks (CNNs) have proven to be powerful tools. However, existing CNN-based methods suffer from feature information loss, severe interference from clutter, and neglect of the correlations between features at different scales. To address these problems, this article proposes a novel hidden feature-guided semantic segmentation network (HFGNet) for remote sensing images, which achieves accurate semantic segmentation by hierarchically extracting and fusing valuable feature information. Specifically, a hidden feature extraction module (HFE-M) suppresses salient feature representations in order to mine more valuable hidden features. Meanwhile, a multi-feature interactive fusion module (MIF-M) establishes correlations between different features to achieve hierarchical feature fusion. A multi-scale feature calibration module (MSFC) enhances the diversity and refinement of the hierarchically fused features. In addition, a local-channel attention mechanism (LCA-M) improves feature perception in object regions and suppresses interference from background information. We conducted extensive experiments on the widely used ISPRS 2-D Semantic Labeling dataset and the 15-Class Gaofen Image dataset. The experimental results demonstrate that the proposed HFGNet outperforms several state-of-the-art methods. The source code and models are available at https://github.com/darkseid-arch/RS-HFGNet.
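The abstract does not describe the internals of LCA-M, but the stated goal (reweighting features toward object regions and away from background) is what channel-attention blocks do. The sketch below is a generic squeeze-and-excitation-style channel attention in NumPy, given only as an illustration of the idea; the function name, weight shapes, and reduction ratio are assumptions, not the paper's actual design.

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Generic channel attention (squeeze-and-excitation style).

    NOTE: illustrative only -- the paper's LCA-M is not specified in
    the abstract; this shows the common pattern such modules follow.

    x:  feature map of shape (C, H, W)
    w1: bottleneck weights of shape (C // r, C), r = reduction ratio
    w2: expansion weights of shape (C, C // r)
    """
    # Squeeze: global average pooling over spatial dims -> (C,)
    z = x.mean(axis=(1, 2))
    # Excitation: bottleneck MLP, ReLU then sigmoid gating -> (C,)
    h = np.maximum(w1 @ z, 0.0)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))
    # Reweight channels: gates in (0, 1) emphasize object-relevant
    # channels and suppress background-dominated ones
    return x * s[:, None, None]

# Example: 4-channel 8x8 feature map, reduction ratio r = 2
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
w1 = rng.standard_normal((2, 4))
w2 = rng.standard_normal((4, 2))
y = channel_attention(x, w1, w2)   # same shape as x, per-channel rescaled
```

Because the sigmoid gate lies in (0, 1), each channel is attenuated rather than amplified, which is how such blocks suppress background responses while leaving informative channels nearly unchanged.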
