Abstract

In recent years, numerous hyperspectral image classification methods based on convolutional neural networks (CNNs) have been proposed. However, the majority of CNN-based classification models fail to account for the semantic gap between shallow and deep features and instead simply sum or concatenate these two types of features, which leads to subpar feature fusion. To address this problem, this paper proposes a hyperspectral image classification method based on a Narrowing Semantic Gap Convolutional Neural Network (NSGCNN). We developed a feature fusion sub-network consisting of a Feature Semantic Enhancement Module (FSEM), a Spatial Detail Supplement Module (SDCM), and a Cross-layer Feature Fusion Module (CFFM). The FSEM extracts rich global semantic information using multi-scale dilated convolution. A reverse attention mechanism is introduced in the SDCM to complement deep-layer features with fine spatial features such as edges and textures. Fusion of the hierarchical features is then achieved by the CFFM. Additionally, the model's parameter count can be controlled by adjusting the number of output feature maps. Experimental results on three benchmark hyperspectral datasets demonstrate that NSGCNN outperforms other state-of-the-art methods in classification performance while having significantly lower model complexity.
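The two core ideas the abstract names can be illustrated in a minimal NumPy sketch: applying one kernel at several dilation rates to enlarge the receptive field (the multi-scale idea behind the FSEM), and weighting shallow features by one minus the sigmoid of a deep response (a common form of reverse attention, as in the SDCM). This is an illustrative 1-D toy, not the authors' implementation; all function and variable names here are hypothetical.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Valid 1-D convolution with a dilation rate (illustrative toy)."""
    k = len(kernel)
    span = dilation * (k - 1) + 1  # effective receptive field of the kernel
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])

# Multi-scale extraction: the same kernel at growing dilation rates sees
# progressively wider context without adding parameters.
x = np.arange(16, dtype=float)
kernel = np.array([1.0, 0.0, -1.0])           # simple edge-like filter
scales = [dilated_conv1d(x, kernel, d) for d in (1, 2, 4)]

# Reverse attention: emphasize shallow features where the deep response is
# weak, so fine spatial detail complements the deep semantics.
deep = scales[-1]
shallow = x[: len(deep)]
reverse_attn = 1.0 - 1.0 / (1.0 + np.exp(-deep))  # 1 - sigmoid(deep)
detail = shallow * reverse_attn
```

In a real 2-D network these operations would act on feature-map tensors, and the fused detail map would feed a cross-layer fusion stage; the sketch only shows the mechanism.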

