Abstract

Owing to their powerful feature extraction capabilities, deep learning-based methods have achieved significant progress in hyperspectral remote sensing classification. However, several issues remain, including the lack of hyperspectral datasets for specific complicated scenarios and the need to improve classification accuracy for land cover with limited samples. To highlight and distinguish effective features, we propose a hyperspectral classification framework based on a joint channel-space attention mechanism and a generative adversarial network (JAGAN). A joint channel-space attention model relearns feature weights, assigning higher priority to important features and extracting the most valuable features via an attention weight map. Additionally, two classifiers were designed in JAGAN: a sigmoid classifier determines whether the input data are real samples or fake samples produced by the generator, while a Softmax classifier predicts the land cover labels of the input samples. To test the classification performance of JAGAN, we used a self-constructed complex land cover dataset based on GaoFen-5 (GF-5) AHSI images, which covers mixed mining and agricultural landscapes in an urban-rural fringe. Compared with other methods, the proposed model achieved the highest overall accuracy of 86.09%, the highest kappa coefficient of 79.41%, the highest F1 score of 85.86%, and the highest average accuracy of 82.30%, indicating that JAGAN can effectively improve classification accuracy with limited samples in complex regional environments using GF-5 AHSI images.
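The abstract describes a discriminator that combines a joint channel-space attention block with two output heads: a sigmoid head for real/fake discrimination and a Softmax head for land cover prediction. The following is a minimal PyTorch sketch of such a dual-head discriminator, assuming a CBAM-style attention block; the layer widths, kernel sizes, 9x9 patch size, 295-band input, and 12-class output are illustrative assumptions, not the published JAGAN configuration.

# Minimal PyTorch sketch of a JAGAN-style dual-head discriminator (illustrative only).
# The joint channel-space attention is approximated with a CBAM-like block; all layer
# sizes, names, and band/class counts are assumptions, not the authors' configuration.
import torch
import torch.nn as nn


class ChannelSpatialAttention(nn.Module):
    """Joint channel + spatial attention that reweights feature maps."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatial dimensions, excite per-channel weights.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: 7x7 convolution over channel-pooled maps.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_mlp(x)                      # channel weighting
        avg_map = x.mean(dim=1, keepdim=True)            # spatial descriptors
        max_map, _ = x.max(dim=1, keepdim=True)
        attn = self.spatial_conv(torch.cat([avg_map, max_map], dim=1))
        return x * attn                                  # apply attention weight map


class DualHeadDiscriminator(nn.Module):
    """Discriminator with a sigmoid real/fake head and a Softmax land-cover head."""

    def __init__(self, in_bands: int, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_bands, 64, 3, padding=1), nn.LeakyReLU(0.2),
            ChannelSpatialAttention(64),
            nn.Conv2d(64, 128, 3, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.real_fake = nn.Linear(128, 1)              # sigmoid head (real vs. generated)
        self.classifier = nn.Linear(128, num_classes)   # Softmax head (land cover labels)

    def forward(self, x):
        h = self.features(x)
        return self.real_fake(h), self.classifier(h)


# Example: a batch of 9x9 patches with an assumed 295 usable GF-5 AHSI bands.
patches = torch.randn(4, 295, 9, 9)
d = DualHeadDiscriminator(in_bands=295, num_classes=12)
rf_logit, cls_logit = d(patches)
print(rf_logit.shape, cls_logit.shape)  # torch.Size([4, 1]) torch.Size([4, 12])

In a typical training setup of this kind, the sigmoid head would be optimized with a binary cross-entropy loss against the generator's outputs, while the Softmax head would be trained with a cross-entropy loss on the labeled real samples.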

Highlights

  • Land cover information is essential for a variety of geospatial applications, such as urban planning, regional administration, and environmental management [1]

  • In complicated environments with substantial amounts of data and spatial structure resulting from multiple bands, the automatic classification of land cover from hyperspectral remote sensing images remains a challenging task owing to the level of detail in surface elements, the complex spectral characteristics of surface objects, the high dimensionality of the spectral bands, and limited training samples [12]-[16]. In the early stages of hyperspectral image classification research, most methods aimed to utilize only the spectral features of the image during classification [17], including the K-nearest neighbor (KNN) [18], spectral angle [19], extreme learning machine (ELM) [20], and support vector machine (SVM) [21], [22]

  • As popular hyperspectral classifiers based on deep learning, 2D- and 3D-convolutional neural networks (CNNs) enable the comprehensive exploitation of spatial and spectral features (see the sketch after this list). The 3D-generative adversarial network (3D-GAN) serves as a benchmark for comparison with adversarial generative networks
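As a point of reference for the 2D/3D-CNN highlight above, below is a minimal PyTorch sketch of a 3D-CNN whose kernels convolve jointly over the spectral and spatial axes of a hyperspectral patch; the kernel sizes, band count, patch size, and class count are illustrative assumptions rather than the benchmark configuration used in the paper.

# Minimal 3-D CNN sketch for joint spatial-spectral feature extraction (illustrative only).
import torch
import torch.nn as nn


class Simple3DCNN(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            # Input: (batch, 1, bands, height, width); 3-D kernels slide over the
            # spectral axis and the two spatial axes at the same time.
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)), nn.ReLU(),
            nn.MaxPool3d(kernel_size=(2, 1, 1)),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x))


# Example: 9x9 patches with an assumed 295 bands, treated as one spectral volume.
x = torch.randn(2, 1, 295, 9, 9)
model = Simple3DCNN(num_classes=12)
print(model(x).shape)  # torch.Size([2, 12])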


Introduction

Land cover information is essential for a variety of geospatial applications, such as urban planning, regional administration, and environmental management [1]. In complicated environments with substantial amounts of data and spatial structure resulting from multiple bands, the automatic classification of land cover from hyperspectral remote sensing images remains a challenging task owing to the level of detail in surface elements, the complex spectral characteristics of surface objects, the high dimensionality of the spectral bands, and limited training samples [12]-[16]. In the early stages of hyperspectral image classification research, most methods aimed to utilize only the spectral features of the image during classification [17], including the K-nearest neighbor (KNN) [18], spectral angle [19], extreme learning machine (ELM) [20], and support vector machine (SVM) [21], [22]. These methods ignore inter-pixel spatial information [23], which limits any improvement in classification accuracy. Autoencoders (AEs) deliver limited gains in hyperspectral image classification accuracy because they yield compressed features.
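For contrast with the spectral-only classifiers listed above, the sketch below shows a per-pixel SVM baseline in which each pixel is classified from its spectrum alone, so no spatial context between neighboring pixels is exploited; the synthetic data, band count, and class count are placeholders, not the GF-5 AHSI dataset used in the paper.

# Per-pixel (spectral-only) SVM baseline sketch; data and sizes are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_pixels, n_bands, n_classes = 2000, 295, 6        # assumed sizes for illustration
X = rng.normal(size=(n_pixels, n_bands))            # one spectrum per pixel
y = rng.integers(0, n_classes, size=n_pixels)       # per-pixel land-cover labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
clf.fit(X_train, y_train)                            # spectrum in, label out: no spatial context
print("overall accuracy:", clf.score(X_test, y_test))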
