Abstract

Hyperspectral data usually consist of hundreds of narrow spectral bands and provide more detailed spectral characteristics than the multispectral data commonly used in remote sensing applications. However, the highly correlated spectral bands in hyperspectral data lead to computational complexity, which limits the application of many traditional methods to hyperspectral data. Dimensionality reduction of hyperspectral data has therefore become one of the most important pre-processing steps in hyperspectral data analysis. Recently, deep reinforcement learning (DRL) has been introduced for hyperspectral band selection (BS); however, current DRL methods for hyperspectral BS simply remove redundant bands, lack significance analysis of the selected bands, and generally use only basic reward mechanisms. In this paper, a new reward mechanism strategy is proposed, and the Double Deep Q-Network (DDQN) is introduced into DRL-based BS to improve network stability and avoid local optima. To verify the effect of the proposed BS method, land cover classification experiments were designed and carried out to compare the proposed method with other BS methods. In these experiments, the proposed method achieved an overall accuracy (OA) of 98.37%, an average accuracy (AA) of 95.63%, and a kappa coefficient (Kappa) of 97.87%, outperforming the other BS methods overall. Experiments also show that the proposed method works not only for airborne hyperspectral data (AVIRIS and HYDICE) but also for hyperspectral satellite data, such as PRISMA data. When hyperspectral data are used in similar applications, the proposed BS method could be a candidate for the BS pre-processing step.
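The abstract notes that DDQN is used instead of vanilla DQN to improve stability. The core DDQN idea can be sketched as follows: the online network selects the next action, while the target network evaluates it, which reduces the overestimation bias of standard Q-learning targets. This is a minimal, illustrative sketch; the actual network architectures and the band-selection reward proposed in the paper are not reproduced here.

```python
import numpy as np

def ddqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Double DQN target: the online net picks the action (argmax),
    the target net supplies that action's value estimate."""
    if done:
        return reward
    best_action = int(np.argmax(next_q_online))      # action chosen by online net
    return reward + gamma * next_q_target[best_action]  # value from target net

# Toy example: the two networks disagree on the best action's value.
q_online = np.array([0.2, 0.9, 0.1])  # online net prefers action 1
q_target = np.array([0.5, 0.3, 0.4])  # target net's value estimates
y = ddqn_target(reward=1.0, next_q_online=q_online, next_q_target=q_target)
print(round(y, 4))  # 1.0 + 0.99 * 0.3 = 1.297
```

In a band-selection setting, an action would correspond to choosing a candidate band, but the state encoding and reward shaping are specific to the paper's method and are only assumed here.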
