Abstract
This paper presents a hierarchical deep framework called Spectral-Spatial Response (SSR) to jointly learn spectral and spatial features of Hyperspectral Images (HSIs) by iteratively abstracting neighboring regions. SSR forms a deep architecture and is able to learn discriminative spectral-spatial features of the input HSI at different scales. It includes several existing spectral-spatial-based methods as special scenarios within a single unified framework. Based on SSR, we further propose the Subspace Learning-based Networks (SLN) as an example of SSR for HSI classification. In SLN, the joint spectral and spatial features are learned using templates simply learned by Marginal Fisher Analysis (MFA) and Principal Component Analysis (PCA). An important contribution to the success of SLN is the exploitation of label information of training samples and the local spatial structure of HSI. Extensive experimental results on four challenging HSI datasets taken from the Airborne Visible-Infrared Imaging Spectrometer (AVIRIS) and Reflective Optics System Imaging Spectrometer (ROSIS) airborne sensors show the implementational simplicity of SLN and verify the superiority of SSR for HSI classification.
Highlights
Hyperspectral Image (HSI) classification has recently gained popularity and attracted interest in many fields, including assessment of environmental damage, growth regulation, land use monitoring, urban planning, and reconnaissance [1,2,3,4,5]
A main contribution of the presented framework is the use of a hierarchical deep architecture to learn joint spectral-spatial features, which suits the nature of HSI
It uses a new strategy to learn the spectral-spatial features of HSIs
Summary
Hyperspectral Image (HSI) classification has recently gained popularity and attracted interest in many fields, including assessment of environmental damage, growth regulation, land use monitoring, urban planning, and reconnaissance [1,2,3,4,5]. The Kernel-based Extreme Learning Machine (KELM) [15,16] was applied to HSI classification [17]. KELM trains a single-hidden-layer feedforward neural network by randomly generating the hidden-node parameters from certain probability distributions, an idea originally proposed in [18] and further developed in [19,20]. Such pixel-wise classification results may contain salt-and-pepper noise [24]
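The random-hidden-node idea behind ELM can be sketched briefly: only the output weights are fitted, here via a least-squares solve. This is a minimal illustrative sketch, not the paper's KELM implementation; all names, sizes, and the toy data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, Y_onehot, n_hidden=64):
    """Single-hidden-layer net: hidden weights are random, never trained."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer responses
    # Only the output weights beta are learned, by least squares.
    beta, *_ = np.linalg.lstsq(H, Y_onehot, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy usage: two well-separated Gaussian classes in a 10-band "spectral" space.
X = np.vstack([rng.normal(0, 1, (50, 10)), rng.normal(3, 1, (50, 10))])
y = np.repeat([0, 1], 50)
W, b, beta = elm_train(X, np.eye(2)[y])
acc = np.mean(elm_predict(X, W, b, beta) == y)
```

Because the hidden layer is fixed after random initialization, training reduces to a single linear solve, which is what makes ELM-style methods fast compared with backpropagation.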