Abstract

Joint spectral-spatial feature extraction has proven to be the most effective component of hyperspectral image (HSI) classification. However, because informative and noisy bands are mixed in HSI data, joint spectral-spatial feature extraction with a convolutional neural network (CNN) may lead to information loss and high computational cost. In particular, extracting joint features from an excessive number of bands can discard spectral information, since the convolution operation is also applied to non-informative spectral bands. We therefore propose a simple yet effective deep learning model, named deep hierarchical spectral-spatial feature fusion (DHSSFF), in which spectral and spatial features are extracted separately to reduce information loss and the resulting deep features are fused to learn semantic information. The model uses the abundant spectral bands of the HSI for spectral feature extraction and a few informative bands for spatial feature extraction; the spectral and spatial features are extracted with a 1D CNN and a 3D CNN, respectively. To validate the effectiveness of our model, experiments were performed on five well-known HSI datasets. The experimental results demonstrate that the proposed method outperforms other state-of-the-art methods, achieving 99.17%, 98.84%, 98.70%, 99.18%, and 99.24% overall accuracy on the Kennedy Space Center, Botswana, Indian Pines, University of Pavia, and Salinas datasets, respectively.
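To make the two-branch idea concrete, the following is a minimal PyTorch sketch of a spectral-spatial fusion network in the spirit of DHSSFF, assuming hypothetical layer sizes, band counts, and patch size; the paper's exact architecture, hierarchical fusion depth, and band-selection procedure are not specified in the abstract. A 1D CNN branch consumes the full spectrum of each pixel, a 3D CNN branch consumes a small spatial patch built from a few selected bands, and their deep features are concatenated before classification.

```python
# Hypothetical sketch of a 1D-CNN (spectral) + 3D-CNN (spatial) fusion network.
# All layer widths, band counts, and the patch size are illustrative assumptions,
# not the published DHSSFF configuration.
import torch
import torch.nn as nn


class SpectralSpatialFusionNet(nn.Module):
    def __init__(self, n_bands=200, n_selected_bands=30, n_classes=16):
        super().__init__()
        # Spectral branch: 1D convolutions over the full spectral vector of the centre pixel.
        self.spectral_branch = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),          # -> (B, 64)
        )
        # Spatial branch: 3D convolutions over a patch built from a few informative bands.
        self.spatial_branch = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),          # -> (B, 32)
        )
        # Fusion head: concatenate the deep features of both branches and classify.
        self.classifier = nn.Sequential(
            nn.Linear(64 + 32, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, spectral, spatial):
        # spectral: (B, 1, n_bands); spatial: (B, 1, n_selected_bands, H, W)
        f_spec = self.spectral_branch(spectral)
        f_spat = self.spatial_branch(spatial)
        return self.classifier(torch.cat([f_spec, f_spat], dim=1))


if __name__ == "__main__":
    model = SpectralSpatialFusionNet()
    spec = torch.randn(4, 1, 200)        # full spectrum of each centre pixel
    spat = torch.randn(4, 1, 30, 9, 9)   # 9x9 patch over 30 selected bands
    print(model(spec, spat).shape)       # torch.Size([4, 16])
```

Keeping the two branches separate, as in this sketch, is what allows the spectral path to see every band while the costlier 3D convolutions operate only on the reduced band subset.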
