Abstract

In this paper, a CapsuleNet-based framework is proposed for extracting spectral and spatial features to improve hyperspectral image classification. Unlike conventional strategies, the proposed framework optimizes feature extraction and classification simultaneously. The spectral features/patterns derived at different levels of the hierarchy are remodeled as spectral-feature capsules. Consequently, unlike conventional convolutional neural network (CNN)-based approaches, the relative locations of the spectral patterns, as well as other properties such as their depth, width, and position, are taken into consideration. In addition to learning spectral features/patterns, a convolutional long short-term memory (conv-LSTM) network is employed to sequentially integrate the spatial features learned from each band. The integrated spatial-feature representation, obtained from the final hidden state of the conv-LSTM, forms the spatial-feature capsules. The capsule-level integration of spatial and spectral features/patterns yields better convergence and accuracy than both ensemble-based and kernel-level integration. Along with the margin loss, a spectral-angle-based reconstruction loss is minimized to regularize the learning of the network weights. Experiments on several standard datasets indicate that the proposed approach outperforms other prominent hyperspectral classifiers. Furthermore, compared with recent deep learning models, our approach is less sensitive to the choice of network parameters and achieves better accuracy even with a smaller network depth.
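
A minimal sketch of the training objective described above may help make it concrete. The snippet below combines the standard capsule margin loss (with the constants m+ = 0.9, m- = 0.1, lambda = 0.5 from the original CapsNet formulation of Sabour et al.) with a spectral-angle (SAM) reconstruction term on the input spectrum; the function names, the PyTorch framing, and the weighting factor alpha are illustrative assumptions, not details taken from the paper.

    import torch
    import torch.nn.functional as F

    def margin_loss(caps_lengths, labels, m_pos=0.9, m_neg=0.1, lam=0.5):
        """Capsule margin loss.
        caps_lengths: (B, num_classes) lengths ||v_k|| of the class capsules.
        labels:       (B,) integer class indices.
        """
        t = F.one_hot(labels, caps_lengths.size(1)).float()
        pos = t * F.relu(m_pos - caps_lengths).pow(2)
        neg = lam * (1.0 - t) * F.relu(caps_lengths - m_neg).pow(2)
        return (pos + neg).sum(dim=1).mean()

    def spectral_angle_loss(x, x_rec, eps=1e-8):
        """Spectral-angle (SAM) reconstruction loss between the input
        spectrum x and its reconstruction x_rec, both of shape (B, bands)."""
        cos = F.cosine_similarity(x, x_rec, dim=1, eps=eps)
        return torch.acos(cos.clamp(-1.0 + eps, 1.0 - eps)).mean()

    def total_loss(caps_lengths, labels, x, x_rec, alpha=0.0005):
        # alpha weights the reconstruction regularizer; its value here is an
        # assumption, not a setting reported in the paper.
        return margin_loss(caps_lengths, labels) + alpha * spectral_angle_loss(x, x_rec)

In such a setup, caps_lengths would come from the class capsules and x_rec from a decoder that reconstructs the input spectrum; the SAM term penalizes angular rather than Euclidean deviation, which matches the spectral-angle view of hyperspectral signatures.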
