Abstract
Feature redundancy and a shortage of labeled samples impede the widespread application of hyperspectral imagery. Many researchers have pursued hyperspectral image classification with increasingly complex networks, which introduces instability and leads to poor performance when labeled samples are insufficient. The motivation of this paper is to improve classification accuracy by enhancing the classification stage rather than the feature extraction stage. Unlike existing works that rely on elaborate network designs and complex training strategies, this work integrates a vector-neuron capsule representation with a vanilla fully convolutional network (FCN) to improve hyperspectral image classification when labeled samples are insufficient. The proposed network, named the capsule-vectored neural network (CVNN), consists of an encoder–decoder feature learning part and a vector-neuron capsule transformation part. In the encoder–decoder part, down-sampling is applied twice within the fully convolutional framework. After feature extraction by the vanilla FCN, the output is transformed into a vectorial representation with a learnable transformation matrix. All vector neurons share a unified dimension, and the norm of each vector neuron serves as the logit fed into the softmax function to obtain the posterior probability matrix. Experimental results confirm that CVNN handles deep learning modeling well with limited labeled samples, and its classification accuracy is also higher than that of other advanced classification approaches.
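To make the described classification stage concrete, the following is a minimal sketch of a vector-neuron capsule head on top of a two-stage down-sampling FCN, assuming a PyTorch-style implementation; the layer widths, capsule dimension, and class count are illustrative assumptions and not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleFCN(nn.Module):
    """Vanilla encoder-decoder FCN with two down-sampling steps (illustrative widths)."""
    def __init__(self, in_bands, width=64):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_bands, width, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU())  # down-sampling 1
        self.enc3 = nn.Sequential(nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU())  # down-sampling 2
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(width, width, 2, stride=2), nn.ReLU())
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(width, width, 2, stride=2), nn.ReLU())

    def forward(self, x):
        x = self.enc3(self.enc2(self.enc1(x)))
        return self.dec2(self.dec1(x))

class CapsuleVectorHead(nn.Module):
    """Transforms FCN features into per-class capsule vectors of a unified dimension;
    the vector norm is used as the logit fed into softmax."""
    def __init__(self, in_channels, num_classes, capsule_dim=16):
        super().__init__()
        # Learnable transformation from feature channels to (classes x capsule_dim)
        self.transform = nn.Conv2d(in_channels, num_classes * capsule_dim, kernel_size=1)
        self.num_classes = num_classes
        self.capsule_dim = capsule_dim

    def forward(self, feats):
        caps = self.transform(feats)                          # (B, K*D, H, W)
        B, _, H, W = caps.shape
        caps = caps.view(B, self.num_classes, self.capsule_dim, H, W)
        logits = caps.norm(dim=2)                             # vector norm per class: (B, K, H, W)
        return F.softmax(logits, dim=1)                       # per-pixel posterior probabilities

# Usage on a toy hyperspectral patch (hypothetical: 200 bands, 64x64 pixels, 9 classes)
fcn = SimpleFCN(in_bands=200)
head = CapsuleVectorHead(in_channels=64, num_classes=9, capsule_dim=16)
x = torch.randn(1, 200, 64, 64)
probs = head(fcn(x))  # (1, 9, 64, 64) posterior probability map
```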