Abstract

Deep neural network (DNN) methods play an essential role in hyperspectral classification. However, the massive number of parameters and the heavy computing overhead of DNNs must be reduced for deployment on devices with limited storage and computing resources in real-time applications, especially given the high dimensionality of hyperspectral images. Dimension reduction (DR) is therefore a crucial pre-processing step in many studies, yet most of them ignore feature restoration after the data transformation performed by DR. On the network side, many works still rely on sophisticated skip connections and dense feature reuse, which can lead to feature redundancy and increased computational complexity, especially when DR has already been applied. Motivated by these issues, an efficient joint framework assisted by an embedded feature smoother (FS) and sparse skip connections (SSC) is proposed in this article. Instead of feeding DR data directly into the subsequent network, we embed a computationally cheap FS based on isotropic total variation to restore and enhance the spatial features. Furthermore, we propose an SSC 3-D convolutional neural network for spatial–spectral feature representation and classification. The SSC is embodied in a log2(n)-skip connection design that concatenates selected feature maps instead of densely connecting all of them, pruning the number of channels and reducing the model parameters. Experimental results show that the embedded FS significantly improves classification accuracy and outperforms other pre-processing methods. Our framework is also superior to other state-of-the-art deep-learning-based methods in both classification performance and model compactness, especially when few training samples are used. Moreover, when all processing steps are taken into account, our framework has competitively lower time consumption.
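
As a rough illustration of the two building blocks named above, the sketch below applies a Chambolle-type isotropic total-variation denoiser to each band of a dimension-reduced cube and lists which earlier layers a log2(n)-skip connection might concatenate. The function names (smooth_bands, log2_skip_sources), the use of scikit-image's denoise_tv_chambolle as a stand-in for the FS, and the specific skip-offset pattern are illustrative assumptions, not the authors' exact implementation.

    # Minimal sketch (not the authors' code): per-band isotropic TV smoothing of a
    # dimension-reduced hyperspectral cube, plus one possible reading of the
    # log2(n)-skip source selection. All names here are illustrative.
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle  # Chambolle isotropic TV

    def smooth_bands(cube, weight=0.1):
        """Apply isotropic TV denoising to each band of an (H, W, B) cube."""
        smoothed = np.empty_like(cube, dtype=np.float64)
        for b in range(cube.shape[-1]):
            smoothed[..., b] = denoise_tv_chambolle(cube[..., b], weight=weight)
        return smoothed

    def log2_skip_sources(layer_idx):
        """Earlier layers to concatenate: offsets 1, 2, 4, ... (about log2(n) of them)."""
        sources, offset = [], 1
        while layer_idx - offset >= 0:
            sources.append(layer_idx - offset)
            offset *= 2
        return sources

    # Example: after DR to, say, 20 bands, smooth the reduced cube before the 3-D CNN.
    cube = np.random.rand(145, 145, 20)      # stand-in for a DR-reduced scene
    cube_fs = smooth_bands(cube, weight=0.1)
    print(log2_skip_sources(8))              # -> [7, 6, 4, 0]

Compared with a dense connection, which would concatenate all eight preceding feature maps at layer 8, the log2(n) pattern above keeps only four, which is the channel-pruning effect the abstract describes.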
