Abstract

Under complex illumination conditions, the spectral distributions of a given material appear inconsistent across hyperspectral images of a space target, making accurate material identification difficult when only spectral features and local spatial features are used. To address this problem, a material identification method based on an improved graph convolutional neural network is proposed. Superpixel segmentation is applied to the hyperspectral images to build a multiscale joint topological graph of the space target's global structure. From this graph, topological graphs containing the global spatial and spectral features of each pixel are generated, and pixel neighborhoods containing the local spatial and spectral features are collected; together these form the material identification datasets. A graph convolutional neural network (GCN) and a three-dimensional convolutional neural network (3-D CNN) are then combined into one model using an addition, element-wise multiplication, or concatenation strategy, and the model is trained on the datasets to fuse and learn the three kinds of features. On both simulated and measured data, the overall accuracy of the proposed method remains at 85–90%, and the kappa coefficient remains around 0.8. These results demonstrate that the proposed method improves material identification performance under complex illumination conditions, with high accuracy and strong robustness.
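The three fusion strategies named in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `fuse` and the toy per-pixel feature vectors are assumptions, standing in for the outputs of the GCN branch (global spatial + spectral features) and the 3-D CNN branch (local spatial + spectral features).

```python
from typing import List

def fuse(gcn_feat: List[float], cnn_feat: List[float], strategy: str) -> List[float]:
    """Combine two per-pixel feature vectors from the GCN and 3-D CNN branches.

    'add' and 'mul' require equal-length vectors and preserve dimensionality;
    'concat' stacks the two vectors, doubling the feature dimension.
    (Hypothetical helper; the paper's actual fusion layer is not specified here.)
    """
    if strategy == "add":
        return [g + c for g, c in zip(gcn_feat, cnn_feat)]
    if strategy == "mul":
        return [g * c for g, c in zip(gcn_feat, cnn_feat)]
    if strategy == "concat":
        return gcn_feat + cnn_feat
    raise ValueError(f"unknown strategy: {strategy}")

# Toy feature vectors (illustrative values only)
g = [1.0, 2.0]
c = [3.0, 4.0]
print(fuse(g, c, "add"))     # [4.0, 6.0]
print(fuse(g, c, "mul"))     # [3.0, 8.0]
print(fuse(g, c, "concat"))  # [1.0, 2.0, 3.0, 4.0]
```

In practice the fused vector would then be passed to a classifier head that predicts the material class for each pixel; concatenation keeps both branches' information intact at the cost of a larger feature dimension, while addition and multiplication keep the model compact.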
