Abstract

Hyperspectral images contain ground objects at a variety of scales, and long-range dependencies between distant ground objects must be modeled to fully extract the global spatial information of the image. However, most existing methods struggle to capture multi-scale information and global features simultaneously. Therefore, we combine two algorithms, MCNN and LSTM, and propose the MCNN–LSTM algorithm. The MCNN–LSTM model first performs multiple convolution operations on the image, and the output of each pooling layer undergoes feature fusion in a fully connected layer. Then, the fully connected outputs at multiple scales are fused with an attention mechanism to alleviate information redundancy in the network. Next, the fused fully connected features are fed into the LSTM network, which captures the global information of the image more efficiently. In addition, to ensure the model meets the expected standard, a loop-control module is added after the fully connected layer of the LSTM network to share weight information across multiple rounds of training. Finally, multiple public datasets are adopted for testing. The experimental results demonstrate that the proposed MCNN–LSTM model effectively extracts multi-scale features and global information from hyperspectral images, thus achieving higher classification accuracy.
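The pipeline described above can be sketched in two steps: multi-scale pooling of a convolutional feature map followed by feature fusion, and an LSTM step over the fused vector. The sketch below is a minimal NumPy illustration of these two ideas only; the function names, pooling scales, and dimensions are illustrative assumptions, and the attention fusion and loop-control module from the paper are omitted.

```python
import numpy as np

def multiscale_features(x, scales=(1, 2, 4)):
    """Average-pool an (H, W, C) feature map at several scales and
    concatenate the flattened results -- an illustrative stand-in for
    the multi-scale feature fusion step (names/scales are assumptions).
    """
    feats = []
    for s in scales:
        h, w, c = x.shape
        # Crop so height/width divide evenly, then block-average-pool.
        pooled = (x[:h - h % s, :w - w % s]
                  .reshape(h // s, s, w // s, s, c)
                  .mean(axis=(1, 3)))
        feats.append(pooled.reshape(-1))
    return np.concatenate(feats)

def lstm_step(x, h, c, W, U, b):
    """One standard LSTM cell step: input, forget, and output gates plus
    a candidate state, computed from input x and previous hidden state h.
    """
    z = W @ x + U @ h + b                      # pre-activations, shape (4*H,)
    i, f, g, o = np.split(z, 4)                # gate slices
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    g = np.tanh(g)                             # candidate cell state
    c_new = f * c + i * g                      # gated cell update
    h_new = o * np.tanh(c_new)                 # gated hidden output
    return h_new, c_new
```

Feeding the fused multi-scale vector into the LSTM cell, rather than raw pixels, is what lets the recurrent stage summarize information that the convolutional stage has already aggregated at several spatial resolutions.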
