Abstract

In traditional remote sensing image recognition, handcrafted features (e.g., color and texture features) cannot fully describe complex images or capture the relationships between image pixels, and a single model or a conventional sequential joint model easily loses deep features during feature mining. This article proposes a new feature extraction method that uses the word embedding method from natural language processing to generate bidirectional real-valued dense vectors reflecting the contextual relationships between pixels. A bidirectional independent recurrent neural network (BiIndRNN) is combined with a convolutional neural network (CNN) to improve the sliced recurrent neural network (SRNN) model, which is then placed in parallel with graph convolutional networks (GCNs) under an attention mechanism to fully exploit the deep features of images and capture the semantic information of the context. The resulting model is named the improved SRNN and attention-treated GCN-based parallel (SAGP) model. Experiments on Populus euphratica forests demonstrate that the proposed method outperforms traditional methods in recognition accuracy, and validation on a public data set confirms this result.
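The abstract describes fusing two parallel branches (the improved SRNN branch and the GCN branch) under an attention mechanism. The following minimal sketch illustrates one common form of such attention-weighted fusion; the function and variable names are hypothetical and the branch outputs are toy placeholders, not the paper's actual implementation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def sagp_fuse(srnn_features, gcn_features, attn_logits):
    """Hypothetical sketch: combine the outputs of two parallel branches
    (an improved-SRNN branch and a GCN branch) using attention weights
    derived from a softmax over learned logits."""
    w = softmax(attn_logits)  # weights over the two branches, summing to 1
    return w[0] * srnn_features + w[1] * gcn_features

# Toy feature vectors standing in for each branch's output
srnn_out = np.array([0.2, 0.8, 0.5])
gcn_out = np.array([0.6, 0.1, 0.4])

# Equal logits give equal weights, so the fusion reduces to a simple average
fused = sagp_fuse(srnn_out, gcn_out, np.array([1.0, 1.0]))
```

In a trained model, `attn_logits` would be produced by a small learned scoring network rather than fixed, letting the model decide how much each branch contributes per input.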
