Abstract

Small samples with high feature dimensionality and few instances cause serious problems when processed with a traditional Elman neural network: poor learning ability, a redundant structure, and incomplete training, which in turn result in low operating efficiency and poor recognition precision. In this paper, an optimized Elman neural network classification algorithm based on partial least squares (PLS) and a genetic algorithm (GA), denoted PLS-GA-Elman, is established by combining the theories of PLS and GA with the characteristics of the Elman neural network. The new algorithm first reduces the feature dimension of the small sample by PLS to obtain relatively ideal low-dimensional data, thereby reducing the number of inputs to the neural network and simplifying its structure. A GA is then used to optimize the connection weights, threshold values, and number of hidden neurons; by encoding these components separately and evolving them simultaneously, the algorithm remedies the incomplete training caused by the small number of samples and improves training speed and generalization ability, ensuring an optimal Elman neural network. This twice-consecutive optimization forms the basis of a precise classification model. Experimental results show that both the operating efficiency and the classification precision of the new algorithm are improved.
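The GA step described above encodes both the network structure (number of hidden neurons) and the connection weights in one chromosome and evolves them simultaneously. The following is a minimal, hypothetical sketch of that idea (not the authors' code): a chromosome holds a hidden-neuron count followed by a fixed-length weight list, and a simple GA with one-point crossover and Gaussian mutation minimizes squared error on a toy dataset. All names, bounds, and GA parameters here are illustrative assumptions.

```python
# Hypothetical sketch: a GA jointly evolving hidden-layer size and weights
# of a minimal feed-forward network, mirroring the "encode separately,
# evolve simultaneously" idea in the abstract. Pure standard library.
import math
import random

random.seed(0)

MAX_HIDDEN = 6   # assumed upper bound on hidden neurons
N_INPUTS = 2     # toy problem: 2 inputs, 1 output
CHROM_LEN = 1 + MAX_HIDDEN * N_INPUTS + MAX_HIDDEN

def decode(chrom):
    """Chromosome = [hidden_count] + flat weight list (fixed max length)."""
    h = max(1, min(MAX_HIDDEN, int(round(chrom[0]))))
    w = chrom[1:]
    w_in = [w[i * N_INPUTS:(i + 1) * N_INPUTS] for i in range(h)]
    w_out = w[MAX_HIDDEN * N_INPUTS:MAX_HIDDEN * N_INPUTS + h]
    return h, w_in, w_out

def forward(chrom, x):
    h, w_in, w_out = decode(chrom)
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(w_in[j], x)))
              for j in range(h)]
    return sum(wo * hj for wo, hj in zip(w_out, hidden))

# toy training set: XOR-like target on two binary inputs
DATA = [([0, 0], 0.0), ([0, 1], 1.0), ([1, 0], 1.0), ([1, 1], 0.0)]

def fitness(chrom):
    # negated sum of squared errors, so larger is better
    return -sum((forward(chrom, x) - y) ** 2 for x, y in DATA)

def random_chrom():
    return ([random.uniform(1, MAX_HIDDEN)] +
            [random.uniform(-2, 2) for _ in range(CHROM_LEN - 1)])

def evolve(pop_size=40, generations=200):
    pop = [random_chrom() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]        # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, CHROM_LEN)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:             # Gaussian mutation
                i = random.randrange(CHROM_LEN)
                child[i] += random.gauss(0, 0.5)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("final squared error:", -fitness(best))
```

Because the elitist survivors are carried over unchanged, the best error is non-increasing across generations; the sketch omits the PLS preprocessing and the Elman context layer for brevity.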
