Abstract

An optimized neural network classification method based on kernel holistic learning and division (KHLD) is presented. The method takes the learned radial basis function (RBF) kernel as its research object: each kernel can be regarded as a subspace region of the training sample space occupied by a single pattern category. By extending the original instances to the subspace region they occupy, relevant information between instances can be obtained from the subspace, and the classifier's boundary can be pushed farther from the original instances; thus, the robustness and generalization performance of the classifier are enhanced. In the concrete implementation, a new pattern vector is generated within each RBF kernel via an instance optimization and screening method to characterize KHLD. Experiments on artificial datasets and several UCI benchmark datasets show the effectiveness of our method.
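
To make the kernel-region idea concrete, here is a minimal sketch of generating new pattern vectors inside one learned RBF kernel region, assuming an isotropic Gaussian kernel with a center and a width sigma. The function name, the sampling scheme, and the acceptance threshold are illustrative assumptions, not the paper's actual instance optimization and screening procedure.

```python
# Hypothetical sketch of the KHLD idea from the abstract: treat each learned
# RBF kernel (center c, width sigma) as a subspace region belonging to one
# pattern category, and sample new pattern vectors inside that region.
# All names and thresholds here are illustrative, not from the paper.

import numpy as np

def sample_within_rbf_kernel(center, sigma, n_new, rng=None):
    """Draw n_new synthetic pattern vectors inside one RBF kernel region.

    Candidates are drawn from an isotropic Gaussian around the kernel
    center and kept only if their RBF activation exceeds a threshold,
    so every accepted point lies well inside the kernel's region.
    """
    rng = np.random.default_rng() if rng is None else rng
    accepted = []
    while len(accepted) < n_new:
        x = rng.normal(loc=center, scale=sigma, size=center.shape)
        activation = np.exp(-np.sum((x - center) ** 2) / (2.0 * sigma ** 2))
        if activation > 0.6:  # illustrative threshold: stay near the center
            accepted.append(x)
    return np.array(accepted)

# Usage: augment a two-class training set, one kernel per class center.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    new_a = sample_within_rbf_kernel(np.array([0.0, 0.0]), 0.5, 20, rng)
    new_b = sample_within_rbf_kernel(np.array([3.0, 3.0]), 0.5, 20, rng)
    print(new_a.shape, new_b.shape)  # (20, 2) (20, 2)
```

Rejecting candidates whose activation falls below the threshold keeps the synthetic vectors well inside a single kernel's region, mirroring the intuition that augmented instances should remain within one category's subspace.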

Highlights

  • The performance of kernel holistic learning and division (KHLD) is evaluated with two artificial datasets: Double Moon (DM) [25] and Concrete Circle (CC); 8 UCI benchmark datasets [26]: Blood, Climate, Heart Disease (HD), Sonar, SPECT Heart (SH), Image Segmentation (IS), Forest, and Wilt; and 1 LIBSVM benchmark dataset [27]

  • The performance of KHLD combined with existing classification algorithms is compared against those algorithms, including SVM [27], ELM [24], HSARBF-ELM, a constrained optimization method based on a BP neural network (CO-BP) [28], and an optimized radial basis function (RBF) network based on fractional-order gradient descent with momentum (FOGDM-RBF) [29]


Introduction

In the field of pattern recognition, set classification [1,2,3] is a common classification task. It is widely applied in text classification, speech recognition, image recognition, and many other fields. In video-based recognition, for example, relevant information from adjacent frames can be exploited to effectively capture image changes under real conditions. Unlike these set classification methods, almost all current neural network [4,5,6,7] optimization algorithms and models are trained on and classify individual instances rather than learning and partitioning the subspace regions that contain those instances. Because the classification surface of a network classifier is essentially determined by the probability distribution of the training samples, the final classification error will be relatively large when the training set is too small or the dimension of the dataset is too high, which reduces the generalization performance of the neural network classifier.
