Abstract

In this study, a classification algorithm based on complex-valued features is proposed. Specifically, the SVM framework is reformulated so that each example is classified in the unitary space. The cost function is redefined to consider the maximum margins of the real and imaginary parts of the complex-valued features simultaneously. The cost function is based on the expectation of the hinge loss, and its derivatives can be calculated in closed form, which allows an efficient implementation using a stochastic gradient descent (SGD) algorithm. To model example uncertainty in the complex-valued features, a sample preprocessing method based on a within-class Euclidean distance Gaussian distribution sample (DGS) is introduced. In addition, a complex-valued feature selection method based on improved hybrid discrimination analysis (HDA) is proposed, which accounts for the correlation between the real and imaginary parts of the features. The proposed classification algorithm is tested on synthetic data and three popular, publicly available datasets: MNIST, WDBC, and VOC2012. Experimental results verify the effectiveness of the proposed method. The code is available at https://github.com/luckysomebody/paper-code.
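The exact form of the redefined cost function is not given in this summary. A minimal sketch of one plausible formulation, assuming separate hinge margins on the real and imaginary parts of the features with a shared bias (the weight names, regularization term, and shared-bias choice here are illustrative assumptions, not the paper's definitions):

```python
import numpy as np

def complex_hinge_loss(w_re, w_im, b, X, y):
    """Average hinge loss applied jointly to the real and imaginary parts
    of complex-valued features X (shape: n_samples x n_features, complex
    dtype). Labels y are in {-1, +1}. Both margins are penalized at the
    same time, as a stand-in for the paper's redefined cost function."""
    m_re = y * (X.real @ w_re + b)   # margins on the real parts
    m_im = y * (X.imag @ w_im + b)   # margins on the imaginary parts
    return np.maximum(0.0, 1.0 - m_re).mean() + np.maximum(0.0, 1.0 - m_im).mean()

def sgd_step(w_re, w_im, b, x, y, lr=0.01, lam=1e-3):
    """One SGD step on a single example (x, y), using hinge subgradients
    where the loss is not differentiable and L2 regularization lam."""
    if y * (x.real @ w_re + b) < 1.0:     # real-part margin violated
        w_re += lr * (y * x.real - lam * w_re)
        b += lr * y
    else:
        w_re -= lr * lam * w_re
    if y * (x.imag @ w_im + b) < 1.0:     # imaginary-part margin violated
        w_im += lr * (y * x.imag - lam * w_im)
        b += lr * y
    else:
        w_im -= lr * lam * w_im
    return w_re, w_im, b
```

On separable synthetic complex data, repeatedly applying `sgd_step` over the examples drives `complex_hinge_loss` toward zero, mirroring ordinary hinge-loss SGD for a real-valued linear SVM.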

Highlights

  • In comparison with deep learning methods [1], [2], SVM has the advantage of requiring only a small number of training samples [3]

  • The main innovations of this paper are as follows: first, the hinge cost function is redefined by considering the maximum margins of the real and imaginary parts of the complex-valued features

  • The results show that the proposed method outperforms other feature selection methods in recall, precision, and F1 score
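The recall, precision, and F1 measures referenced above are the standard binary-classification metrics. A small self-contained sketch, assuming labels in {-1, +1} with +1 as the positive class:

```python
def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall, and F1 for binary labels in {-1, +1},
    treating +1 as the positive class. Zero denominators yield 0.0."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == -1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == -1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```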


Introduction

In comparison with deep learning methods [1], [2], SVM has the advantage of requiring only a small number of training samples [3]. Many scholars continue to improve its baselines and classification accuracy. For example, X. Zhang proposed an improved multiple birth support vector machine [4]; F. Deng combined SVM with error-correcting output coding to create ECOC-SVM [5]; Y. Li proposed an improved SVM based on a binary tree [6]; the cuckoo search algorithm was used to optimize the kernel function and penalty factor of SVM to improve its forecasting accuracy [7]; and J. Xi introduced a Markov-resampling-based ISVM (MR-ISVM) to improve training time and accuracy [8]. YT proposed online incremental and decremental learning
