Abstract

A small number of samples combined with a large number of input features increases the complexity of a classifier and degrades its stability. Thus, dimension reduction is typically carried out before supervised learning algorithms such as neural networks. This two-stage framework introduces redundancy between dimension reduction and network training. This paper proposes a novel one-stage learning algorithm for high-dimension, small-sample data, called the Projection Vector Machine (PVM), which combines dimension reduction with network training and removes this redundancy. Through a dimension reduction operation such as singular value decomposition (SVD), we not only reduce the dimension but also simultaneously obtain the size of the single-hidden-layer feedforward neural network (SLFN) and its input weight values. With the network size fixed, the remaining problem becomes a linear system, and the output weights can be determined by a simple least-squares method. Unlike a traditional backpropagation feedforward neural network (BP), the parameters of PVM need no iterative tuning, so its training is much faster than BP. Unlike the extreme learning machine (ELM) proposed by Huang et al. [G.-B. Huang, Q.-Y. Zhu, C.-K. Siew, Extreme learning machine: theory and applications, Neurocomputing 70 (2006) 489–501], which assigns input weights randomly, PVM ranks its input weights by singular value and selects the optimal weight ordering accordingly. We prove that PVM is a universal approximator for high-dimension, small-sample data. Experimental results show that the proposed one-stage algorithm PVM is faster than two-stage learning approaches such as SVD+BP and SVD+ELM.
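
The following is a minimal sketch, in Python/NumPy, of the training procedure as described in the abstract: the right singular vectors of the data matrix serve as the hidden-layer input weights (ranked by singular value), and the output weights are obtained by least squares. The function names, the tanh activation, and the absence of bias terms are illustrative assumptions, not details given in the paper.

```python
import numpy as np

def pvm_train(X, T, k):
    """Sketch of one-stage PVM training (assumed interface).

    X: (n_samples, n_features) input matrix
    T: (n_samples, n_outputs) target matrix
    k: number of hidden nodes, i.e. retained singular vectors
    """
    # Thin SVD of the data matrix; singular values arrive in descending order.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Top-k right singular vectors become the input weights,
    # so k fixes the SLFN hidden-layer size.
    W = Vt[:k].T                      # (n_features, k)
    # Hidden-layer output matrix (activation choice is an assumption).
    H = np.tanh(X @ W)                # (n_samples, k)
    # Output weights from the linear system H @ beta = T via least squares.
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)
    return W, beta

def pvm_predict(X, W, beta):
    return np.tanh(X @ W) @ beta
```

In this sketch no weight is tuned iteratively: the SVD fixes the input weights and the least-squares solve fixes the output weights, which is the source of the speed advantage claimed over SVD+BP.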
