Abstract

Extreme learning machines (ELMs), especially kernel ELMs (KELMs), have achieved great success in providing efficient and effective solutions to classification problems. This paper proposes a simple but effective expectation kernel ELM (EKELM) to improve the classification ability of ELMs. EKELM is built on a new family of positive semidefinite (PSD) kernel functions, called expectation kernels (EKs), which learn similarities between data samples by combining the advantages of random feature mapping and conventional kernel functions. EKs offer a new perspective for modeling ELMs as kernel approximation via random sampling. In particular, we show that the distribution of the random sampling weights, i.e., the input weights of the ELM, strongly influences classification performance, and we therefore use Gaussian distributions to generate the random weights in EKELM. Moreover, the number of random samples, i.e., the number of hidden neurons in the ELM, can be reduced by choosing a proper nonlinear kernel function. We evaluate the proposed EKELM on 20 benchmark classification datasets. The results show that generating random sampling weights from a Gaussian distribution yields better performance than a uniform distribution. They also show that the nonlinear RBF EKELM achieves classification performance comparable to the conventional RBF kernel while requiring far fewer random samples than the linear EKELM.
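The core idea, approximating a kernel by averaging over randomly sampled Gaussian weights, can be illustrated with a minimal sketch. This is not the paper's exact EK construction (which is not reproduced here); it is a generic random-feature approximation of the RBF kernel in the style of random Fourier features, where the Gaussian matrix `W` plays the role of the ELM's random input weights and `D` corresponds to the number of hidden neurons:

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_features(X, W, b):
    # Random feature map: phi(x) = sqrt(2/D) * cos(W x + b).
    # The expectation E[phi(x)^T phi(y)] over W, b converges to k(x, y).
    D = W.shape[0]
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

def approx_rbf_kernel(X, Y, D=5000, gamma=1.0):
    d = X.shape[1]
    # Gaussian sampling weights, matching the spectral density of the RBF kernel
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(D, d))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return rff_features(X, W, b) @ rff_features(Y, W, b).T

def exact_rbf_kernel(X, Y, gamma=1.0):
    # k(x, y) = exp(-gamma * ||x - y||^2)
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

X = rng.normal(size=(5, 3))
K_hat = approx_rbf_kernel(X, X)
K = exact_rbf_kernel(X, X)
print(np.abs(K_hat - K).max())  # Monte Carlo error shrinks as D grows
```

The approximation error decays roughly as O(1/sqrt(D)), which is why the choice of kernel and of the sampling distribution for `W` matters: a kernel better matched to the data lets the model reach a given accuracy with fewer random samples, mirroring the abstract's claim that nonlinear RBF EKELM needs far fewer samples than the linear variant.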
