Abstract

Extreme learning machine (ELM) is one of the most popular and important learning algorithms. It is derived from single-hidden-layer feedforward neural networks. It has been shown that ELM can achieve better performance than the support vector machine (SVM) in regression and classification. In this paper, step 3 of the ELM algorithm, the computation of the output weights, is studied mathematically for the regression problem. First, the equation Hβ = T is reformulated as an optimization model. From the optimality conditions, the necessary conditions for an optimal solution are presented, and the equation Hβ = T is replaced by HᵀHβ = HᵀT. We prove that the latter always has at least one solution. Second, the optimal approximate solution is discussed for the cases in which H has full column rank, full row rank, or neither full column nor full row rank. In the last case, rank-1 and rank-2 methods are used to obtain the optimal approximate solution. In theory, this paper presents a better algorithm for ELM.
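
As a concrete illustration of the normal-equations step summarized above, the following minimal sketch (assuming NumPy; the function name elm_output_weights and the optional ridge term are illustrative choices, not taken from the paper) solves HᵀHβ = HᵀT for the output weights and covers the rank-deficient case with a minimum-norm least-squares solve.

import numpy as np

def elm_output_weights(H, T, reg=0.0):
    """Solve H @ beta = T for the ELM output weights beta via the
    normal equations H.T @ H @ beta = H.T @ T (least-squares sense).

    H   : (n_samples, n_hidden) hidden-layer output matrix
    T   : (n_samples, n_outputs) target matrix
    reg : optional ridge term to stabilise a rank-deficient H.T @ H
          (an illustrative addition, not part of the paper's method)
    """
    A = H.T @ H + reg * np.eye(H.shape[1])
    b = H.T @ T
    # lstsq returns a minimum-norm solution even when A is singular,
    # covering the case where H has neither full column nor full row rank.
    beta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return beta

# Toy regression example: 20 samples, 5 hidden nodes, 1 output.
rng = np.random.default_rng(0)
H = rng.standard_normal((20, 5))
T = rng.standard_normal((20, 1))
beta = elm_output_weights(H, T)
print(np.allclose(H.T @ H @ beta, H.T @ T))  # True: the normal equations hold

Because HᵀT always lies in the range of HᵀH, the system HᵀHβ = HᵀT is consistent, which is why the least-squares solve above recovers an exact solution regardless of the rank of H.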
