Abstract

Extreme learning machine (ELM) is one of the most popular and important learning algorithms, originating from single-hidden-layer feedforward neural networks. It has been shown that ELM can achieve better performance than the support vector machine (SVM) in regression and classification. In this paper, step 3 of ELM is studied mathematically for the regression problem. First, the equation Hβ = T is reformulated as an optimization model. From the necessary conditions for an optimal solution, the equation Hβ = T is replaced by H^T Hβ = H^T T, and we prove that the latter always has at least one solution. Second, the optimal approximation solution is discussed for the cases where H has full column rank, full row rank, or neither full column nor full row rank. In the last case, rank-1 and rank-2 methods are used to obtain the optimal approximation solution. In theory, this paper presents a better algorithm for ELM.

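The following is a minimal sketch, not the paper's exact algorithm, of the step the abstract describes: solving the ELM output weights β from Hβ = T through the normal equations H^T Hβ = H^T T, falling back to the minimum-norm least-squares solution when H^T H is singular. All names and shapes below are illustrative assumptions.

```python
import numpy as np

def elm_output_weights(H: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Return a least-squares solution beta of H beta = T.

    H: hidden-layer output matrix (N samples x L hidden nodes), assumed layout.
    T: target matrix (N x m), assumed layout.
    """
    L = H.shape[1]
    if np.linalg.matrix_rank(H) == L:
        # Column full rank: H^T H is invertible, so the normal equations
        # have the unique solution beta = (H^T H)^{-1} H^T T.
        return np.linalg.solve(H.T @ H, H.T @ T)
    # Otherwise use the Moore-Penrose pseudoinverse, which yields the
    # minimum-norm least-squares solution of H beta = T.
    return np.linalg.pinv(H) @ T

# Tiny usage example with random data.
rng = np.random.default_rng(0)
H = rng.standard_normal((100, 20))   # hidden-layer outputs
T = rng.standard_normal((100, 3))    # targets
beta = elm_output_weights(H, T)
print(beta.shape)                    # (20, 3)
```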