Abstract

Extreme learning machine (ELM) is known for its fast learning speed while maintaining acceptable generalisation. Its learning process can be divided into two parts: (1) input weights and hidden-layer biases are assigned randomly, and (2) output weights are determined analytically using the Moore-Penrose generalised inverse. Through both theoretical analysis and experiments, we show that it is the random weight assignment, rather than the analytical determination via the generalised inverse, that accounts for ELM's fast training speed. In fact, computing the generalised inverse of the hidden-layer output matrix via singular value decomposition (SVD) is very inefficient, especially on large-scale data, and can fail outright. Since this high computational complexity slows ELM's learning, we introduce the conjugate gradient method as a replacement for the Moore-Penrose generalised inverse and propose conjugate-gradient-based ELM (CG-ELM). Numerical simulations show that, in most cases, CG-ELM trains faster than ELM while maintaining similar generalisation. Even in cases where ELM cannot run because of the sheer volume of data, CG-ELM attains good performance, which confirms experimentally that the Moore-Penrose generalised inverse is not the source of ELM's fast learning speed.
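To make the contrast concrete, the following is a minimal sketch (not the paper's implementation) of single-hidden-layer ELM training in Python/NumPy. The tanh activation, toy data, and all variable names are illustrative assumptions; step 2 is solved once with the SVD-based Moore-Penrose pseudoinverse and once with conjugate gradient on the normal equations, which is the kind of substitution CG-ELM makes.

    # Sketch: ELM output-weight solve via SVD-based pseudoinverse vs.
    # conjugate gradient. Data, sizes, and activation are illustrative.
    import numpy as np
    from scipy.sparse.linalg import cg

    rng = np.random.default_rng(0)

    # Toy regression data: X is (n_samples, n_features), T is the target.
    X = rng.standard_normal((2000, 10))
    T = np.sin(X.sum(axis=1, keepdims=True))

    # Step 1: randomly assign input weights W and biases b (never trained).
    n_hidden = 200
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)          # hidden-layer output matrix

    # Step 2a: standard ELM -- output weights beta via the Moore-Penrose
    # pseudoinverse (np.linalg.pinv computes it through an SVD of H,
    # which is the costly step on large-scale data).
    beta_svd = np.linalg.pinv(H) @ T

    # Step 2b: CG-ELM-style alternative -- solve the normal equations
    # (H^T H) beta = H^T T iteratively with conjugate gradient,
    # avoiding the SVD entirely (one CG solve per output column).
    A = H.T @ H                     # symmetric positive semi-definite
    B = H.T @ T
    beta_cg = np.column_stack([cg(A, B[:, j])[0] for j in range(T.shape[1])])

    print("max |beta_svd - beta_cg| =", np.abs(beta_svd - beta_cg).max())

Prediction then reuses the same random W and b, e.g. Y = np.tanh(X_new @ W + b) @ beta_cg. On this toy problem the two solutions should agree closely; the practical difference is that the CG route replaces the SVD, whose cost dominates when the hidden-layer output matrix is large.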
