Abstract

The incremental extreme learning machine (I-ELM) has been proven to possess the universal approximation capability. However, two major issues lower its efficiency: first, some "random" hidden nodes are inefficient, which slows convergence and increases structural complexity; second, the final output weight vector is not the minimum norm least-squares solution, which degrades generalization. To address these issues, this paper proposes a simple and efficient algorithm in which the parameters of the even-numbered hidden nodes are calculated by fitting the residual error vector of the previous phase, after which all existing output weights are recursively updated based on the inverse of a partitioned matrix. The algorithm reduces the number of inefficient hidden nodes and yields an output weight vector that is always the minimum norm least-squares solution. Theoretical analyses and experimental results show that the proposed algorithm outperforms other incremental extreme learning machine algorithms in convergence rate, generalization capability, and structural complexity.
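The following is a minimal NumPy sketch of the alternating scheme the abstract describes, under stated assumptions: a sigmoid activation, a hypothetical logit-space least-squares rule for fitting even-numbered nodes to the residual (the abstract does not give the paper's exact fitting rule), and recomputation of the full output weight vector as the minimum norm least-squares solution via a pseudoinverse, which the paper obtains more efficiently through a recursive partitioned-matrix update.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_ielm(X, T, max_nodes=20, seed=0):
    """Sketch of the incremental scheme: odd-numbered hidden nodes get
    random parameters; even-numbered nodes are fitted to the current
    residual; after each addition the whole output weight vector is
    recomputed as the minimum norm least-squares solution."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    H = np.empty((n, 0))                 # accumulated hidden-layer outputs
    residual = T.copy()
    for k in range(1, max_nodes + 1):
        if k % 2 == 1:                   # odd node: random parameters
            w = rng.standard_normal(d)
            b = rng.standard_normal()
        else:                            # even node: fit to the residual
            # hypothetical fitting rule: squash the residual into the
            # sigmoid's range, invert the activation, then solve a
            # linear least-squares problem for the pre-activation
            r = residual[:, 0]
            r_scaled = (r - r.min()) / (np.ptp(r) + 1e-12) * 0.9 + 0.05
            z = np.log(r_scaled / (1.0 - r_scaled))   # logit
            A = np.hstack([X, np.ones((n, 1))])
            sol, *_ = np.linalg.lstsq(A, z, rcond=None)
            w, b = sol[:-1], sol[-1]
        h = sigmoid(X @ w + b).reshape(-1, 1)
        H = np.hstack([H, h])
        # minimum norm least-squares solution (pinv here; the paper uses
        # a recursive partitioned-matrix update with the same result)
        beta = np.linalg.pinv(H) @ T
        residual = T - H @ beta
    return H, beta

# usage: fit a noisy sine curve
X = np.linspace(-3, 3, 200).reshape(-1, 1)
T = np.sin(X) + 0.05 * np.random.default_rng(1).standard_normal((200, 1))
H, beta = train_ielm(X, T)
print("training RMSE:", np.sqrt(np.mean((T - H @ beta) ** 2)))
```

Because the output weights are recomputed as the minimum norm least-squares solution after every node is added, the training error is non-increasing in the number of nodes, which is the property the recursive update is designed to preserve cheaply.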
