Abstract

Extreme learning machine (ELM), as a new learning framework, has attracted increasing attention in areas such as large-scale computing, high-speed signal processing, and artificial intelligence. ELM aims to break the barriers between conventional artificial learning techniques and biological learning mechanisms, and represents a suite of machine learning techniques in which the hidden neurons need not be tuned. ELM theories and algorithms argue that "random hidden neurons" capture the essence of some brain learning mechanisms, as well as the intuitive sense that the efficiency of brain learning need not rely on the computing power of individual neurons. Compared with traditional neural networks and support vector machines, ELM therefore offers significant advantages such as fast learning speed, ease of implementation, and minimal human intervention. Owing to its remarkable generalization performance and implementation efficiency, ELM has been applied in a wide range of applications. In this paper, we first provide an overview of newly derived ELM theories and approaches. In addition, with the ongoing development of multilayer feature representation, we discuss new trends in ELM-based hierarchical learning. Finally, we present several interesting ELM applications to showcase practical advances on this subject.
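To make the idea of untuned, randomly generated hidden neurons concrete, the following is a minimal sketch of a single-hidden-layer ELM in Python with NumPy. The function names, sigmoid activation, and toy regression data are illustrative assumptions rather than the formulation used in the paper; the key point is that only the output weights are learned, in closed form via a pseudoinverse.

```python
import numpy as np

def elm_train(X, T, n_hidden=100, seed=0):
    """Train a sketch ELM: hidden weights are random and never tuned;
    output weights are obtained in closed form."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Randomly generated hidden-layer parameters (kept fixed).
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    # Hidden-layer output matrix with a sigmoid activation.
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Output weights via the Moore-Penrose pseudoinverse of H.
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy usage: regress a noisy sine curve (hypothetical example data).
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
T = np.sin(X) + 0.05 * np.random.default_rng(1).standard_normal(X.shape)
W, b, beta = elm_train(X, T, n_hidden=50)
print("train MSE:", np.mean((elm_predict(X, W, b, beta) - T) ** 2))
```

Because training reduces to one matrix factorization rather than iterative tuning of the hidden layer, this sketch illustrates why ELM is associated with fast learning speed and minimal human intervention.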
