Abstract

Though the Extreme Learning Machine (ELM) has become quite popular in recent years, it comes with no performance guarantees, and the resultant networks tend to be densely connected. The complexity of a learning machine may be measured by its Vapnik-Chervonenkis (VC) dimension; a small VC dimension leads to good generalization and lower test set errors. The recently proposed Minimal Complexity Machine (MCM) shows that it is possible to learn a classifier with minimal VC dimension, leading to sparse representations and good generalization. In this paper, we draw on results from the MCM to propose a hybrid variant of the ELM, termed the Minimal Complexity - Extreme Learning Machine (MC-ELM), in order to realize a robust classifier that minimizes an exact bound on the VC dimension. The MC-ELM solves a linear programming problem for the last layer and offers the advantages of a large margin and a low VC dimension. In effect, the learning paradigm elucidated in this paper lets us build a classifier that combines a minimal representation of the training data, owing to the MCM, with the high training speed of the ELM. This makes it feasible for use in complex machine learning applications, where these advantages are of significance.
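The abstract describes the construction only at a high level. The sketch below is one plausible reading, not the paper's reference implementation: it pairs a standard random ELM hidden layer with the soft-margin MCM linear program (minimize h + C*sum(q) subject to h >= y_i(u.phi_i + v) + q_i >= 1) solved for the output weights. The hidden width L, the tanh activation, the penalty C, and all function names are illustrative assumptions.

    # Minimal MC-ELM sketch: random ELM features + MCM linear program
    # for the output layer. Assumed details are noted above and in comments.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)

    def elm_features(X, W, b):
        # Random ELM hidden layer: fixed weights, nonlinear activation.
        return np.tanh(X @ W.T + b)

    def fit_mc_elm(X, y, L=50, C=10.0):
        # Output layer via the MCM LP (y must be in {-1, +1}):
        #   minimize  h + C * sum(q)
        #   s.t.      h >= y_i (u . phi_i + v) + q_i >= 1,  q_i >= 0
        n, d = X.shape
        W = rng.standard_normal((L, d))   # random input-to-hidden weights
        b = rng.standard_normal(L)        # random hidden biases
        Phi = elm_features(X, W, b)       # (n, L) hidden activations

        # Variable order: u (L), v (1), h (1), q (n).
        c = np.concatenate([np.zeros(L + 1), [1.0], C * np.ones(n)])

        Y = y[:, None] * Phi              # rows are y_i * phi_i
        # y_i (u.phi_i + v) + q_i - h <= 0
        A1 = np.hstack([Y, y[:, None], -np.ones((n, 1)), np.eye(n)])
        # -(y_i (u.phi_i + v) + q_i) <= -1
        A2 = np.hstack([-Y, -y[:, None], np.zeros((n, 1)), -np.eye(n)])
        A_ub = np.vstack([A1, A2])
        b_ub = np.concatenate([np.zeros(n), -np.ones(n)])

        bounds = [(None, None)] * (L + 2) + [(0, None)] * n  # only q >= 0
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        u, v = res.x[:L], res.x[L]
        return W, b, u, v

    def predict(X, W, b, u, v):
        return np.sign(elm_features(X, W, b) @ u + v)

Under this reading, test-time classification is identical in cost to a standard ELM (one random feature map plus a linear decision); only training differs, with the usual least-squares output layer replaced by the linear program above.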
