Abstract
The extreme learning machine (ELM) is an emerging learning algorithm for generalized single-hidden-layer feedforward neural networks in which the parameters of the hidden units are randomly generated, so the output weights can be calculated analytically. From the hidden layer to the output layer, ELM essentially learns the output weight matrix via a least squares regression formulation that can be used for classification/regression as well as dimensionality reduction. In this paper, we impose an orthogonality constraint on the output weight matrix and thereby formulate an orthogonal extreme learning machine (OELM) model, which produces orthogonal basis functions and preserves locality better than ELM in the mapping from the ELM feature space to the output layer. Since locality preservation is potentially related to discriminating power, OELM is expected to be more discriminative than ELM. Considering that the number of hidden units is usually greater than the number of classes, we propose an effective method to optimize the OELM objective by solving an orthogonal Procrustes problem. Experiments pairwise comparing OELM with ELM on three widely used image data sets show the effectiveness of learning an orthogonal mapping, especially when only limited training samples are given.
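The two estimators contrasted in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the sigmoid activation, the toy data, and the specific Procrustes formulation (maximizing tr(B^T H^T T) over matrices B with orthonormal columns, whose closed-form solution comes from the SVD of H^T T) are assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n samples, d features, L hidden units, c classes (assumed sizes)
n, d, L, c = 100, 5, 20, 3
X = rng.standard_normal((n, d))
T = np.eye(c)[rng.integers(0, c, n)]        # one-hot target matrix

# ELM hidden layer: random input weights and biases, sigmoid activation
W_in = rng.standard_normal((d, L))
b = rng.standard_normal(L)
H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))   # hidden-layer output matrix

# Standard ELM: output weights by least squares, beta = pinv(H) @ T
beta_elm = np.linalg.pinv(H) @ T

# OELM-style orthogonal output weights: solve a Procrustes-type problem,
# max tr(B^T H^T T) subject to B^T B = I, via the SVD of H^T T.
U, _, Vt = np.linalg.svd(H.T @ T, full_matrices=False)
beta_oelm = U @ Vt                          # columns are orthonormal

# beta_oelm.T @ beta_oelm equals the c-by-c identity (up to rounding)
```

Here `beta_elm` minimizes the unconstrained least squares error, while `beta_oelm` satisfies the orthogonality constraint exactly; both map the L-dimensional ELM feature space to the c-dimensional output layer.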