Abstract
Owing to its simple implementation and impressive performance, the Extreme Learning Machine (ELM) has been widely used in machine learning applications. However, ELM has two potential problems: 1) the lack of an efficient method for minimizing the training error; 2) little consideration of the inherent structural information about correlations among output components. To overcome these problems, this paper proposes two improvements to ELM: the Optimal Distribution Extreme Learning Machine (OD-ELM) and the Space Embedded Extreme Learning Machine (SE-ELM). Based on our recent finding that the distributions of the input weights and hidden-node biases play an important role in the performance of ELM, OD-ELM reduces the training error by using the derivatives of the training error with respect to these distributions. Simulation results on UCI datasets verify that OD-ELM has better generalization performance than traditional ELM. SE-ELM embeds the inherent structural information among outputs into the predictor: it captures not only the interdependencies among input variables, as a typical ELM does, but also those among the responses, so correlations among both inputs and outputs are taken into account. Meanwhile, SE-ELM retains desirable characteristics of ELM, such as simple implementation and a hidden layer that requires no tuning. We examine the three embedding methods on HumanEva, a well-known benchmark for 3D human pose reconstruction. As verified by the simulation results, SE-ELM tends to achieve better generalization performance than classical ELM.
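To make the baseline concrete, the following is a minimal sketch of the standard ELM training pipeline that both proposed variants build on: hidden-layer weights and biases are drawn at random and left untuned, and only the output weights are solved in closed form via the Moore-Penrose pseudoinverse. The sigmoid activation, the uniform sampling distribution, and all function names are illustrative assumptions, not details taken from the paper; OD-ELM's central idea is precisely that the choice of this sampling distribution can be optimized rather than fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=50, rng=rng):
    """Fit a basic ELM: random hidden layer, pseudoinverse readout (sketch)."""
    n_features = X.shape[1]
    # Assumed distribution: input weights and biases drawn from U(-1, 1).
    # OD-ELM would instead adjust this distribution using the derivatives
    # of the training error with respect to its parameters.
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoid hidden-layer outputs
    beta = np.linalg.pinv(H) @ Y            # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy usage: regress a noisy sine curve and report the training MSE.
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
Y = np.sin(X) + 0.05 * rng.standard_normal(X.shape)
W, b, beta = elm_fit(X, Y)
print(np.mean((elm_predict(X, W, b, beta) - Y) ** 2))
```

Note that no iterative tuning of the hidden layer occurs anywhere in this sketch, which is the ELM characteristic the abstract says SE-ELM retains.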