Abstract
Extreme learning machine (ELM), an emergent technique for training feed-forward neural networks, has shown good performance across various learning domains. This paper investigates the impact of random weights in the training of ELM. It focuses on the randomness of the weights between the input and hidden layers, and on the dimension change from the input layer to the hidden layer. The direct motivation is to verify whether the randomly assigned weights exert a positive effect during ELM training. We show experimentally that, for many classification and regression problems, the dimension increase caused by random weights in ELM yields better performance than the dimension increase caused by some kernel mappings. We assume that, via the random transformation, the output samples are more concentrated than the input samples, which makes learning more efficient.
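For readers unfamiliar with the mechanism under study, the sketch below illustrates the standard ELM training scheme the abstract refers to: the input-to-hidden weights are drawn at random and kept fixed, the hidden layer acts as a random (typically dimension-increasing) projection, and only the hidden-to-output weights are solved for in closed form. The function names, the uniform weight range, and the choice of tanh activation are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def elm_train(X, T, n_hidden=200, rng=None):
    """Train a basic ELM on inputs X of shape (n, d) and targets T of shape (n, k).

    The input weights W and biases b are random and never updated;
    the output weights beta are obtained by a least-squares solve.
    """
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(d, n_hidden))  # random input-to-hidden weights (fixed)
    b = rng.uniform(-1.0, 1.0, size=n_hidden)       # random hidden biases (fixed)
    H = np.tanh(X @ W + b)                          # random projection into the hidden space
    beta = np.linalg.pinv(H) @ T                    # closed-form hidden-to-output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Apply the fixed random mapping, then the learned output weights."""
    return np.tanh(X @ W + b) @ beta
```

Choosing n_hidden larger than the input dimension d reproduces the dimension increase discussed in the abstract, with the random weights playing the role that a kernel mapping would otherwise play.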