Abstract

The research community has recently paid increasing attention to the Extreme Learning Machine (ELM) algorithm in the Neural Network (NN) area. ELMs are much faster than traditional gradient-descent-based learning algorithms because the output weights are determined analytically once the input weights and hidden-layer biases have been chosen at random. However, since the input weights and biases are randomly assigned and never adjusted, an ELM model is unstable when the same experiment is repeated many times. Such instability makes ELMs less reliable than other computational intelligence models. In our investigation, we address this problem by using a Random Projection in the first layer of the ELM; this reduces the reliance on random weight assignment by removing the bias in the hidden layer. Experiments on different data sets demonstrate that the proposed model has higher stability and reliability than the classical ELM.
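To make the source of the instability concrete, the following is a minimal sketch of the classical ELM training step described above, assuming a single hidden layer with a tanh activation; the function names and toy data are illustrative, not the authors' implementation. The input weights W and bias b are drawn at random and never trained; only the output weights beta are solved analytically via the Moore-Penrose pseudoinverse, so each reseeding produces a different model.

```python
import numpy as np

def elm_fit(X, Y, n_hidden, seed=0):
    """Classical ELM training: random hidden layer, analytic output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights, never adjusted
    b = rng.standard_normal(n_hidden)                # random hidden-layer bias
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ Y                     # output weights via Moore-Penrose pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression problem: repeating the fit with different seeds
# yields different test errors, which is the run-to-run instability
# the abstract describes.
rng = np.random.default_rng(42)
X_train, X_test = rng.random((100, 5)), rng.random((50, 5))
y_train = np.sin(X_train.sum(axis=1, keepdims=True))
y_test = np.sin(X_test.sum(axis=1, keepdims=True))
for seed in (0, 1, 2):
    W, b, beta = elm_fit(X_train, y_train, n_hidden=40, seed=seed)
    mse = np.mean((elm_predict(X_test, W, b, beta) - y_test) ** 2)
    print(f"seed={seed}  test MSE={mse:.2e}")
```

The variation in test error across seeds is exactly the variability the proposed bias-free first layer aims to reduce.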
