Abstract

In high-dimensional pattern recognition problems, the learning speed of gradient-based training algorithms such as back-propagation is generally very slow. Local minima, improper learning rates, and over-fitting are further issues. The extreme learning machine (ELM) was proposed as a non-iterative learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) to overcome these problems. In ELM, the input weights and hidden biases are chosen randomly, which makes the behavior of the resulting classifier non-deterministic. In this paper, a new learning algorithm is presented in which the input weights and hidden layer biases of the SLFN are assigned from basis vectors generated from the training space. The output weights are then obtained through a simple generalized inverse operation on the hidden layer output matrix. This yields very fast learning and better generalization performance compared with conventional gradient-based learning as well as standard ELM.
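The following is a minimal sketch of the two training schemes described above, assuming a sigmoid activation and one-hot target matrix T. The conventional ELM draws input weights and biases at random and solves for the output weights with a pseudoinverse; the variant replaces the random input weights with basis vectors of the training space. Deriving those basis vectors from an SVD of the training data, the zero hidden biases, and all function and variable names are assumptions for illustration, since the abstract does not give the exact construction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_elm(X, T, n_hidden, seed=None):
    """Conventional ELM: random input weights/biases, analytic output weights."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))  # random input weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                 # random hidden biases
    H = sigmoid(X @ W + b)                                     # hidden layer output matrix
    beta = np.linalg.pinv(H) @ T                               # output weights via generalized inverse
    return W, b, beta

def train_basis_elm(X, T, n_hidden):
    """Sketch of the proposed variant (assumption): input weights taken from
    basis vectors of the training space, here the leading right-singular
    vectors of X; requires n_hidden <= number of features."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    W = Vt[:n_hidden].T            # basis vectors used as input weights
    b = np.zeros(n_hidden)         # placeholder biases; the paper's rule is not given in the abstract
    H = sigmoid(X @ W + b)
    beta = np.linalg.pinv(H) @ T   # same generalized-inverse step as standard ELM
    return W, b, beta

def predict(X, W, b, beta):
    return sigmoid(X @ W + b) @ beta
```

Because the only learned quantity is beta, both routines train in a single linear-algebra step; the variant simply removes the randomness in W and b, so repeated runs on the same data produce the same classifier.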
