Abstract

The extreme learning machine (ELM) requires a large number of hidden-layer nodes during training, so the number of random parameters grows rapidly and degrades network stability. Moreover, relying on a single activation function limits the generalization capability of the network. This paper proposes a derived least square fast learning network (DLSFLN) to address these problems. DLSFLN derives a family of activation functions from a base function through successive differentiation; this enlarges the set of available activation functions and enhances the mapping capability of hidden-layer neurons while keeping the dimension of the random parameters unchanged. DLSFLN randomly generates the input weights and hidden-layer thresholds and then uses the least squares method to determine the connection weights between the input and output layers and those between the hidden and output nodes. Regression and classification experiments show that, compared with other neural network algorithms such as the fast learning network (FLN), DLSFLN achieves faster training, higher training accuracy, and better generalization capability and stability.
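
To make the training procedure concrete, the sketch below illustrates the FLN-style least-squares step described in the abstract: input weights and hidden thresholds are drawn at random, and all output weights (both the direct input-to-output links and the hidden-to-output links) are solved in one least-squares problem. This is a minimal illustration, not the authors' implementation; the function names, the uniform initialization range, and the scheme of cycling hidden neurons through a sigmoid and its first two derivatives are assumptions standing in for the paper's derived-activation construction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def d_sigmoid(z):
    # First derivative of the sigmoid: s * (1 - s).
    s = sigmoid(z)
    return s * (1.0 - s)

def dd_sigmoid(z):
    # Second derivative of the sigmoid: s * (1 - s) * (1 - 2s).
    s = sigmoid(z)
    return s * (1.0 - s) * (1.0 - 2.0 * s)

# Hypothetical assignment: hidden neurons cycle through the base function
# and its derivatives, adding activation diversity without extra random
# parameters (an illustrative choice, not necessarily the paper's exact rule).
ACTIVATIONS = (sigmoid, d_sigmoid, dd_sigmoid)

def hidden_output(X, W_in, b):
    Z = X @ W_in.T + b                                   # (n_samples, n_hidden)
    return np.column_stack(
        [ACTIVATIONS[j % len(ACTIVATIONS)](Z[:, j]) for j in range(Z.shape[1])]
    )

def train_dlsfln(X, Y, n_hidden, seed=0):
    """X: (n_samples, n_in), Y: (n_samples, n_out)."""
    rng = np.random.default_rng(seed)
    # Step 1: random input weights and hidden thresholds (never retrained).
    W_in = rng.uniform(-1.0, 1.0, size=(n_hidden, X.shape[1]))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    # Step 2: hidden-layer outputs under the derived activation functions.
    H = hidden_output(X, W_in, b)
    # Step 3: one least-squares solve for ALL output weights at once,
    # covering hidden->output and the direct input->output connections.
    G = np.hstack([H, X])                                # (n_samples, n_hidden + n_in)
    W_out, *_ = np.linalg.lstsq(G, Y, rcond=None)
    return W_in, b, W_out

def predict(X, W_in, b, W_out):
    return np.hstack([hidden_output(X, W_in, b), X]) @ W_out

# Tiny usage example on synthetic regression data.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 3))
Y = np.sin(X.sum(axis=1, keepdims=True))
W_in, b, W_out = train_dlsfln(X, Y, n_hidden=30)
print("train MSE:", np.mean((predict(X, W_in, b, W_out) - Y) ** 2))
```

Because the hidden parameters are fixed after random initialization, training reduces to a single linear least-squares solve, which is what gives FLN-family models their speed advantage over iteratively trained networks.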
