As a novel learning algorithm for single hidden-layer feedforward neural networks, the extreme learning machine has attracted much research attention for its fast training speed and good generalization performance. Instead of iteratively tuning the parameters, the extreme learning machine randomly generates the input weights and hidden biases, reducing training to a linear optimization problem. However, this random determination of the input weights and hidden biases may yield non-optimal parameters, which can degrade the final results or require more hidden nodes in the network. To overcome these drawbacks caused by non-optimal input weights and hidden biases, we propose a new hybrid learning algorithm, the dolphin swarm algorithm extreme learning machine, which adopts the dolphin swarm algorithm to optimize the input weights and hidden biases efficiently. Each set of input weights and hidden biases is encoded into one vector, namely a dolphin. The dolphins are evaluated by root mean squared error and updated through the four pivotal phases of the dolphin swarm algorithm, eventually yielding an optimal set of input weights and hidden biases. To evaluate the effectiveness of our method, we compare the proposed algorithm with the standard extreme learning machine and three state-of-the-art methods, namely the particle swarm optimization extreme learning machine, the evolutionary extreme learning machine, and the self-adaptive evolutionary extreme learning machine, on 13 benchmark datasets from the University of California Irvine Machine Learning Repository. The experimental results demonstrate that the proposed method achieves better generalization performance than all the compared algorithms.
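To make the optimization setup concrete, the following is a minimal sketch, not the authors' implementation: it assumes a tanh activation and the usual Moore-Penrose pseudoinverse solution for the ELM output weights, encodes each candidate's input weights and hidden biases as one vector (a "dolphin"), and scores candidates by root mean squared error. A plain random population stands in for the dolphin swarm algorithm's four update phases, which are not reproduced here.

```python
import numpy as np

def elm_fit(X, T, W, b):
    """With input weights W and hidden biases b fixed, the ELM output weights
    are obtained analytically via the Moore-Penrose pseudoinverse."""
    H = np.tanh(X @ W + b)          # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T    # output weights (linear problem)
    return beta

def rmse(X, T, W, b, beta):
    """Fitness of one candidate: root mean squared error of the resulting ELM."""
    pred = np.tanh(X @ W + b) @ beta
    return np.sqrt(np.mean((pred - T) ** 2))

def decode(v, n_in, n_hidden):
    """Recover (W, b) from one encoded candidate vector ('dolphin')."""
    W = v[: n_in * n_hidden].reshape(n_in, n_hidden)
    b = v[n_in * n_hidden:].reshape(1, n_hidden)
    return W, b

# Toy usage on synthetic data: evaluate a population of candidates and keep the
# best one. A real DSA-ELM would instead update the population with the dolphin
# swarm algorithm's phases rather than pure random sampling.
rng = np.random.default_rng(0)
n_in, n_hidden, n_samples = 4, 10, 100
X = rng.normal(size=(n_samples, n_in))
T = np.sin(X.sum(axis=1, keepdims=True))

population = [rng.uniform(-1, 1, size=n_in * n_hidden + n_hidden) for _ in range(20)]
best = min(
    population,
    key=lambda v: rmse(X, T, *decode(v, n_in, n_hidden),
                       elm_fit(X, T, *decode(v, n_in, n_hidden))),
)
W_best, b_best = decode(best, n_in, n_hidden)
beta_best = elm_fit(X, T, W_best, b_best)
print("best RMSE:", rmse(X, T, W_best, b_best, beta_best))
```

Because the output weights always follow in closed form from a candidate's input weights and biases, the swarm only has to search the (weights, biases) space, which is what keeps the hybrid scheme cheap relative to fully iterative training.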