Abstract

By mimicking the topology of biological neural networks, small-world neural networks have been shown to improve the generalization performance of artificial neural networks. However, the architecture of a small-world neural network is typically large and predefined. This can lead to overfitting and long training times, and it prevents an optimal network structure from being obtained automatically for a given problem. To address these problems, this paper proposes a pruning feedforward small-world neural network (PFSWNN) and applies it to nonlinear system modeling. First, a feedforward small-world neural network (FSWNN) is constructed according to the Watts–Strogatz rewiring rule. Second, the importance of each hidden neuron is evaluated based on its Katz centrality. If the Katz centrality of a hidden neuron falls below a predefined threshold, the neuron is deemed unimportant and is merged with its most correlated neuron in the same hidden layer. The connection weights are trained with a gradient-based algorithm, and the convergence of the proposed PFSWNN is analyzed theoretically. Finally, the PFSWNN model is tested on several nonlinear system modeling problems, including the approximation of a rapidly changing function, the CATS missing time-series prediction, four benchmark problems from public UCI datasets, and a practical problem from a wastewater treatment process. Experimental results demonstrate that PFSWNN achieves superior generalization performance owing to its small-world property and the pruning algorithm, and that its training time is shortened owing to the compact structure.
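The two mechanisms named in the abstract, Watts–Strogatz rewiring and Katz-centrality-based pruning with correlation-driven merging, can be illustrated with a minimal sketch. The code below is not the authors' implementation: the function names (watts_strogatz_rewire, katz_centrality, prune_and_merge), the parameter values (p, alpha, beta, threshold), and the choice to fold a pruned neuron's outgoing weights into its most correlated peer are illustrative assumptions, and the paper's exact rewiring and merging rules may differ. The sketch assumes the network's connectivity is stored as a binary adjacency matrix over all neurons, ordered layer by layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def watts_strogatz_rewire(adj, p=0.1):
    """Rewire each existing feedforward edge with probability p,
    redirecting it to a randomly chosen forward neuron; these long-range
    shortcuts are what create the small-world topology."""
    n = adj.shape[0]
    rows, cols = np.nonzero(adj)
    for i, j in zip(rows, cols):
        if rng.random() < p:
            candidates = [k for k in range(i + 1, n) if k != j and adj[i, k] == 0]
            if candidates:
                adj[i, j] = 0
                adj[i, rng.choice(candidates)] = 1
    return adj

def katz_centrality(adj, alpha=0.05, beta=1.0):
    """Closed form of Katz centrality, x = beta * (I - alpha * A^T)^(-1) * 1;
    alpha must stay below 1 / spectral_radius(adj) for the series to converge."""
    n = adj.shape[0]
    return beta * np.linalg.solve(np.eye(n) - alpha * adj.T, np.ones(n))

def prune_and_merge(katz, activations, w_out, threshold=0.5):
    """Merge each hidden neuron whose Katz centrality falls below the
    threshold into its most correlated surviving peer, folding the pruned
    neuron's outgoing weights into the peer's to preserve the mapping.

    katz: centrality of each hidden neuron in one layer.
    activations: (samples, hidden) outputs of that layer on training data.
    w_out: (hidden, outputs) outgoing weight matrix.
    Assumes at least one neuron in the layer survives the threshold.
    """
    corr = np.abs(np.corrcoef(activations, rowvar=False))
    np.fill_diagonal(corr, 0.0)
    keep = katz >= threshold
    for i in np.where(~keep)[0]:
        peers = np.where(keep)[0]
        j = peers[np.argmax(corr[i, peers])]
        w_out[j] += w_out[i]  # absorb the pruned neuron's contribution
    return np.where(keep)[0], w_out[keep]
```

After each merge pass, the surviving weights would be retrained with the gradient-based algorithm, so pruning and training alternate until the structure stabilizes; the threshold trades off compactness against approximation accuracy.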
