Abstract

A group L1/2 regularization term is defined and introduced into the conventional error function for pruning the hidden layer nodes of feedforward neural networks. This group L1/2 regularization method (GL1/2) can prune not only the redundant hidden nodes but also the redundant weights of the surviving hidden nodes. By comparison, the popular group lasso regularization (GL2) can prune the redundant hidden nodes but cannot prune any redundant weights of the surviving hidden nodes. A disadvantage of GL1/2 is that it involves a non-smooth absolute value function, which causes oscillation in the numerical computation and difficulty in the convergence analysis. As a remedy, the absolute value function is approximated by a smooth function, resulting in a smooth group L1/2 regularization method (SGL1/2). Numerical simulations on a few benchmark data sets show that, compared with GL2, SGL1/2 achieves better accuracy and removes more redundant nodes as well as more redundant weights of the surviving hidden nodes. A convergence theorem is also proved for SGL1/2.
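As a rough illustration of the penalties being compared, the sketch below contrasts a group lasso (GL2) term with a smoothed group L1/2 (SGL1/2) term for a single hidden layer, where each group collects the input weights of one hidden node. The penalty form lam * sum_g sqrt( sum_i smooth|w_ig| ), the piecewise-quadratic smoothing of the absolute value, and the names smooth_abs, sgl12_penalty, group_lasso_penalty, lam, and a are assumptions made for this sketch; the paper's exact penalty and smoothing function may differ.

```python
import numpy as np

def smooth_abs(x, a=0.01):
    # Smooth surrogate for |x|: exact |x| outside [-a, a], a quadratic
    # piece near zero so the function is continuously differentiable.
    # (Illustrative choice; the paper's smoothing polynomial may differ.)
    ax = np.abs(x)
    return np.where(ax >= a, ax, x**2 / (2 * a) + a / 2)

def sgl12_penalty(W, lam=1e-4, a=0.01):
    # W: (n_inputs, n_hidden); each column is the weight group of one hidden node.
    # Smoothed group L1/2 term: lam * sum over groups of sqrt(smoothed L1 norm).
    group_l1 = smooth_abs(W, a).sum(axis=0)
    return lam * np.sqrt(group_l1).sum()

def group_lasso_penalty(W, lam=1e-4):
    # GL2 baseline: lam * sum over groups of the Euclidean norm of each column.
    return lam * np.sqrt((W**2).sum(axis=0)).sum()

# Example: 4 inputs, 3 hidden nodes; the second node's group is entirely zero,
# the third is nearly zero, mimicking a redundant node and redundant weights.
W = np.array([[ 0.5, 0.0,  0.02],
              [-0.3, 0.0,  0.01],
              [ 0.8, 0.0, -0.03],
              [ 0.1, 0.0,  0.02]])
print(sgl12_penalty(W), group_lasso_penalty(W))
```

Either penalty would be added to the network's conventional error function during training; the smoothed absolute value keeps the SGL1/2 term differentiable, which is what the convergence analysis relies on.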
