Abstract

In this paper, two modified constrained learning algorithms are proposed to obtain better generalization performance and a faster convergence rate. The additional cost terms of the first algorithm are selected based on the first-order derivatives of the activation functions of the hidden neurons and the second-order derivatives of the activation functions of the output neurons, while those of the second algorithm are selected based on the first-order derivatives of the activation functions of the output neurons and the second-order derivatives of the activation functions of the hidden neurons. In the course of training, the additional cost terms of the proposed algorithms penalize the input-to-output mapping sensitivity and the high-frequency components simultaneously, so that better generalization performance can be obtained. Finally, theoretical justifications and simulation results are given to verify the efficiency and effectiveness of the proposed learning algorithms.
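To make the structure of such a penalized cost concrete, the following is a minimal NumPy sketch of the first variant, assuming a single-hidden-layer sigmoid network, squared-derivative penalty terms, and hypothetical penalty weights `lam1`/`lam2`; the function and parameter names are illustrative inventions, and the paper's exact form and weighting of the additional cost terms are not reproduced here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_d1(z):
    # First derivative of the sigmoid: f'(z) = f(z)(1 - f(z))
    s = sigmoid(z)
    return s * (1.0 - s)

def sigmoid_d2(z):
    # Second derivative of the sigmoid: f''(z) = f(z)(1 - f(z))(1 - 2 f(z))
    s = sigmoid(z)
    return s * (1.0 - s) * (1.0 - 2.0 * s)

def penalized_cost(X, T, W1, b1, W2, b2, lam1=1e-3, lam2=1e-3):
    """Mean-squared error plus two additional (hypothetical) cost terms,
    following the first variant: f' at the hidden layer, f'' at the output
    layer. The squared-penalty form here is an assumption for illustration."""
    Zh = X @ W1 + b1          # hidden-layer pre-activations
    H = sigmoid(Zh)
    Zo = H @ W2 + b2          # output-layer pre-activations
    Y = sigmoid(Zo)
    mse = 0.5 * np.mean(np.sum((Y - T) ** 2, axis=1))
    # Term penalizing input-to-output mapping sensitivity via f' of hidden neurons
    p1 = lam1 * np.mean(np.sum(sigmoid_d1(Zh) ** 2, axis=1))
    # Term penalizing high-frequency components via f'' of output neurons
    p2 = lam2 * np.mean(np.sum(sigmoid_d2(Zo) ** 2, axis=1))
    return mse + p1 + p2
```

Under this reading, gradient-descent training simply differentiates the combined cost, so the extra terms act as data-dependent regularizers; the second variant would swap the roles of the layers, applying f' at the output neurons and f'' at the hidden neurons.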
