Abstract

In this paper, we propose adaptive learning algorithms that achieve better generalization performance. Specifically, we design additional cost terms based on the first- and second-order derivatives of the neural activations at the hidden layers. During training, these additional cost functions penalize the sensitivity of the input-to-output mapping and the high-frequency components in the training data. Applying a gradient-descent method results in hybrid learning rules that combine error back-propagation, Hebbian rules, and simple weight decay. However, the additional computational requirements over the standard error back-propagation algorithm are almost negligible. Theoretical justifications and simulation results are given to verify the effectiveness of the proposed learning algorithms.
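As a minimal sketch of the kind of augmented cost described above (the notation, the exact form of the penalty terms, and the weighting factors lambda_1 and lambda_2 are illustrative assumptions, not the paper's actual formulation):

\[
E_{\text{total}} \;=\; E_{\text{BP}}
\;+\; \lambda_1 \sum_{p}\sum_{h} \bigl[\, g'(\mathrm{net}_{p,h}) \,\bigr]^{2}
\;+\; \lambda_2 \sum_{p}\sum_{h} \bigl[\, g''(\mathrm{net}_{p,h}) \,\bigr]^{2}
\]

Here \(E_{\text{BP}}\) denotes the standard back-propagation error, \(g\) the hidden-unit activation function, \(\mathrm{net}_{p,h}\) the net input to hidden unit \(h\) for training pattern \(p\), and \(\lambda_1, \lambda_2\) the strengths of the first- and second-derivative penalties. Under this assumed form, differentiating the penalty terms with respect to the weights is what would give rise to the Hebbian-like and weight-decay-like contributions in the hybrid update rules mentioned above.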
