Abstract

This paper proposes two improved constrained learning algorithms designed to guarantee better generalization performance. Both are essentially on-line learning algorithms. The additional cost term of the first algorithm is selected based on the first-order derivatives of the neuron activations at the hidden layers, while that of the second is selected based on the second-order derivatives of the neuron activations at the hidden and output layers. During training, these additional cost terms penalize the sensitivity of the input-to-output mapping and the high-frequency components contained in the training data. Finally, theoretical justifications and simulation results are given to verify the efficiency and effectiveness of the two proposed learning algorithms.
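To make the idea concrete, the sketch below shows the flavor of the first algorithm's penalty: a standard squared-error loss augmented with a term built from the first derivative of the hidden-layer activations, which discourages a highly sensitive input-to-output mapping. The network shape, the weighting `lam`, and all function names here are illustrative assumptions, not the paper's exact formulation; the paper's algorithms are also on-line (per-pattern), whereas this sketch updates on a small batch for brevity.

```python
import jax
import jax.numpy as jnp

def forward(params, x):
    """One-hidden-layer tanh network; returns output and hidden pre-activations."""
    W1, b1, W2, b2 = params
    net_h = x @ W1 + b1            # hidden pre-activations
    h = jnp.tanh(net_h)            # hidden activations
    y_hat = h @ W2 + b2            # linear output layer
    return y_hat, net_h

def loss(params, x, y, lam=1e-3):
    """Squared error plus a first-derivative cost term (illustrative weighting lam)."""
    y_hat, net_h = forward(params, x)
    mse = jnp.mean((y_hat - y) ** 2)
    # First derivative of tanh at the hidden pre-activations; penalizing its
    # magnitude discourages steep, high-sensitivity regions of the mapping.
    act_deriv = 1.0 - jnp.tanh(net_h) ** 2
    penalty = jnp.mean(act_deriv ** 2)
    return mse + lam * penalty

@jax.jit
def sgd_step(params, x, y, lr=0.05):
    grads = jax.grad(loss)(params, x, y)
    return [p - lr * g for p, g in zip(params, grads)]

# Toy usage: fit y = sin(x_1) on random inputs.
k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)
n_in, n_hid = 2, 8
params = [
    0.5 * jax.random.normal(k1, (n_in, n_hid)),  # W1
    jnp.zeros(n_hid),                            # b1
    0.5 * jax.random.normal(k2, (n_hid, 1)),     # W2
    jnp.zeros(1),                                # b2
]
x = jax.random.normal(k3, (64, n_in))
y = jnp.sin(x[:, :1])
for _ in range(200):
    params = sgd_step(params, x, y)
```

The second algorithm would analogously replace the first-derivative factor with second-order derivatives of the activations at both the hidden and output layers, which acts as a penalty on high-frequency components in the learned mapping.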
