Abstract

In this paper, a new learning algorithm that encodes a priori information into feedforward neural networks is proposed for the function approximation problem. The algorithm derives two kinds of constraints from the a priori information of the problem: architectural constraints and connection weight constraints. On one hand, the activation functions of the hidden neurons are specific polynomial functions. On the other hand, the connection weight constraints are obtained from the first-order derivative of the approximated function. Theoretical analysis and experimental results show that the new algorithm achieves better generalization performance and a faster convergence rate than other algorithms.
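The architectural constraint described above — hidden neurons with polynomial activation functions — can be illustrated with a minimal sketch. The example below is an assumption-laden toy, not the paper's algorithm: it assumes each hidden neuron j applies the monomial activation z ↦ z^j and fits the output weights by least squares to approximate sin(x); the paper's specific polynomial family and its derivative-based weight constraints are not reproduced here.

```python
import numpy as np

# Hypothetical illustration of the architectural constraint: a one-hidden-layer
# feedforward network whose hidden neurons use polynomial activations z -> z^j.
# This is NOT the paper's algorithm; the exact polynomial activations and the
# first-order-derivative weight constraints are not specified in the abstract.

def hidden_features(x, degree=6):
    # Hidden-layer outputs: neuron j applies the polynomial activation z -> z^j.
    return np.vstack([x**j for j in range(degree + 1)]).T

x = np.linspace(-1.0, 1.0, 200)
y = np.sin(x)  # target function to approximate

H = hidden_features(x)                      # hidden-layer activations
w, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights via least squares

approx = H @ w
max_err = np.max(np.abs(approx - y))
print(f"max approximation error: {max_err:.2e}")
```

Because the hidden layer spans a polynomial basis, the output weights can be solved in closed form here; the paper's contribution is instead a learning algorithm that additionally constrains those weights using derivative information.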
