Abstract

The commonly used training algorithms for multi-layered neural networks are based on the method of steepest descent (e.g. the backpropagation algorithm). These algorithms usually exhibit poor convergence behaviour, i.e. many thousands of iterations are needed to reach acceptable accuracy. Furthermore, these algorithms do not exploit the special structure of a multi-layered neural network, which consists of a linear part (a scalar product) and a non-linear part (usually a sigmoid activation function). The class of algorithms presented here does not backpropagate the learning errors but rather an approximation of the desired net output of the examples to be learnt. This makes it possible to formulate, on each layer of the network, an approximate desired output of that layer. From there it is possible to apply any learning algorithm for neural networks to a chosen subnet of consecutive network layers. We partitioned neural networks into subnets of two and/or three layers and trained those subnets separately. In many cases the learning progress was much faster than with classical backpropagation.

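The abstract does not give the authors' exact procedure, but the core idea can be illustrated with a minimal sketch: instead of backpropagating errors, the desired output is mapped backwards through the output layer (inverting the sigmoid and approximately inverting the linear part via a pseudo-inverse) to obtain a desired output for the hidden layer, and each one-layer subnet is then trained separately against its local target. All names below (W1, W2, hidden_target, the delta-rule update) are illustrative assumptions, not the authors' notation or their precise algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_inv(y, eps=1e-6):
    # Invert the non-linear part of a layer (logit of the desired output).
    y = np.clip(y, eps, 1.0 - eps)
    return np.log(y / (1.0 - y))

# Toy data: 4 inputs, 3 hidden units, 2 outputs, 20 examples (hypothetical sizes).
X = rng.normal(size=(4, 20))
T = rng.uniform(0.1, 0.9, size=(2, 20))      # desired net outputs

W1 = rng.normal(scale=0.5, size=(3, 4))
W2 = rng.normal(scale=0.5, size=(2, 3))

lr = 0.1
for epoch in range(200):
    # Forward pass through both layers.
    H = sigmoid(W1 @ X)                       # hidden activations
    Y = sigmoid(W2 @ H)                       # network output

    # "Backpropagate the desired output": map the target T through an
    # approximate inverse of the output layer (inverse sigmoid followed by
    # the pseudo-inverse of the linear part) to get a desired output for
    # the hidden layer.
    hidden_target = np.linalg.pinv(W2) @ sigmoid_inv(T)
    hidden_target = np.clip(hidden_target, 0.05, 0.95)

    # Train each one-layer subnet separately against its local target,
    # here with a simple delta rule (any layer-wise learning rule would do).
    delta_out = (T - Y) * Y * (1.0 - Y)
    W2 += lr * delta_out @ H.T / X.shape[1]

    delta_hid = (hidden_target - H) * H * (1.0 - H)
    W1 += lr * delta_hid @ X.T / X.shape[1]

print("final output error:", np.mean((T - sigmoid(W2 @ sigmoid(W1 @ X))) ** 2))
```

In this sketch the two one-layer subnets never see each other's errors; each is fitted only to its locally formulated target, which is the separation-of-subnets idea the abstract describes.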