Abstract

A sequential orthogonal training method is developed in this paper to train locally recurrent neural networks that contain a single hidden layer of dynamic neurons. During network training, the first hidden neuron is used to model the relationship between the inputs and the outputs, whereas subsequent hidden neurons are added sequentially to model the relationship between the inputs and the current model residuals. When a hidden neuron is added, its contribution is the component of its output vector that is orthogonal to the space spanned by the output vectors of the previously added hidden neurons. The Gram-Schmidt orthogonalisation technique is used at each training step to form an orthogonal basis for the space spanned by the hidden neuron outputs. The optimum hidden-layer weights can be obtained through a gradient-based optimisation method, while the output-layer weights can be found by least squares regression. Hidden neurons are added one at a time, and the training procedure terminates when the model error falls below a pre-set level. Using this training method, neurons with mixed types of activation functions and dynamic orders can be incorporated into a single network. Such mixed-node networks can offer improved representation capability and reduced network size. The excellent performance of the proposed technique is demonstrated by application examples.
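The growing procedure outlined above can be sketched in code. The following is a simplified illustration rather than the paper's algorithm: it uses static tanh neurons with randomly drawn candidate weights in place of the locally recurrent (dynamic) neurons and the gradient-based hidden-weight optimisation, and all function and variable names are hypothetical.

```python
# Simplified sketch of sequential orthogonal training (illustrative only).
# Assumptions: static tanh hidden neurons, random candidate weights instead of
# gradient-optimised hidden-layer weights, and a mean-squared-error stopping rule.
import numpy as np

def fit_sequential_orthogonal(X, y, max_neurons=20, tol=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    residual = y.astype(float).copy()
    basis = []        # orthogonalised hidden-output vectors
    coeffs = []       # least-squares coefficients on each orthogonal vector
    hidden = []       # hidden-layer parameters, kept for later prediction

    for _ in range(max_neurons):
        # Candidate hidden neuron (random weights here; the paper optimises
        # the hidden-layer weights with a gradient-based method instead).
        w = rng.standard_normal(X.shape[1])
        b = rng.standard_normal()
        v = np.tanh(X @ w + b)            # hidden neuron output vector

        # Gram-Schmidt step: keep only the part of v that is orthogonal to
        # the space spanned by the outputs of the previously added neurons.
        q = v.copy()
        for q_prev in basis:
            q -= (q_prev @ v) / (q_prev @ q_prev) * q_prev
        if np.linalg.norm(q) < 1e-10:
            continue                      # adds nothing new, discard candidate

        # Output-layer coefficient by least squares on the current residual.
        g = (q @ residual) / (q @ q)
        residual = residual - g * q

        basis.append(q)
        coeffs.append(g)
        hidden.append((w, b))

        mse = float(residual @ residual) / N
        if mse < tol:                     # stop once the model error is low enough
            break
    return hidden, basis, coeffs, residual
```

The coefficients obtained this way multiply the orthogonalised output vectors; as in standard orthogonal least squares schemes, the output-layer weights of the original (non-orthogonalised) network can be recovered from these coefficients and the Gram-Schmidt projection coefficients by back-substitution.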

