Abstract

In locally recurrent neural networks, the output of a dynamic neuron is fed back only to itself. This particular structure makes it possible to train the network sequentially. A sequential orthogonal training method is developed in this chapter for training locally recurrent neural networks. The networks considered here contain a single hidden layer, and the dynamic neurons are located in that layer. During network training, the first hidden neuron is used to model the relationship between the inputs and the outputs, whereas subsequent hidden neurons are added sequentially to model the relationship between the inputs and the model residuals. When a hidden neuron is added, its contribution is due to the part of its output vector that is orthogonal to the space spanned by the output vectors of the previous hidden neurons. The Gram-Schmidt orthogonalisation technique is used at each training step to form a set of orthogonal bases for the space spanned by the hidden neuron outputs. The optimum hidden-layer weights can be obtained through a gradient-based optimisation method, while the output-layer weights can be found using least squares regression. Hidden neurons are added sequentially, and the training procedure terminates when the model error falls below a predefined level. Using this training method, the necessary number of hidden neurons can be determined, thereby avoiding the problem of overfitting. Neurons with mixed types of activation functions and dynamic orders can be incorporated into a single network. Such mixed-node networks can offer improved representation capability and greater parsimony in network size. The excellent performance of the proposed technique is demonstrated through application examples.
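To make the procedure concrete, the following is a minimal NumPy sketch of the sequential orthogonal training loop described above. It is an illustrative reconstruction, not the chapter's implementation: the neuron model is assumed to be a first-order self-feedback (locally recurrent) unit with a tanh activation, the hidden-layer weights are fitted here with the gradient-free Nelder-Mead optimiser for brevity (the chapter uses gradient-based optimisation), and names such as `fit_sequential` and `neuron_output` are hypothetical.

```python
# Illustrative sketch only -- neuron model, optimiser, and all names are
# assumptions, not the chapter's actual implementation.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def neuron_output(params, X):
    """First-order locally recurrent neuron: o_t = tanh(w.x_t + b + r*o_{t-1})."""
    *w, b, r = params
    w = np.asarray(w)
    o, out = 0.0, []
    for x_t in X:
        o = np.tanh(x_t @ w + b + r * o)  # self-feedback only (local recurrence)
        out.append(o)
    return np.array(out)

def orthogonalise(v, bases):
    """Gram-Schmidt: remove from v its components along earlier basis vectors."""
    for q in bases:
        v = v - (q @ v) / (q @ q + 1e-12) * q
    return v

def fit_sequential(X, y, max_neurons=10, tol=1e-3):
    residual = y.astype(float)
    bases, neurons = [], []
    for _ in range(max_neurons):
        # Fit hidden weights so that the part of the neuron's output orthogonal
        # to the previous hidden outputs explains as much of the residual as possible.
        def cost(params):
            v = orthogonalise(neuron_output(params, X), bases)
            g = (v @ residual) / (v @ v + 1e-12)
            return np.sum((residual - g * v) ** 2)
        p0 = rng.normal(scale=0.5, size=X.shape[1] + 2)
        p = minimize(cost, p0, method="Nelder-Mead").x  # gradient-free, for brevity
        v = orthogonalise(neuron_output(p, X), bases)
        residual = residual - (v @ residual) / (v @ v + 1e-12) * v
        bases.append(v)
        neurons.append(p)
        if np.mean(residual ** 2) < tol:  # stop once the model error is low enough
            break
    # Output-layer weights by least squares on the raw hidden outputs.
    H = np.column_stack([neuron_output(p, X) for p in neurons])
    w_out, *_ = np.linalg.lstsq(H, y, rcond=None)
    return neurons, w_out

# Toy usage on a simple mapping.
X = rng.normal(size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * np.tanh(X[:, 1])
neurons, w_out = fit_sequential(X, y)
print(len(neurons), "hidden neurons selected")
```

Note that in this sketch the Gram-Schmidt step determines only each neuron's marginal contribution during training; the final output-layer weights are refitted by least squares on the raw, non-orthogonalised hidden outputs, consistent with the abstract's description.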
