Abstract

This paper presents a sequential orthogonal approach to building and training single hidden layer neural networks. In the proposed method, hidden neurons are added one at a time: the procedure starts with a single hidden neuron and sequentially increases the number of hidden neurons until the model error is sufficiently small. When a neuron is added, the new information it introduces comes from the part of its output vector that is orthogonal to the space spanned by the output vectors of the previously added hidden neurons. The classical Gram–Schmidt orthogonalization method is used at each step to form an orthogonal basis for the space spanned by the output vectors of the hidden neurons. Hidden layer weights are found through optimization, while output layer weights are obtained by least-squares regression. The proposed technique makes it possible to determine the number of hidden neurons required. A regularization factor is also incorporated into the sequential orthogonal training algorithm to improve the network's generalization capability. An additional advantage of the method is that it can build and train neural networks with mixed types of hidden neurons and thus develop hybrid models. Using mixed neuron types, it is found that more accurate neural network models can be developed with fewer hidden neurons than conventional networks require. The proposed sequential orthogonal training method was successfully applied to three non-linear modelling examples.
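
As an illustration of the idea only (not the authors' exact algorithm), the NumPy sketch below grows a sigmoid-only network one neuron at a time: each candidate neuron's output vector is orthogonalized against the accepted ones with classical Gram–Schmidt, and the candidate whose orthogonal component most reduces the residual is kept, with output weights fitted by least squares at the end. A random candidate search stands in for the paper's hidden-weight optimization, and the regularization factor and mixed neuron types are omitted; all function and parameter names here are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_sequential_orthogonal(X, y, max_neurons=20, tol=1e-3,
                                n_candidates=50, seed=0):
    """Grow a single-hidden-layer net one neuron at a time (sketch).

    Each candidate's output vector is orthogonalized (classical
    Gram-Schmidt) against the outputs of previously accepted neurons;
    only the orthogonal component carries new information.
    """
    rng = np.random.default_rng(seed)
    N, d = X.shape
    residual = y.astype(float).copy()
    Q = []        # orthonormal basis for the span of accepted hidden outputs
    hidden = []   # accepted hidden weight vectors (last entry is the bias)

    for _ in range(max_neurons):
        best = None
        for _ in range(n_candidates):
            w = rng.normal(size=d + 1)       # random stand-in for hidden-weight optimization
            v = sigmoid(X @ w[:d] + w[d])    # candidate neuron's output vector
            u = v.copy()
            for q in Q:                      # classical Gram-Schmidt step
                u -= (q @ u) * q
            norm = np.linalg.norm(u)
            if norm < 1e-8:
                continue                     # lies in the current span: no new information
            q_new = u / norm
            gain = (q_new @ residual) ** 2   # error reduction from the orthogonal component
            if best is None or gain > best[0]:
                best = (gain, w, q_new)
        if best is None:
            break
        _, w, q_new = best
        hidden.append(w)
        Q.append(q_new)
        residual -= (q_new @ residual) * q_new
        if np.linalg.norm(residual) / np.sqrt(N) < tol:
            break   # model error sufficiently small: stop adding neurons

    if not hidden:
        raise RuntimeError("no informative hidden neuron found")

    # Output-layer weights: least-squares fit of y on the accepted hidden outputs.
    W = np.array(hidden)
    H = sigmoid(X @ W[:, :d].T + W[:, d])
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, beta

if __name__ == "__main__":
    # Toy usage: approximate y = sin(x) on [-3, 3].
    X = np.linspace(-3, 3, 200).reshape(-1, 1)
    y = np.sin(X).ravel()
    W, beta = train_sequential_orthogonal(X, y)
    print(f"selected {len(beta)} hidden neurons")
```

Because each accepted direction is orthonormal, the error reduction from a candidate is simply the squared projection of the current residual onto its orthogonal component, which is what makes the greedy neuron-by-neuron selection cheap to evaluate.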
