Abstract

Two problems arise in the design of feedforward neural networks: the choice of an optimal architecture and the initialization of the parameters. Typically, the input and output data of a system (or a function) are measured and recorded, and experimenters wish to design a neural network that maps the inputs exactly onto these output values. By formulating this as a continuous approximation problem, this paper shows that the use of orthogonal functions is a partial optimization of the choice of hidden functions. Parameter initialization is obtained by using the knowledge of the input and output data in the calculation of a discrete approximation. The hidden weights are found by constructing orthogonal directions on which the input values are represented. The output weights are determined with the pseudo-inverse so that the Euclidean distances between the network responses and the output values are minimized.
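The abstract describes a two-step initialization: hidden weights built from orthogonal directions spanning the input data, and output weights obtained from the Moore-Penrose pseudo-inverse as the least-squares solution. A minimal NumPy sketch of that idea follows; the QR decomposition as the orthogonalization method and the tanh hidden activation are assumptions for illustration, since the abstract does not specify either.

```python
import numpy as np

def init_two_layer_network(X, Y, sigma=np.tanh):
    """Hypothetical sketch of the initialization outlined in the abstract.

    X: (n_samples, n_inputs) recorded input values
    Y: (n_samples, n_outputs) recorded output values
    """
    # Orthogonal directions spanning the input space (QR is one possible
    # construction; the paper's exact procedure is not given in the abstract).
    Q, _ = np.linalg.qr(X.T)        # columns of Q are orthonormal directions
    W_hidden = Q.T                  # one hidden unit per direction (assumption)

    # Hidden-layer responses to the recorded inputs.
    H = sigma(X @ W_hidden.T)

    # Output weights minimizing ||H @ W_out - Y|| in the Euclidean sense,
    # via the Moore-Penrose pseudo-inverse.
    W_out = np.linalg.pinv(H) @ Y
    return W_hidden, W_out

# Example usage on synthetic data:
X = np.random.randn(100, 3)                      # 100 samples, 3 inputs
Y = np.sin(X).sum(axis=1, keepdims=True)         # 1 output per sample
W_h, W_o = init_two_layer_network(X, Y)
Y_hat = np.tanh(X @ W_h.T) @ W_o                 # initialized network response
```

Because the pseudo-inverse step is a closed-form least-squares fit, the network's initial outputs already approximate the recorded data before any gradient-based training begins.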
