Abstract

Interpretation: the estimated weights do not have physical significance. Interpolation versus extrapolation: how do we know when a given estimated model is sufficiently well supported, i.e., when the network has converged and utilizes sufficiently dense and accurate measurements neighboring the desired evaluation point? Issues affecting practical convergence: a priori learning versus on-line adaptation? When the ANN architecture is fixed a priori, the family of solvable problems is implicitly constrained; this means the architecture of the network should itself be learned to ensure efficient and accurate modelling of the particular system behavior. In this paper, we present an algorithm for learning an ideal two-layer neural network with radial basis functions as activation functions, known as a Radial Basis Function Network (RBFN), to approximate the input-output response of a synthetic jet actuator (SJA) based wing planform. The structure of the paper is as follows: a brief introduction to several existing learning algorithms is provided, followed by the details of the suggested learning algorithm. Finally, the performance of the learning algorithm is demonstrated through different simulation and experimental results.

Intelligent Radial Basis Function Networks

In the past two decades, neural networks (NN) have emerged as a powerful tool in the areas of pattern classification, time series analysis, signal processing, dynamical system modelling, and control. The emergence of NN can be attributed to the fact that they are able to learn behavior where traditional modelling is very difficult to generalize. While the successes have been many, there are also drawbacks to various fixed-architecture implementations, paving the way for improved networks that monitor the health of input-output models and learning algorithms. Typically, a neural network consists of many computational nodes called perceptrons arranged in layers. The number of hidden nodes (perceptrons) determines the degrees of freedom of the non-parametric model. A small number of hidden units may not be enough to capture the complex input-output mapping, while a large number of hidden units may overfit the data and fail to generalize. Further, the optimal number of hidden units depends upon many factors, such as the number of data points, the signal-to-noise ratio, and the complexity of the learning algorithm. Beyond this, it is natural to ask how many hidden layers are required to model the input-output mapping. The answer to this question is provided by Kolmogorov's theorem [1].

Kolmogorov's Theorem. Let $f(\mathbf{x})$ be a continuous function defined on the unit hypercube $I^n$ ($I = [0, 1]$ and $n \ge 2$); then there exist simple functions $\phi_j$ and $\psi_{ij}$ such that $f(\mathbf{x})$ can be represented in the following form:

$$ f(\mathbf{x}) = \sum_{j=1}^{2n+1} \phi_j \left( \sum_{i=1}^{n} \psi_{ij}(x_i) \right) $$

The relationship of Kolmogorov's theorem to practical neural networks is not straightforward, as the functions $\phi_j$ and $\psi_{ij}$ can be very complex and not smooth, as favored by NN. But Kolmogorov's theorem (later modified by other researchers [2]) can be used to prove that any continuous function from input to output can …
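To make the RBFN structure discussed above concrete, the following is a minimal sketch (not the paper's learning algorithm) of a two-layer network with Gaussian radial basis activations, in which the output-layer weights are fit by linear least squares while the centers and width are held fixed. The function names, the Gaussian basis choice, and the fixed-center training scheme are illustrative assumptions, not details taken from the paper.

import numpy as np

def rbf_design_matrix(X, centers, width):
    # Gaussian radial basis activations: Phi[k, j] = exp(-||x_k - c_j||^2 / (2 * width^2))
    sq_dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq_dist / (2.0 * width ** 2))

def fit_rbfn(X, y, centers, width):
    # Fit output-layer weights by linear least squares; centers and width are held fixed.
    Phi = rbf_design_matrix(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict_rbfn(X, centers, width, w):
    return rbf_design_matrix(X, centers, width) @ w

# Toy usage: learn a synthetic two-input, one-output response from sampled data.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))        # 200 samples of a 2-D input
y = np.sin(3.0 * X[:, 0]) + 0.5 * X[:, 1] ** 2    # synthetic response to approximate
centers = rng.uniform(-1.0, 1.0, size=(15, 2))    # 15 hidden units with fixed centers
w = fit_rbfn(X, y, centers, width=0.4)
rms = np.sqrt(np.mean((predict_rbfn(X, centers, width=0.4, w=w) - y) ** 2))
print("training RMS error:", rms)

The number of centers here plays the role of the hidden-unit count discussed above: too few units underfit the input-output mapping, while too many fit noise in the data.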
