Abstract

In this paper, two popular types of neural network models (radial basis function (RBF) and multi-layer feed-forward (MLF) networks), trained by the generalized delta rule, are tested for robustness to random errors in input space. A method is proposed to estimate the sensitivity of network outputs to the amplitude of random errors in the input space, sampled from known normal distributions. An additional parameter can be extracted to give a general indication of the bias in the network predictions. The modelling performances of MLF and RBF neural networks have been tested on a variety of simulated function approximation problems. Since the results of the proposed validation method depend strongly on the configuration of the networks and on the data used, little can be said about robustness as an intrinsic quality of the neural network model. However, given a data set in which the ‘pure’ errors in input and output space are specified, the method can be applied to select a neural network model that optimally approximates the nonlinear relations between objects in input and output space. The proposed method has been applied to a nonlinear modelling problem from industrial chemical practice. Since MLF and RBF networks are based on different concepts derived from biological neural processes, a brief theoretical introduction is also given.
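The abstract does not give the estimator itself, but the core idea it describes can be sketched as follows: perturb the inputs with zero-mean Gaussian noise of a known amplitude, propagate the perturbed inputs through the trained network, and summarize the deviation of the outputs from the noise-free predictions. The function name `output_sensitivity`, the RMS/mean summary statistics, and the toy tanh "network" below are illustrative assumptions, not the authors' actual procedure.

```python
import numpy as np

def output_sensitivity(model, X, sigmas, n_samples=200, seed=None):
    """For each noise amplitude sigma, add Gaussian noise to the inputs,
    evaluate the model, and summarize the output deviations.

    Returns two arrays (one entry per sigma):
      rms  - root-mean-square deviation of outputs (sensitivity),
      bias - mean deviation of outputs (systematic shift).
    """
    rng = np.random.default_rng(seed)
    baseline = model(X)  # noise-free predictions
    rms, bias = [], []
    for s in sigmas:
        devs = []
        for _ in range(n_samples):
            noisy = X + rng.normal(0.0, s, size=X.shape)
            devs.append(model(noisy) - baseline)
        devs = np.asarray(devs)
        rms.append(np.sqrt(np.mean(devs ** 2)))  # spread around baseline
        bias.append(np.mean(devs))               # bias indicator
    return np.asarray(rms), np.asarray(bias)

# Toy stand-in for a trained network: a tanh of a fixed weighted sum.
model = lambda X: np.tanh(X @ np.array([0.8, -0.5]))
X = np.random.default_rng(0).uniform(-1.0, 1.0, size=(50, 2))
rms, bias = output_sensitivity(model, X, sigmas=[0.01, 0.05, 0.1], seed=1)
```

For a robust model the RMS curve grows slowly with the noise amplitude, while a bias value far from zero signals a systematic shift in the predictions under input noise.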
