Abstract

Most algorithms used for training feedforward neural networks (NNs) are based on the minimization of a least squares output error cost function. Such a cost function gives good results when the training set consists of noisy outputs and exactly known inputs. However, when collecting data in an identification experiment, it may not be possible to avoid noise when measuring the inputs. In that case, these algorithms yield biased estimates of the NN parameters, and hence biased predicted outputs. This paper proposes a cost function whose minimization reduces the effect of the input noise on the estimated NN parameters. It is constructed by adding a specific regularization term to the least squares output error cost function. A simulation example demonstrates the robustness to noisy inputs of an NN trained with this cost function.
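The abstract does not give the paper's specific regularization term. A common construction of this kind penalizes the network's sensitivity to its inputs, weighted by the input-noise variance; to first order this is equivalent to training with noise-perturbed inputs (cf. Bishop, 1995). The sketch below is a minimal illustration under that assumption, not the paper's method; the network architecture, the `noise_var` parameter, and the synthetic training data are all hypothetical.

```python
# Sketch: least squares output error plus an input-noise penalty.
# The penalty on the squared input Jacobian is one standard choice;
# the paper derives its own specific regularization term.
import torch
import torch.nn as nn

# Small single-input, single-output feedforward NN (hypothetical architecture)
net = nn.Sequential(nn.Linear(1, 10), nn.Tanh(), nn.Linear(10, 1))

def regularized_ls_cost(x, y, noise_var):
    """Least squares output error plus a penalty on input sensitivity.

    noise_var is the (assumed known) variance of the input noise; it
    weights the squared input Jacobian of the network output.
    """
    x = x.clone().requires_grad_(True)
    y_hat = net(x)
    ls_error = ((y_hat - y) ** 2).mean()
    # d y_hat / d x via autograd; create_graph allows backprop through it
    jac, = torch.autograd.grad(y_hat.sum(), x, create_graph=True)
    penalty = noise_var * (jac ** 2).mean()
    return ls_error + penalty

# Training on noisy input/output pairs (synthetic example data)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
x_noisy = torch.randn(100, 1)
y_noisy = torch.sin(3 * x_noisy) + 0.05 * torch.randn(100, 1)
for _ in range(500):
    opt.zero_grad()
    loss = regularized_ls_cost(x_noisy, y_noisy, noise_var=0.01)
    loss.backward()
    opt.step()
```

Penalizing the squared input Jacobian flattens the fitted map around the measured inputs, which counteracts the bias that input noise would otherwise propagate into the parameter estimates.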
