Abstract

Most supervised neural networks are trained by minimizing the mean square error (MSE) of the training set. In the presence of outliers, the resulting neural network model can differ significantly from the underlying model that generates the data. This paper outlines two robust learning methods for a dynamic-structure neural network called the incremental growing multi-experts network (IGMN). Simulation results show that, by using a scaled robust objective function instead of the least-squares function, the influence of outliers in the training data can be completely eliminated, and the network generates a much better approximation in the neighborhood of outliers. Thus, unlike the least mean squares (LMS) cost function, the two proposed robust learning methods, robust least mean squares (RLMS) and least mean log squares (LMLS), are insensitive to the presence of outliers. Moreover, various types of supervised learning algorithms can easily adopt LMLS, which is a parameter-free method.
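To illustrate why a robust objective tames outliers, the sketch below contrasts the quadratic LMS loss with the LMLS loss in the form log(1 + e²/2), the parameter-free robust error measure commonly attributed to Liano and of the kind the abstract describes. This is a minimal illustration, not the paper's implementation; the function names and the test residuals are assumptions made for the example.

```python
import numpy as np

def lms_loss(e):
    # Standard least-squares (LMS) loss: grows quadratically, so a
    # single large residual from an outlier can dominate the total cost.
    return 0.5 * e ** 2

def lmls_loss(e):
    # Least mean log squares (LMLS): log(1 + e^2 / 2).
    # Near e = 0 it behaves like 0.5 * e^2, but for large residuals it
    # grows only logarithmically, bounding each outlier's contribution.
    return np.log(1.0 + 0.5 * e ** 2)

def lmls_grad(e):
    # Influence function d/de log(1 + e^2/2) = e / (1 + e^2/2).
    # It peaks and then decays toward zero as |e| grows, which is why
    # gradient updates driven by outliers fade rather than dominate.
    return e / (1.0 + 0.5 * e ** 2)

# Hypothetical residuals: the last two play the role of outliers.
residuals = np.array([0.1, 0.5, 1.0, 10.0, 100.0])
print(lms_loss(residuals))   # quadratic blow-up on the outliers
print(lmls_loss(residuals))  # slowly growing, bounded-influence penalty
print(lmls_grad(residuals))  # gradient vanishes for large residuals
```

A quick check of the gradient shows the key property: for the residual 100.0 the LMS gradient is 100.0, while the LMLS gradient is about 0.02, so an outlier barely moves the model parameters during training.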
