Abstract

A radial basis function neural network (RBFNN), with its strong function-approximation ability, has proven to be an effective tool for nonlinear process modeling. In many instances, however, the sample set is limited and the model evaluation error is fixed, which makes it very difficult to construct an optimal network structure that ensures the generalization ability of the established nonlinear process model. To solve this problem, a novel RBFNN with high generalization performance (RBFNN-GP) is proposed in this paper. The proposed RBFNN-GP makes three contributions. First, a local generalization error bound that introduces the sample mean and variance is developed to obtain a small error bound and thus narrow the error range. Second, a self-organizing structure method based on the generalization error bound and network sensitivity is established to determine a suitable number of neurons and improve the generalization ability. Third, the convergence of the proposed RBFNN-GP is proved theoretically for both the fixed-structure and structure-adjustment cases. Finally, the performance of the proposed RBFNN-GP is compared with several popular algorithms on two numerical simulations and a practical application. The comparison results verify the effectiveness of RBFNN-GP.
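For readers unfamiliar with the model class under discussion, the following is a minimal sketch of a Gaussian RBF network forward pass. All names, shapes, and parameter values are illustrative; this is not the authors' RBFNN-GP implementation, which additionally adjusts the number of hidden neurons online.

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Forward pass of a Gaussian RBF network.

    x        : (d,) input vector
    centers  : (m, d) hidden-neuron centers
    widths   : (m,) Gaussian widths, one per neuron
    weights  : (m,) output-layer weights
    """
    # Hidden-layer activations: phi_j = exp(-||x - c_j||^2 / (2 * sigma_j^2))
    dists = np.linalg.norm(x - centers, axis=1)
    phi = np.exp(-dists**2 / (2 * widths**2))
    # Linear output layer
    return weights @ phi

# Usage: a 3-neuron network on a 2-D input
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
widths = np.array([1.0, 1.0, 1.0])
weights = np.array([0.5, -0.2, 0.3])
y = rbf_forward(np.array([0.0, 0.0]), centers, widths, weights)
```

A self-organizing variant such as the one proposed here would grow or prune rows of `centers`, `widths`, and `weights` during training according to its error-bound and sensitivity criteria.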

Highlights

  • In recent years, with the continuous development of artificial intelligence and intelligent algorithms, data-driven methods have been widely adopted for modeling because they require neither complex mathematical models nor high maintenance costs.

  • Some problems remain to be solved in practice, for example, how to extend network performance from a limited training set to unseen data, that is, how to design a radial basis function neural network (RBFNN) with good generalization ability [5,6].

  • The generalization performance of an RBFNN is usually measured by the generalization error, which mainly comprises the approximation error, caused by the insufficient representation ability of the network, and the estimation error, caused by the limited number of samples.
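The estimation-error component in the last highlight can be illustrated numerically: when a model is fitted to a small sample, the gap between its error on the training points and its error on fresh inputs reflects how far the finite sample misleads the fit. The target function, sample size, and polynomial model below are all illustrative stand-ins, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function and a small training sample of 10 points
f = lambda x: np.sin(2 * np.pi * x)
x_train = rng.uniform(0, 1, 10)
y_train = f(x_train)

# Fit a deliberately limited model class (degree-3 polynomial)
coeffs = np.polyfit(x_train, y_train, 3)

# Training error vs. error on a dense grid of unseen inputs
x_test = np.linspace(0, 1, 200)
mse_train = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
mse_test = np.mean((np.polyval(coeffs, x_test) - f(x_test)) ** 2)
gap = mse_test - mse_train  # grows, on average, as the sample shrinks
```

Bounding this gap for a finite sample is exactly what the paper's local generalization error bound, built from the sample mean and variance, is intended to do.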


Summary

Introduction

With the continuous development of artificial intelligence and intelligent algorithms, data-driven methods have been widely adopted for modeling because they require neither complex mathematical models nor high maintenance costs. It is worth mentioning that references [5,7,8,9,10] deal with the problem of estimation error under different assumptions. On this basis, the sample complexity of finite networks has been studied to demonstrate that, as the number of samples tends to infinity, the estimation error tends to zero. Due to the limited number of samples, however, even the optimal parameter setting can produce functions far from the target, resulting in errors and poor generalization performance [9]. To solve this problem, Barron et al. [10]
