Abstract
Regularization theory provides a sound framework for solving supervised learning problems. However, there is a gap between the theoretical results and the practical suitability of regularization networks (RN). Radial basis function (RBF) networks can be seen as a special case of regularization networks with a selection of learning algorithms. We study the relationship between RN and RBF networks and experimentally evaluate their approximation and generalization ability with respect to the number of hidden units.

Keywords: Regularization, Radial Basis Function Networks, Generalization
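The RBF-as-regularization-network view mentioned above can be illustrated with a minimal sketch: a Gaussian-basis RBF network whose output weights are fit by ridge-regularized least squares. The center-selection rule (a subset of training points), the kernel width, and the regularization parameter `lam` here are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def rbf_design(X, centers, width):
    # Gaussian basis: pairwise squared distances between inputs and centers.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf(X, y, centers, width, lam=1e-3):
    # Ridge-regularized least squares for the output weights; the term
    # lam * I plays the role of the regularization parameter in the RN view.
    Phi = rbf_design(X, centers, width)
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(len(centers)), Phi.T @ y)

def predict_rbf(X, centers, width, w):
    return rbf_design(X, centers, width) @ w

# Usage: approximate a noisy sine on [0, 2*pi] with 10 hidden units
# (centers taken as the first 10 training points -- an assumption).
rng = np.random.default_rng(0)
X = rng.uniform(0, 2 * np.pi, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)
centers = X[:10]
w = fit_rbf(X, y, centers, width=1.0)
pred = predict_rbf(X, centers, 1.0, w)
```

Varying the number of centers (hidden units) while monitoring held-out error is the kind of approximation-versus-generalization trade-off the abstract refers to.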