Abstract

Although artificial neural networks (ANNs) are powerful and successful in many applications, they typically perform poorly on complex problems with a limited number of training cases, and collecting additional training data is often infeasible or costly. This work therefore presents a new radial-basis network (RBN) design that overcomes the limitations of using ANNs to accurately model regression problems with minimal training data. The new design involves a multi-stage training process that couples an orthogonal least squares (OLS) technique with gradient-based optimization, and it introduces new termination criteria to improve accuracy. In addition, the algorithms are designed to require minimal heuristic parameters, improving ease of use and consistency of performance. The proposed approach is tested on experimental and practical regression problems, and the results are compared with those of typical network models. The results show that the new design achieves improved accuracy with reduced dependence on the amount of training data. As demonstrated, this new ANN provides a platform for approximating potentially slow but high-fidelity computational models, thereby fostering inter-model connectivity and multi-scale modeling.
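
To make the two-stage idea concrete, the sketch below shows one plausible reading of such a training pipeline: a Gaussian RBF regressor whose centers are chosen greedily by an OLS forward-selection step (in the spirit of Chen, Cowan, and Grant's OLS algorithm for RBF networks) and whose shared basis width and output weights are then refined by gradient descent. This is a minimal illustration under stated assumptions, not the authors' implementation; the function names, the single shared width, the Gaussian basis, and the hyperparameters (n_centers, lr, n_steps) are all assumptions, and the paper's new termination criteria are not reproduced here.

import numpy as np

# Hedged sketch of a two-stage RBN fit: OLS center selection, then
# gradient refinement. Illustrative only; names and defaults are assumed.

def gaussian_design(X, centers, width):
    # phi[i, j] = exp(-||x_i - c_j||^2 / (2 * width^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def ols_select_centers(X, y, width, n_centers):
    # Stage 1: greedy forward selection of centers from the training
    # inputs, orthogonalizing the remaining candidates at each step.
    Phi = gaussian_design(X, X, width)   # one candidate column per point
    residual = y.astype(float).copy()
    chosen = []
    for _ in range(n_centers):
        # Error-reduction score for each unused candidate column.
        scores = (Phi.T @ residual) ** 2 / ((Phi ** 2).sum(axis=0) + 1e-12)
        scores[chosen] = -np.inf
        k = int(np.argmax(scores))
        chosen.append(k)
        q = Phi[:, k] / np.linalg.norm(Phi[:, k])
        residual -= q * (q @ residual)   # deflate the residual
        Phi -= np.outer(q, q @ Phi)      # orthogonalize the candidates
    return X[chosen]

def fit_rbn(X, y, width=1.0, n_centers=10, lr=1e-2, n_steps=500):
    # Stage 2: refine the shared width (in log space, to keep it positive)
    # and the output weights by gradient descent on half the mean-squared
    # error, starting from a least-squares warm start for the weights.
    centers = ols_select_centers(X, y, width, n_centers)
    w = np.linalg.lstsq(gaussian_design(X, centers, width), y, rcond=None)[0]
    log_width = np.log(width)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    for _ in range(n_steps):
        s = np.exp(log_width)
        Phi = np.exp(-d2 / (2.0 * s ** 2))
        err = Phi @ w - y
        grad_w = Phi.T @ err / len(y)
        # dPhi/d(log s) = Phi * d2 / s^2, by the chain rule.
        grad_ls = err @ ((Phi * d2 / s ** 2) @ w) / len(y)
        w -= lr * grad_w
        log_width -= lr * grad_ls
    return centers, w, np.exp(log_width)

# Example: fit a small 1-D toy regression with few training points.
X = np.linspace(-3, 3, 25).reshape(-1, 1)
y = np.sin(X).ravel()
centers, w, s = fit_rbn(X, y, n_centers=8)

In a fuller pipeline, Stage 1 would stop when termination criteria such as those the paper introduces are met rather than at a fixed n_centers, and Stage 2 would typically use an adaptive step size rather than a fixed learning rate.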
