Abstract

This paper presents an enhanced method for reducing the computational complexity of function approximation problems by dividing the input data vectors into small groups, thereby mitigating the curse of dimensionality. As the dimensionality of the input data grows, so do the computational complexity and memory requirements of the approximation problem. A divide-and-conquer algorithm distributes the input data of the complex problem across a set of divided radial basis function neural networks (Div-RBFNNs). Under this algorithm, the input variables are assigned to different RBFNNs according to whether the number of input dimensions is odd or even. Each Div-RBFNN is executed independently, and the outputs of all Div-RBFNNs are combined through a linear combination function. The parameters of each Div-RBFNN (centers, radii, and weights) are optimized using an efficient learning algorithm based on an enhanced clustering algorithm for function approximation, which clusters the centers of the RBFs. Compared to a traditional RBFNN, the proposed methodology reduces the number of free parameters of the system. It outperforms the traditional RBFNN not only in execution time but also in the number of system parameters, yielding a lower approximation error.
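As a rough illustration of the divide-and-conquer idea described above (not the paper's exact algorithm), the sketch below splits the input dimensions into fixed-size groups, builds a Gaussian RBF sub-network on each low-dimensional group, and fits the final linear combination weights over all sub-network outputs by least squares. The center selection here uses random sampling of training points as a stand-in for the paper's clustering-based optimization, and all function names, the radius heuristic, and parameter choices are assumptions made for illustration.

```python
import numpy as np

def gaussian_rbf(X, centers, radii):
    """Gaussian RBF activations: one column per center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * radii ** 2))

def fit_div_rbfnn(X, y, group_size=2, n_centers=8, seed=0):
    """Divide the input dimensions into groups, build one RBF sub-network
    per group, then fit a single linear combination over all sub-network
    outputs (a simplified stand-in for the Div-RBFNN training scheme)."""
    rng = np.random.default_rng(seed)
    dims = X.shape[1]
    groups = [list(range(i, min(i + group_size, dims)))
              for i in range(0, dims, group_size)]
    subnets, blocks = [], []
    for g in groups:
        Xg = X[:, g]
        # Centers: random training points (stand-in for the paper's
        # clustering-based center selection).
        idx = rng.choice(len(Xg), size=n_centers, replace=False)
        centers = Xg[idx]
        # Radius heuristic: mean pairwise distance between centers.
        dist = np.linalg.norm(centers[:, None] - centers[None, :], axis=2)
        radii = np.full(n_centers, dist.mean() + 1e-8)
        subnets.append((g, centers, radii))
        blocks.append(gaussian_rbf(Xg, centers, radii))
    # Joint least-squares fit of the linear combination weights.
    Phi = np.hstack(blocks)
    W, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return groups, subnets, W

def predict_div_rbfnn(X, subnets, W):
    blocks = [gaussian_rbf(X[:, g], c, r) for (g, c, r) in subnets]
    return np.hstack(blocks) @ W
```

Because each sub-network only sees a low-dimensional slice of the input, its centers and radii live in that slice, so the total number of parameters grows with the number of groups rather than with the full input dimensionality, which is the source of the claimed savings.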

