Abstract

Random Weight Networks have been extensively used in many applications over the last decade because they offer strong features such as fast learning and good generalization performance. Most traditional training techniques for Random Weight Networks select the connection weights and hidden biases randomly and thus suffer from local optima stagnation and degraded convergence. The literature shows that stochastic population-based optimization techniques are a well-regarded and reliable alternative for optimizing Random Weight Networks because of their high local optima avoidance and flexibility. In addition, many practitioners and non-expert users find it difficult to set the other parameters of the network, such as the number of hidden neurons, the activation function, and the regularization factor. In this paper, an approach for training Random Weight Networks is proposed based on a recent variant of particle swarm optimization called competitive swarm optimization. Unlike most Random Weight Network training techniques, which optimize only the input weights and hidden biases, the proposed approach automatically tunes the weights, the biases, the number of hidden neurons, the regularization factor, and the embedded activation function of the network simultaneously. The goal is to help users effectively identify a proper structure and hyperparameter values for their applications while obtaining reasonable prediction results. Twenty benchmark classification datasets are used to compare the proposed approach with different basic and hybrid Random Weight Network-based models. The experimental results on the benchmark datasets show that reasonable classification results can be obtained by automatically tuning the hyperparameters with the proposed approach.
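
To make the idea concrete, the sketch below shows one way such a scheme could be realized: each candidate solution encodes the input weights, the hidden biases, the number of active hidden neurons, the regularization factor, and an activation-function index, and its fitness is the validation error of the resulting Random Weight Network, whose output weights are obtained in closed form via ridge regression. The `cso_step` function follows the published competitive swarm optimizer update, in which particles compete in random pairs and only each pair's loser is updated. This is a minimal illustrative sketch, not the authors' implementation: the function names (`decode_candidate`, `rwn_fitness`, `cso_step`), the activation set, and the bounds such as `MAX_HIDDEN` are assumptions made for the example.

```python
# Minimal sketch (assumed encoding, not the paper's exact method) of tuning a
# Random Weight Network's weights, biases, and hyperparameters with CSO.
import numpy as np

ACTIVATIONS = {0: lambda z: 1.0 / (1.0 + np.exp(-z)),   # sigmoid
               1: np.tanh,                               # tanh
               2: lambda z: np.maximum(z, 0.0)}          # ReLU

MAX_HIDDEN = 50   # assumed upper bound on hidden neurons
N_FEATURES = 10   # input dimensionality of the dataset (assumed)


def decode_candidate(x):
    """Split a flat position vector into network weights and hyperparameters."""
    n_w = MAX_HIDDEN * N_FEATURES
    W = x[:n_w].reshape(MAX_HIDDEN, N_FEATURES)          # input weights
    b = x[n_w:n_w + MAX_HIDDEN]                          # hidden biases
    # The last three entries encode: number of active hidden neurons,
    # regularization factor C (log scale), and the activation-function index.
    n_hidden = int(np.clip(round(x[-3]), 1, MAX_HIDDEN))
    C = 10.0 ** np.clip(x[-2], -5, 5)
    act = ACTIVATIONS[int(np.clip(round(x[-1]), 0, 2))]
    return W[:n_hidden], b[:n_hidden], C, act


def rwn_fitness(x, X_train, y_train, X_val, y_val):
    """Fitness = validation error of the RWN defined by candidate x.
    Output weights come from the standard closed-form ridge solution."""
    W, b, C, act = decode_candidate(x)
    H = act(X_train @ W.T + b)                           # hidden-layer output
    T = np.eye(int(y_train.max()) + 1)[y_train]          # one-hot targets
    beta = np.linalg.solve(H.T @ H + np.eye(H.shape[1]) / C, H.T @ T)
    preds = np.argmax(act(X_val @ W.T + b) @ beta, axis=1)
    return np.mean(preds != y_val)                       # classification error


def cso_step(X, V, fitness, phi=0.1):
    """One competitive swarm optimization iteration: particles compete in
    random pairs; only each pair's loser is updated, learning from the
    winner and from the swarm's mean position."""
    n, d = X.shape
    idx = np.random.permutation(n)
    x_mean = X.mean(axis=0)
    fits = np.array([fitness(x) for x in X])
    for i, j in zip(idx[: n // 2], idx[n // 2:]):
        win, lose = (i, j) if fits[i] < fits[j] else (j, i)
        r1, r2, r3 = (np.random.rand(d) for _ in range(3))
        V[lose] = (r1 * V[lose]
                   + r2 * (X[win] - X[lose])
                   + phi * r3 * (x_mean - X[lose]))
        X[lose] = X[lose] + V[lose]
    return X, V
```

In such a setup the swarm would be initialized with random positions and zero velocities, `cso_step` would be called for a fixed number of iterations with `fitness` bound to a training/validation split, and the best candidate would finally be decoded into the deployed network.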
