Abstract
Artificial Neural Networks (ANNs) offer unique opportunities in numerous research fields. Due to their remarkable generalization capabilities, they have attracted attention for solving challenging problems such as classification, function approximation, pattern recognition, and image processing, which can be quite complex to model mathematically in practice. One of the most vital issues regarding ANNs is the training process. The aim at this stage is to find the optimum values of the ANN parameters, such as weights and biases, which embed the whole information of the network. Traditional gradient-descent-based training methods include various algorithms, of which backpropagation is one of the best known. Such methods have been shown to produce outstanding results; however, they are known to have two major theoretical and computational limitations: slow convergence speed and susceptibility to local minima. For this reason, numerous stochastic search algorithms and heuristic methods have been individually used to train ANNs. However, methods that bring together the diverse strengths of different optimizers are still lacking in the related literature. In this regard, this paper aims to develop a training algorithm operating within a hyper-heuristic (HH) framework, which resembles a reinforcement-learning-based machine learning algorithm. The proposed method is used to train Feed-forward Neural Networks, which are specific forms of ANNs. The proposed HH employs individual metaheuristic algorithms such as Particle Swarm Optimization (PSO), Differential Evolution (DE), and the Flower Pollination Algorithm (FPA) as low-level heuristics. Based on a feedback mechanism, the proposed HH learns over the epochs and encourages or discourages the corresponding metaheuristic accordingly.
Thus, due to its stochastic nature, the HH attempts to avoid local minima while exploiting promising regions of the search space more effectively by increasing the probability of invoking the relatively more promising heuristics during training. The proposed method is tested on both function approximation and classification problems, which have been adopted from the UCI Machine Learning Repository and the existing literature. According to the comprehensive experimental study and statistically verified results, which point out significant improvements, the developed HH-based training algorithm achieves significantly superior results compared to some of the other optimizers considered.
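The feedback-driven selection described above can be illustrated with a minimal sketch. The class below is a hypothetical construction for illustration only, assuming a simple additive reward/penalty update over per-heuristic scores that are normalized into selection probabilities; the paper's actual feedback mechanism and parameter values are not specified in this abstract.

```python
import random

class HyperHeuristicSelector:
    """Sketch of a probability-based hyper-heuristic selector.

    Chooses among low-level heuristics (e.g. PSO, DE, FPA) and adapts
    the selection probabilities from observed fitness improvements.
    """

    def __init__(self, names, reward=0.1, penalty=0.05):
        self.names = list(names)
        # Equal initial preference for every low-level heuristic.
        self.scores = {n: 1.0 for n in self.names}
        self.reward = reward
        self.penalty = penalty

    def probabilities(self):
        # Normalize scores into a probability distribution.
        total = sum(self.scores.values())
        return {n: s / total for n, s in self.scores.items()}

    def select(self, rng=random):
        # Roulette-wheel selection proportional to current scores.
        probs = self.probabilities()
        r = rng.random()
        cumulative = 0.0
        for n in self.names:
            cumulative += probs[n]
            if r <= cumulative:
                return n
        return self.names[-1]

    def feedback(self, name, improved):
        # Encourage a heuristic that improved the best-so-far fitness,
        # discourage it otherwise (floored so no heuristic is ever
        # excluded entirely, preserving the stochastic exploration).
        if improved:
            self.scores[name] += self.reward
        else:
            self.scores[name] = max(0.01, self.scores[name] - self.penalty)
```

In a training epoch, the selector would pick one metaheuristic, let it update the candidate weight/bias vectors, and then report whether the best fitness improved, so that more promising heuristics are invoked with growing probability.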
Published in: Engineering Science and Technology, an International Journal