Abstract
For many decades, artificial neural networks (ANNs) have produced successful results in thousands of problems across many disciplines. Back-propagation (BP) is one of the candidate algorithms for training an ANN. Owing to the way BP searches for a solution, it suffers from an important drawback: it can become stuck in a local minimum rather than reaching the global one. Recent studies introduce meta-heuristic techniques to train ANNs. The current work proposes a framework in which the grey wolf optimizer (GWO) provides the initial solution to a BP ANN. Five datasets are used to benchmark the performance of GWOBP against other competitors. The first competitor is a BP ANN optimized by a genetic algorithm. The second is a BP ANN powered by particle swarm optimization. The third is the BP algorithm itself, and the last is a feedforward ANN enhanced by GWO. The experiments carried out show that GWOBP outperforms the compared algorithms.
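The two-stage idea summarized above can be sketched as follows. This is a minimal illustration, not the paper's implementation: a standard GWO loop (alpha/beta/delta leaders, coefficient `a` decaying from 2 to 0) finds a starting point, which a gradient-descent loop (standing in for BP) then refines. The sphere function plays the role of a network's training error here; all function names and parameter values are illustrative assumptions.

```python
import numpy as np

# Hypothetical stand-in for a network's training error (sphere function).
def loss(x):
    return float(np.sum(x ** 2))

def grad(x):
    # Analytic gradient of the sphere function; the analogue of
    # the gradients back-propagation would compute for a real network.
    return 2 * x

def gwo(obj, dim=5, n_wolves=10, iters=50, seed=0):
    """Minimal grey wolf optimizer; returns the best position found."""
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(-5, 5, (n_wolves, dim))
    for t in range(iters):
        fitness = np.array([obj(w) for w in wolves])
        # The three fittest wolves lead the pack: alpha, beta, delta.
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2 - 2 * t / iters  # control coefficient decays linearly 2 -> 0
        new = np.empty_like(wolves)
        for i, w in enumerate(wolves):
            candidates = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                candidates.append(leader - A * np.abs(C * leader - w))
            new[i] = np.mean(candidates, axis=0)  # move toward the leaders
        wolves = new
    fitness = np.array([obj(w) for w in wolves])
    return wolves[np.argmin(fitness)]

# Stage 1: GWO supplies the initial solution.
x0 = gwo(loss)
# Stage 2: gradient descent (the BP analogue) refines it locally.
x = x0.copy()
for _ in range(100):
    x -= 0.1 * grad(x)
```

In the framework the paper describes, stage 1 would search the network's weight space globally and stage 2 would be ordinary back-propagation started from those weights, combining GWO's exploration with BP's fast local convergence.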