Abstract

The seeker optimization algorithm (SOA) is a novel population-based heuristic stochastic search algorithm that simulates the act of human searching. In the SOA, the search direction is determined by a seeker's egotistic, altruistic, and pro-activeness behaviors, while the step length is given by uncertainty-reasoning behavior. This paper presents the application of the SOA to tuning both the structures and the parameters of artificial neural networks (ANNs), as a new evolutionary method of ANN training. Simulation experiments on pattern classification and function approximation are performed, and the SOA is compared with backpropagation (BP) algorithms and other evolutionary algorithms (EAs). The results show that the performance of the SOA is better than, or at least equivalent to, that of the other EAs (i.e., DE and two variants of PSO) on all the listed problems. Moreover, ANNs with link switches trained by the SOA provide better or comparable learning capability with far fewer links than those trained by BP algorithms (i.e., GDX, RP, OSS, and SCG). Hence, the SOA can simultaneously tune network structures and weight values, and although it is more computationally intensive, it is a promising candidate for training ANNs.
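To illustrate the mechanism the abstract describes, the following is a minimal, hedged sketch of a single seeker's position update: the search direction is a stochastic vote among the egotistic, altruistic, and pro-activeness cues, and the step length comes from a simple fuzzy-membership rule standing in for the uncertainty-reasoning behavior. The function name `seeker_step`, the specific membership bounds (`0.95`, `0.0111`), and the linear rank-to-membership mapping are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def seeker_step(x, p_best, g_best, prev_dir, rank, pop_size, delta):
    """One illustrative seeker update (sketch, not the paper's exact rule).

    x        : current position of this seeker
    p_best   : seeker's own historical best (egotistic cue)
    g_best   : neighborhood best (altruistic cue)
    prev_dir : previous movement direction (pro-activeness cue)
    rank     : fitness rank of this seeker (0 = best in population)
    delta    : assumed per-dimension step-scale parameter
    """
    d_ego = np.sign(p_best - x)      # egotistic behavior: toward own best
    d_alt = np.sign(g_best - x)      # altruistic behavior: toward neighborhood best
    d_pro = np.sign(prev_dir)        # pro-activeness behavior: keep momentum
    # Stochastic vote among the three behavioral cues.
    phi = rng.random(2)
    direction = np.sign(d_ego + phi[0] * d_alt + phi[1] * d_pro)

    # Uncertainty-reasoning step length via a Gaussian membership function:
    # better-ranked seekers get membership mu closer to 1, hence smaller
    # steps (exploitation); worse-ranked seekers take larger steps.
    mu_max, mu_min = 0.95, 0.0111    # assumed membership bounds
    mu = mu_max - rank / (pop_size - 1) * (mu_max - mu_min)
    alpha = delta * np.sqrt(-np.log(mu))

    return x + alpha * direction
```

A full SOA run would apply this update to every seeker each generation and, for the ANN-training application in the paper, encode the network's link switches and weights in the position vector.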
