Abstract

The artificial bee colony (ABC) algorithm, which has been widely studied for years, is a stochastic algorithm for solving global optimization problems. Taking advantage of the information in a global best solution, the Gbest-guided artificial bee colony (GABC) algorithm goes further by modifying the solution search equation. However, the coefficient in its equation is based only on a numerical test and is not suitable for all problems. Therefore, we propose a novel algorithm, the Gbest-guided ABC algorithm with gradient information (GABCG), to address this weakness. A new solution search equation based on variable gradients is established that dispenses with coefficient factors. In addition, the gradients are applied to differentiate the priorities of different variables and to strengthen the judgment of abandoned solutions. Extensive experiments are conducted with the GABCG algorithm on a set of benchmark functions. The results demonstrate that the GABCG algorithm is more effective than the traditional ABC algorithm and the GABC algorithm, especially in the latter stages of the evolution.
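For concreteness, the following is a minimal NumPy sketch of the two published search equations referenced above: the standard ABC update v_ij = x_ij + φ_ij(x_ij − x_kj) with φ_ij drawn uniformly from [−1, 1], and the GABC update, which adds a gbest-guided term ψ_ij(y_j − x_ij) with ψ_ij drawn uniformly from [0, C], where C was fixed by a numerical test. The third function is only an illustration of the gradient-guided idea; the paper's exact GABCG equation is not reproduced here, and the step handling is a hypothetical choice.

```python
import numpy as np

def abc_candidate(x, x_k, rng):
    """Standard ABC search equation: v_ij = x_ij + phi_ij * (x_ij - x_kj),
    where phi_ij ~ U(-1, 1) and x_k is a randomly chosen neighbour solution."""
    phi = rng.uniform(-1.0, 1.0, size=x.shape)
    return x + phi * (x - x_k)

def gabc_candidate(x, x_k, gbest, rng, C=1.5):
    """GABC search equation: the ABC update plus a gbest-guided term
    psi_ij * (y_j - x_ij), with psi_ij ~ U(0, C); the value of C comes
    from a numerical test, which is the weakness GABCG targets."""
    phi = rng.uniform(-1.0, 1.0, size=x.shape)
    psi = rng.uniform(0.0, C, size=x.shape)
    return x + phi * (x - x_k) + psi * (gbest - x)

def gradient_guided_candidate(x, x_k, grad, rng, step=0.1):
    """Illustrative only: replaces the coefficient-based gbest term with a
    move along the negative (normalised) gradient. This is NOT the exact
    GABCG equation, just a sketch of the gradient-guided idea."""
    phi = rng.uniform(-1.0, 1.0, size=x.shape)
    return x + phi * (x - x_k) - step * grad / (np.linalg.norm(grad) + 1e-12)
```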

Highlights

  • As computer technology advances, more and more researchers develop and apply algorithms to solve optimization problems

  • The results demonstrate that the Gbest-guided ABC algorithm with gradient information (GABCG) is more effective than the traditional artificial bee colony (ABC) algorithm and the Gbest-guided artificial bee colony (GABC) algorithm, especially in the latter stages of the evolution

  • We have proposed an improved Gbest-guided artificial bee colony algorithm with gradient information, called the GABCG algorithm

Introduction

As computer technology advances, more and more researchers develop and apply algorithms to solve optimization problems. These algorithms can be broadly divided into gradient-based and gradient-free (or stochastic) optimization algorithms. Gradient-based optimization algorithms search along the gradient direction; they have a high convergence rate and are suitable for solving problems with a large design space [1]. Gradient-free optimization algorithms are robust and quite effective for solving multi-modal optimization problems [3]. Such algorithms can be integrated into different optimization designs, but they are computationally expensive, especially in cases with numerous design parameters [4]. It is therefore important to improve the convergence of stochastic algorithms through suitable modifications, as the sketch below illustrates.
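To make the contrast concrete, here is a minimal Python sketch of the two families: a gradient-based step that follows the negative gradient, and a gradient-free random search that relies only on objective evaluations. The sphere function, step sizes, and iteration counts are illustrative choices, not values from the paper.

```python
import numpy as np

def gradient_descent(grad_f, x0, lr=0.1, iters=100):
    """Gradient-based search: move along the negative gradient direction,
    giving a high convergence rate when gradients are available."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - lr * grad_f(x)
    return x

def random_search(f, x0, sigma=0.5, iters=2000, seed=0):
    """Gradient-free (stochastic) search: keep random perturbations that
    improve the objective; robust on multi-modal problems but expensive
    in function evaluations."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        cand = x + rng.normal(0.0, sigma, size=x.shape)
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
    return x

# Example on the sphere function f(x) = sum(x_i^2), whose gradient is 2x.
sphere = lambda x: float(np.sum(x**2))
print(gradient_descent(lambda x: 2.0 * x, [3.0, -4.0]))
print(random_search(sphere, [3.0, -4.0]))
```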
