Abstract

Stochastic global optimization (SGO) methods such as particle swarm optimization (PSO), the genetic algorithm (GA), and cuckoo search (CS) have been widely used in a variety of optimization problems, partly because of their ability to find the global optimum. Most existing SGO algorithms are designed for gradient-free problems and ignore gradient information even when the gradient is readily available, resulting in low efficiency and high computational cost. In this paper, we introduce a hybrid self-adaptive gradient-based cuckoo search (HAGCS) to tackle this limitation. HAGCS first takes a gradient-based local random walk to explore the search space, and then applies gradient-based local optimization (GBLO) to find a local minimum near the current best solution, which is more efficient and precise than standard CS. Additionally, to avoid the premature convergence that the use of the gradient can cause, we introduce two novel self-adaptation and diversity-promotion strategies into HAGCS. These help HAGCS find proper control parameters and prevent it from getting stuck at local minima or stationary points. Lastly, we compare HAGCS with PSO, GA, CS, and five refinements of CS on 12 benchmark functions. The experimental results show that, compared to the other methods, HAGCS converges about twice as fast, achieves higher accuracy, and attains a 27.5% higher success rate in finding the global minimum on high-dimensional problems. Even when the dimension of the problem is 1000, HAGCS still achieves a 64% success rate in accurately locating the global minimum.
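To make the two-stage structure concrete, the following is a minimal Python sketch of a gradient-assisted cuckoo search whose best solution is then refined by gradient-based local optimization. The function name hagcs_sketch, the step-size parameter alpha, the heavy-tailed random step, and the use of SciPy's L-BFGS-B for the GBLO stage are illustrative assumptions; the paper's actual update rules, self-adaptation, and diversity-promotion strategies are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def hagcs_sketch(f, grad, dim=10, n_nests=25, n_iter=200,
                 alpha=0.01, p_a=0.25, bounds=(-5.0, 5.0), rng=None):
    """Sketch of a gradient-assisted cuckoo search (illustrative only).

    f    : objective function, R^dim -> R
    grad : gradient of f
    p_a  : fraction of worst nests abandoned each iteration (standard CS)
    """
    rng = np.random.default_rng(rng)
    lo, hi = bounds
    nests = rng.uniform(lo, hi, (n_nests, dim))
    fitness = np.apply_along_axis(f, 1, nests)

    for _ in range(n_iter):
        best = nests[np.argmin(fitness)]
        for i in range(n_nests):
            # Gradient-biased local random walk: a heavy-tailed step
            # (stand-in for the Levy flight of CS) plus a small
            # descent component -- an assumed hybridization.
            step = alpha * rng.standard_cauchy(dim)
            trial = nests[i] + step * (nests[i] - best) - alpha * grad(nests[i])
            trial = np.clip(trial, lo, hi)
            ft = f(trial)
            if ft < fitness[i]:
                nests[i], fitness[i] = trial, ft
        # Abandon a fraction p_a of the worst nests (standard CS rule);
        # max(1, ...) avoids an empty slice when p_a * n_nests < 1.
        n_drop = max(1, int(p_a * n_nests))
        worst = np.argsort(fitness)[-n_drop:]
        nests[worst] = rng.uniform(lo, hi, (n_drop, dim))
        fitness[worst] = np.apply_along_axis(f, 1, nests[worst])

    # GBLO stage: polish the best nest with a gradient-based local
    # optimizer, here approximated with SciPy's L-BFGS-B.
    best = nests[np.argmin(fitness)]
    res = minimize(f, best, jac=grad, method="L-BFGS-B",
                   bounds=[(lo, hi)] * dim)
    return res.x, res.fun
```

A quick usage example on the sphere function, whose gradient is available in closed form:

```python
f = lambda x: np.sum(x**2)   # sphere function
g = lambda x: 2 * x          # its gradient
x_best, f_best = hagcs_sketch(f, g, dim=10, rng=0)
```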
