Abstract

A new hybrid method for global optimization of continuous functions is proposed. It combines an extended random search method with a descent method. Random search is used as the global search strategy. A newly developed distribution-based region control exploits already detected local minima to refine this search strategy; the approach resembles classical step size control in deterministic optimization. The descent method is embedded as a local search strategy for the detection of local minima. A particular realization of this approach, called CGRS, is presented in this paper. In CGRS the conjugate gradient method is used as the descent method. A proof of global convergence in probability for CGRS is given and extended to other descent methods used in the hybrid optimization approach. To demonstrate the numerical properties of the approach, test sets of multidimensional non-convex optimization problems are solved. The results are compared to well-established hybrid methods for global optimization. The new algorithm shows a high success rate with good and adjustable solution precision. Parameter tuning is possible but not necessary. The new method proves to be efficient in terms of computational costs.
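The basic hybrid idea described above (random global sampling refined by a local descent step) can be sketched as follows. This is only an illustrative toy, not the authors' CGRS: plain gradient descent stands in for the conjugate gradient method, the distribution-based region control is omitted, and the Rastrigin test function, step size, and iteration counts are all assumptions chosen for the example.

```python
import math
import random

def rastrigin(x):
    # Classic non-convex test function; global minimum 0 at x = (0, ..., 0).
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def grad_rastrigin(x):
    # Analytic gradient of the Rastrigin function.
    return [2 * xi + 20 * math.pi * math.sin(2 * math.pi * xi) for xi in x]

def local_descent(x, grad, lr=0.001, steps=200):
    # Plain gradient descent as a stand-in for the conjugate gradient method.
    for _ in range(steps):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

def hybrid_search(f, grad, dim, bounds, n_starts=30, seed=0):
    # Global phase: random starting points; local phase: descent to a minimum.
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_starts):
        x0 = [rng.uniform(*bounds) for _ in range(dim)]
        x = local_descent(x0, grad)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

x_best, f_best = hybrid_search(rastrigin, grad_rastrigin, dim=2, bounds=(-5.12, 5.12))
```

The sketch makes the division of labor visible: random sampling explores the whole box, while the descent step is responsible for the precision of each candidate minimum, which is why solution precision is adjustable via the local search.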
