Abstract

Developing general global optimization algorithms is a difficult task, especially for functions with a huge number of local minima in high dimensions. Stochastic metaheuristic algorithms often provide the only viable alternative for such problems, since they are designed to approximate the global optimum. However, their main drawback is that they require a large number of function evaluations in order to escape local optima, thus exhibiting a low convergence order and, as a result, a high computational cost. The situation worsens as the dimension increases: the number of local minima usually grows sharply, as does the cost of each function evaluation, making it harder to cover the whole search space. Deterministic local optimization methods, on the other hand, exhibit faster convergence rates and require fewer function evaluations, and therefore a lower computational cost, although they can get stuck in local minima. One way to obtain faster global optimization algorithms is to combine local and global methods, so as to benefit from the higher convergence rates of the local ones while retaining the global search properties. Another way to speed up global optimization algorithms is to exploit efficient parallel hardware architectures. Nowadays, a good alternative is to take advantage of graphics processing units (GPUs), which are massively parallel processors and have become an accessible and inexpensive platform for high-performance computing. This work presents a parallel GPU implementation of hybrid two-phase optimization methods that combine the metaheuristic Simulated Annealing algorithm, used to locate a global minimum, with different local optimization methods, namely a conjugate gradient algorithm and a version of the Nelder–Mead method.
The performance of the parallelized versions of these hybrid methods is analyzed on a set of well-known test problems. Results show that GPUs represent an efficient alternative for the parallel implementation of two-phase global optimization methods.
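The paper's actual GPU implementation is not reproduced here, but the two-phase idea it describes can be illustrated with a minimal, sequential sketch: a Simulated Annealing global phase followed by a Nelder–Mead local polish. The Rastrigin test function, all parameter values, and the helper names below are illustrative assumptions, not taken from the paper.

```python
import math
import random

def rastrigin(x):
    """Classic multimodal test function; global minimum 0 at the origin (illustrative choice)."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def simulated_annealing(f, x0, t0=10.0, cooling=0.95, steps_per_t=50, t_min=1e-3,
                        step=0.5, rng=random):
    """Global phase: random perturbations, accepting uphill moves with probability exp(-df/t)."""
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    t = t0
    while t > t_min:
        for _ in range(steps_per_t):
            cand = [xi + rng.uniform(-step, step) for xi in x]
            fc = f(cand)
            # Accept improvements always; accept worse points with Metropolis probability.
            if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = list(x), fx
        t *= cooling  # geometric cooling schedule (assumed, not the paper's schedule)
    return best, fbest

def nelder_mead(f, x0, step=0.1, iters=200):
    """Local phase: a compact Nelder-Mead simplex search (reflect/expand/contract/shrink)."""
    n = len(x0)
    simplex = [list(x0)] + [[x0[j] + (step if j == i else 0.0) for j in range(n)]
                            for i in range(n)]
    for _ in range(iters):
        simplex.sort(key=f)
        worst = simplex[-1]
        centroid = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        refl = [centroid[j] + (centroid[j] - worst[j]) for j in range(n)]
        if f(refl) < f(simplex[0]):
            exp = [centroid[j] + 2.0 * (centroid[j] - worst[j]) for j in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            contr = [centroid[j] + 0.5 * (worst[j] - centroid[j]) for j in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:  # shrink every vertex toward the best one
                best = simplex[0]
                simplex = [best] + [[best[j] + 0.5 * (p[j] - best[j]) for j in range(n)]
                                    for p in simplex[1:]]
    simplex.sort(key=f)
    return simplex[0], f(simplex[0])

# Two-phase run: SA locates a promising basin, Nelder-Mead refines within it.
rng = random.Random(42)
x0 = [rng.uniform(-5.12, 5.12) for _ in range(2)]
x_sa, f_sa = simulated_annealing(rastrigin, x0, rng=rng)
x_opt, f_opt = nelder_mead(rastrigin, x_sa)
print("start f =", rastrigin(x0), " after SA f =", f_sa, " after NM f =", f_opt)
```

In the paper's setting the expensive part, the many independent function evaluations of the global phase, is what maps naturally onto the GPU's massively parallel architecture; the sketch above only shows the algorithmic structure, and uses Nelder–Mead rather than the conjugate gradient variant also studied in the work.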
