Abstract
Numerous optimization algorithms based on heuristic techniques have been proposed in recent years. Most of them are inspired by phenomena in nature and require the correct tuning of some algorithm-specific parameters. Heuristic algorithms allow problems to be solved more quickly than deterministic methods. The computational time required to obtain the optimum (or near-optimum) value of a cost function is a critical aspect of scientific applications in countless fields of knowledge. Therefore, we propose efficient parallel algorithms based on the Teaching-Learning-Based Optimization (TLBO) algorithm, which is efficient and free of algorithm-specific parameters to be tuned. The parallel proposals were designed with two levels of parallelization, one for shared-memory platforms and the other for distributed-memory platforms, obtaining good parallel performance on both types of parallel architecture as well as on heterogeneous-memory parallel platforms.
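To make the abstract concrete, the following is a minimal pure-Python sketch of the sequential TLBO scheme (teacher phase followed by learner phase, with greedy selection). The function names, population size, iteration count, and the sphere test function are illustrative assumptions, not the paper's implementation; the key property shown is that TLBO needs no algorithm-specific parameters to tune, only common ones (population size and iteration count).

```python
import random

def sphere(x):
    # Illustrative cost function (sum of squares); minimum 0 at the origin.
    return sum(v * v for v in x)

def tlbo(cost, dim, pop_size=20, iters=100, lo=-10.0, hi=10.0, seed=0):
    """Sketch of TLBO: no crossover/mutation rates or other
    algorithm-specific parameters, unlike GA, DE, SA, etc."""
    rnd = random.Random(seed)
    clip = lambda v: max(lo, min(hi, v))
    pop = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    costs = [cost(x) for x in pop]
    for _ in range(iters):
        # Teacher phase: the best learner pulls the class toward itself.
        teacher = pop[costs.index(min(costs))]
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i in range(pop_size):
            tf = rnd.choice((1, 2))  # teaching factor, randomly 1 or 2
            new = [clip(pop[i][d] + rnd.random() * (teacher[d] - tf * mean[d]))
                   for d in range(dim)]
            c = cost(new)
            if c < costs[i]:  # greedy selection: keep only improvements
                pop[i], costs[i] = new, c
        # Learner phase: each learner interacts with a random peer,
        # moving toward a better peer or away from a worse one.
        for i in range(pop_size):
            j = rnd.randrange(pop_size - 1)
            if j >= i:
                j += 1
            sign = 1.0 if costs[j] < costs[i] else -1.0
            new = [clip(pop[i][d] + rnd.random() * sign * (pop[j][d] - pop[i][d]))
                   for d in range(dim)]
            c = cost(new)
            if c < costs[i]:
                pop[i], costs[i] = new, c
    b = costs.index(min(costs))
    return pop[b], costs[b]

best_x, best_cost = tlbo(sphere, dim=5)
```

The two population-wide loops (teacher and learner phases) are the natural targets for the two-level parallelization described in the abstract: each phase evaluates the cost function independently per learner.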
Highlights
The purpose of optimization algorithms is to find the optimal value of a particular cost function. Depending on the application in which they are used, cost functions can be highly complex, may need to be optimized repeatedly, and may involve different numbers of parameters.
The parallel platform used was composed of HP ProLiant SL390 G7 nodes, where each node was equipped with two Intel
Worth noting, the Teaching-Learning-Based Optimization (TLBO) parallel proposal presented in [13] obtains efficiencies of only between 20% and 30%, and other parallel proposals have been applied to state-of-the-art algorithms such as the Dual Population Genetic Algorithm (DPGA)
Summary
The purpose of optimization algorithms is to find the optimal value for a particular cost function. Metaheuristic methods employ guided search techniques in which some random processes are involved; because of this randomness, it cannot be formally proven that the value obtained is the optimal solution to the problem. Among these methods, the Grenade Explosion Method (GEM), Genetic Algorithms (GA) and their variants, Differential Evolution (DE) and its variants, the Simulated Annealing (SA) algorithm, and the Tabu Search (TS) algorithm can be mentioned. In most of these algorithms, one or more parameters must first be adjusted; for example, GA needs a crossover probability, a mutation probability, a selection operator, etc. The authors in [15] implemented the Dual Population Genetic Algorithm (DPGA) on a parallel architecture, obtaining average speed-up values of 1.64x using both 16 and 32 processors.
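To put the quoted DPGA figures in perspective, parallel efficiency is simply speed-up divided by the number of processors. A small sketch of that arithmetic, using the 1.64x speed-up reported in [15] (the helper function name is illustrative):

```python
def parallel_efficiency(speedup, num_procs):
    # Parallel efficiency E = S / p, where S is speed-up and p is the
    # number of processors; E = 1.0 (100%) is ideal linear scaling.
    return speedup / num_procs

# DPGA figures quoted in the summary: 1.64x speed-up on 16 and 32 processors.
for p in (16, 32):
    print(f"{p} processors: efficiency = {parallel_efficiency(1.64, p):.4f}")
```

A fixed 1.64x speed-up thus corresponds to roughly 10% efficiency on 16 processors and 5% on 32, which illustrates why improving parallel efficiency is the focus of the proposed TLBO parallelization.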