Abstract

Numerous optimization algorithms based on heuristic techniques have been proposed in recent years. Most of them are inspired by natural phenomena and require the correct tuning of some algorithm-specific parameters. Heuristic algorithms allow problems to be solved more quickly than deterministic methods. The computational time required to obtain the optimum (or near-optimum) value of a cost function is a critical aspect of scientific applications in countless fields of knowledge. We therefore propose efficient parallel versions of the Teaching-Learning-Based Optimization (TLBO) algorithm, which is efficient and free of algorithm-specific parameters to be tuned. The parallel proposals were designed with two levels of parallelization, one for shared-memory platforms and the other for distributed-memory platforms, obtaining good parallel performance on both types of parallel architecture as well as on heterogeneous-memory parallel platforms.

Highlights

  • The purpose of optimization algorithms is to find the optimal value of a particular cost function. Depending on the application, cost functions can be highly complex, may need to be optimized repeatedly, and may involve different numbers of parameters

  • The parallel platform used was composed of HP ProLiant SL390 G7 nodes, each equipped with two Intel

  • It is worth noting that the parallel Teaching-Learning-Based Optimization (TLBO) proposal presented in [13] obtains efficiencies of only between 20% and 30% when using 16 and 32 processes, and that other parallel proposals applied to state-of-the-art algorithms, such as the Dual Population Genetic Algorithm (DPGA) in [15], report average speed-ups of only 1.64x


Summary

Introduction

The purpose of optimization algorithms is to find the optimal value of a particular cost function. Metaheuristic methods employ guided search techniques in which some random processes are involved in solving the problem; consequently, it cannot be formally proven that the value obtained is the optimal solution. Among these methods, the Grenade Explosion Method (GEM), Genetic Algorithms (GA) and their variants, Differential Evolution (DE) and its variants, the Simulated Annealing (SA) algorithm and the Tabu Search (TS) algorithm can be mentioned. In most of these algorithms, one or more parameters must first be adjusted; for example, GA needs a crossover probability, a mutation probability, a selection operator, etc. The authors in [15] implemented the Dual Population Genetic Algorithm (DPGA) on a parallel architecture, obtaining average speed-up values of 1.64x using both 16 and 32 processors.
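For reference, the efficiency figures quoted in the highlights follow from the standard definitions of speed-up and parallel efficiency (our addition, not the paper's notation):

$$S(p) = \frac{T_1}{T_p}, \qquad E(p) = \frac{S(p)}{p},$$

where $T_p$ is the execution time on $p$ processors. Under these definitions, the DPGA speed-up of 1.64x corresponds to efficiencies of roughly $1.64/16 \approx 10\%$ on 16 processors and $1.64/32 \approx 5\%$ on 32.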

The TLBO Algorithm

[Pseudocode listing of the TLBO teacher and learner phases not reproduced here.]
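The listing itself is not recoverable, but the TLBO update rules are standard. Below is a minimal C sketch of one TLBO iteration (teacher phase followed by learner phase) minimizing a simple sphere function; all names (`tlbo_iteration`, `sphere`, population sizes) are ours for illustration, not the paper's:

```c
/* Minimal TLBO sketch: teacher and learner phases for minimizing a
 * cost function over NP candidate solutions of dimension D. */
#include <stdio.h>
#include <stdlib.h>

#define NP 20   /* population size (learners) */
#define D   4   /* number of design variables */

static double frand(void) { return (double)rand() / RAND_MAX; }

/* Example cost function: sphere, minimum 0 at the origin. */
static double sphere(const double *x) {
    double s = 0.0;
    for (int d = 0; d < D; d++) s += x[d] * x[d];
    return s;
}

static void tlbo_iteration(double pop[NP][D], double fit[NP]) {
    /* Teacher phase: move every learner toward the best solution. */
    int best = 0;
    double mean[D] = {0};
    for (int i = 0; i < NP; i++) {
        if (fit[i] < fit[best]) best = i;
        for (int d = 0; d < D; d++) mean[d] += pop[i][d] / NP;
    }
    for (int i = 0; i < NP; i++) {
        double cand[D];
        int tf = 1 + rand() % 2;              /* teaching factor: 1 or 2 */
        for (int d = 0; d < D; d++)
            cand[d] = pop[i][d] + frand() * (pop[best][d] - tf * mean[d]);
        double fc = sphere(cand);
        if (fc < fit[i]) {                    /* greedy acceptance */
            for (int d = 0; d < D; d++) pop[i][d] = cand[d];
            fit[i] = fc;
        }
    }
    /* Learner phase: each learner interacts with a random partner. */
    for (int i = 0; i < NP; i++) {
        int j = rand() % NP;
        if (j == i) j = (j + 1) % NP;
        double cand[D];
        for (int d = 0; d < D; d++) {
            double diff = (fit[i] < fit[j]) ? pop[i][d] - pop[j][d]
                                            : pop[j][d] - pop[i][d];
            cand[d] = pop[i][d] + frand() * diff;
        }
        double fc = sphere(cand);
        if (fc < fit[i]) {
            for (int d = 0; d < D; d++) pop[i][d] = cand[d];
            fit[i] = fc;
        }
    }
}

int main(void) {
    double pop[NP][D], fit[NP];
    srand(1234);                              /* fixed seed, reproducible */
    for (int i = 0; i < NP; i++) {            /* random init in [-5, 5] */
        for (int d = 0; d < D; d++) pop[i][d] = -5.0 + 10.0 * frand();
        fit[i] = sphere(pop[i]);
    }
    for (int it = 0; it < 100; it++) tlbo_iteration(pop, fit);
    double best = fit[0];
    for (int i = 1; i < NP; i++) if (fit[i] < best) best = fit[i];
    printf("best cost after 100 iterations: %g\n", best);
    return 0;
}
```

Note that, unlike GA, neither phase introduces a tunable parameter: the teaching factor and the step sizes are drawn at random, which is what makes TLBO parameter-free.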
Parallel Approaches

[Pseudocode listing of the parallel proposal, which evolves subpopulations, computes per-subpopulation means, replaces solutions, and finally collects all solutions to obtain the best solution and statistical data, not reproduced here.]
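Based on the two levels of parallelization described in the abstract, a plausible structure is a hybrid MPI+OpenMP design: MPI processes each evolve a subpopulation (distributed-memory level) while OpenMP threads update learners within a subpopulation (shared-memory level). The sketch below shows only this structure; the TLBO update body is elided and all names are our assumptions, not the paper's code:

```c
/* Structural sketch of a two-level parallel TLBO.
 * Build with: mpicc -fopenmp parallel_tlbo.c */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define D 4            /* design variables per solution */
#define SUB_NP 20      /* learners per MPI process */
#define ITERS 100      /* TLBO iterations per subpopulation */

static double cost(const double *x) {        /* placeholder cost function */
    double s = 0.0;
    for (int d = 0; d < D; d++) s += x[d] * x[d];
    return s;
}

static void evolve_subpopulation(double pop[SUB_NP][D], double fit[SUB_NP]) {
    for (int it = 0; it < ITERS; it++) {
        /* Shared-memory level: once the teacher and mean are fixed,
         * the loop over learners is independent, so it parallelizes
         * across OpenMP threads. (TLBO update rules elided here.) */
        #pragma omp parallel for
        for (int i = 0; i < SUB_NP; i++) {
            /* ... teacher-phase and learner-phase updates of pop[i] ... */
            fit[i] = cost(pop[i]);
        }
    }
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double pop[SUB_NP][D], fit[SUB_NP];
    srand(1234 + rank);                      /* distinct stream per process */
    for (int i = 0; i < SUB_NP; i++) {
        for (int d = 0; d < D; d++)
            pop[i][d] = -5.0 + 10.0 * ((double)rand() / RAND_MAX);
        fit[i] = cost(pop[i]);
    }

    /* Distributed-memory level: each process evolves its own
     * subpopulation independently. */
    evolve_subpopulation(pop, fit);

    /* Collect all the solutions and obtain the best one. */
    double local_best = fit[0];
    for (int i = 1; i < SUB_NP; i++)
        if (fit[i] < local_best) local_best = fit[i];
    double global_best;
    MPI_Reduce(&local_best, &global_best, 1, MPI_DOUBLE, MPI_MIN,
               0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("global best cost over %d subpopulations: %g\n",
               size, global_best);
    MPI_Finalize();
    return 0;
}
```

The single `MPI_Reduce` at the end matches the "collect all the solutions and obtain best solution" step of the listing; a real implementation would likely also exchange solutions between subpopulations periodically, as the "replace" step of the listing suggests.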
Numerical Results
Conclusions
