Abstract

Combinatorial optimization problems are often NP-hard and too complex to be solved within a reasonable time frame by exact methods. Heuristic methods, which offer no convergence guarantee, can obtain satisfactory solutions to combinatorial optimization problems. However, for large problem instances they are not only very time-consuming on Central Processing Units (CPUs) but also make it difficult to obtain a well-optimized solution. Parallelism is therefore a promising technique for reducing the running time as well as improving solution quality. Driven by the market demand for high-definition, real-time 3D graphics, Graphics Processing Units (GPUs) have evolved to support general-purpose computing: they have become many-core, multithreaded, highly parallel processors with high-bandwidth memory and tremendous computational power. Compared to CPU threads, GPU threads are very lightweight, which means that switching context between two threads is not a costly operation. GPU cores therefore provide a low-cost opportunity to parallelize metaheuristics for combinatorial optimization problems. The major issues for GPU parallelization are: (1) efficient distribution of data processing between the GPU and the CPU, (2) efficient parallelism control for thread synchronization, and (3) efficient memory management, covering the optimization of data transfers between the different memories and the capacity constraints of those memories. Our proposed work aims to design a GPU framework that deals efficiently with these parallelization issues while parallelizing heuristic methods such as hill climbing and simulated annealing for solving large optimization problems.
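To make the CPU/GPU division of work described above concrete, the following is a minimal, illustrative CUDA sketch, not the framework proposed in the paper: a hill-climbing step on a toy 0/1 knapsack in which each GPU thread scores one 1-flip neighbour of the current solution, while the CPU selects and applies the best move. All names and parameters (N_ITEMS, CAPACITY, evaluate_neighbors, the toy data) are assumptions made for this example.

```cuda
// Illustrative sketch only: parallel neighbourhood evaluation for hill climbing.
// One GPU thread scores one 1-flip neighbour; the CPU picks the best move.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

constexpr int   N_ITEMS  = 1024;                  // hypothetical problem size
constexpr float CAPACITY = 0.25f * N_ITEMS;       // arbitrary knapsack capacity

__global__ void evaluate_neighbors(const int *x, const float *value,
                                   const float *weight, float *score,
                                   int n, float capacity)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float total_value = 0.0f, total_weight = 0.0f;
    for (int j = 0; j < n; ++j) {
        int bit = (j == i) ? 1 - x[j] : x[j];     // neighbour i flips item i only
        total_value  += bit * value[j];
        total_weight += bit * weight[j];
    }
    // Penalise infeasible neighbours so the CPU-side selection skips them.
    score[i] = (total_weight <= capacity) ? total_value : -1.0f;
}

int main()
{
    std::vector<int>   x(N_ITEMS, 0);             // start from the empty knapsack
    std::vector<float> value(N_ITEMS), weight(N_ITEMS), score(N_ITEMS);
    for (int i = 0; i < N_ITEMS; ++i) {           // deterministic toy data
        value[i]  = 1.0f + (i % 7);
        weight[i] = 1.0f + (i % 5) * 0.5f;
    }

    int *d_x; float *d_value, *d_weight, *d_score;
    cudaMalloc(&d_x,      N_ITEMS * sizeof(int));
    cudaMalloc(&d_value,  N_ITEMS * sizeof(float));
    cudaMalloc(&d_weight, N_ITEMS * sizeof(float));
    cudaMalloc(&d_score,  N_ITEMS * sizeof(float));
    cudaMemcpy(d_value,  value.data(),  N_ITEMS * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_weight, weight.data(), N_ITEMS * sizeof(float), cudaMemcpyHostToDevice);

    float best = 0.0f;
    for (int iter = 0; iter < 200; ++iter) {
        cudaMemcpy(d_x, x.data(), N_ITEMS * sizeof(int), cudaMemcpyHostToDevice);
        evaluate_neighbors<<<(N_ITEMS + 255) / 256, 256>>>(d_x, d_value, d_weight,
                                                           d_score, N_ITEMS, CAPACITY);
        cudaMemcpy(score.data(), d_score, N_ITEMS * sizeof(float), cudaMemcpyDeviceToHost);

        int move = -1;                            // CPU picks the best improving flip
        for (int i = 0; i < N_ITEMS; ++i)
            if (score[i] > best) { best = score[i]; move = i; }
        if (move < 0) break;                      // local optimum reached
        x[move] = 1 - x[move];
    }
    printf("local optimum value: %.1f\n", best);

    cudaFree(d_x); cudaFree(d_value); cudaFree(d_weight); cudaFree(d_score);
    return 0;
}
```

This synchronous master-worker pattern keeps the move selection on the CPU and the costly neighbour evaluation on the GPU, and the per-iteration solution transfers illustrate why the data-distribution and memory-management issues listed above dominate performance; an accept/reject rule with a temperature schedule would turn the same skeleton into simulated annealing.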
