Abstract

The emergence of GPU-CPU heterogeneous architectures has led to a fundamental paradigm shift in parallel programming. Accelerating Genetic Algorithms (GAs) on these architectures has received significant attention from both practitioners and researchers ever since GPUs emerged. Over the past decade, much progress has been made in migrating parallel GAs from the CPU to the GPU (Graphics Processing Unit), bringing this research field firmly into the world of High Performance Computing (HPC) and demonstrating great potential for the many research disciplines and industries that can benefit from GPU-accelerated stochastic global search over large, complex search spaces.

Designing a parallel algorithm for a GPU is quite different from designing one for a CPU. On a CPU, we typically consider how to distribute data across tens of threads, whereas on a GPU hundreds of thousands of threads run simultaneously. We must therefore rethink the design approaches and implementation strategies of parallel algorithms in order to fully exploit the computing power of GPUs when accelerating GAs. The intention of this paper is to give an overview of selected works on parallel GAs designed for the GPU architecture.

In this survey, we first reexamine the concept of granularity of parallelism for GAs on the GPU, discuss how data layout affects kernel design and memory bandwidth, and explain how to organize threads into grids and blocks to expose sufficient parallelism to the GPU. A comprehensive overview of selected works since 2010 then follows, focusing mainly on the GPU-architecture perspective: how to accelerate GAs with GPU computing. Performance comparisons are not covered in this review, because most of these works were conducted on early GPU cards that are now out of date.

We conclude with suggestions for future research, in particular on how to build efficient implementations of parallel GAs for hyper-scale computing. Many industrial and academic disciplines will benefit from GPU-accelerated parallel GAs; one of the most promising areas is evolving better deep neural networks.
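
To make the thread-organization and data-layout points concrete, the following is a minimal CUDA sketch of a fitness-evaluation kernel that assigns one thread per individual and stores the population gene-major (structure-of-arrays) so that adjacent threads read adjacent addresses. It is illustrative only and not taken from any of the surveyed works; the kernel name, the OneMax-style fitness, and the launch parameters are hypothetical choices.

#include <cuda_runtime.h>

// One thread per individual; population stored gene-major (structure-of-arrays),
// so the threads of a warp read consecutive addresses for each gene (coalesced access).
__global__ void evaluateFitness(const float* population, float* fitness,
                                int numIndividuals, int chromosomeLength)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;  // global thread id = individual index
    if (idx >= numIndividuals) return;

    float sum = 0.0f;
    for (int g = 0; g < chromosomeLength; ++g)
        sum += population[(size_t)g * numIndividuals + idx];  // coalesced read
    fitness[idx] = sum;  // OneMax-style fitness: sum of gene values
}

int main()
{
    const int numIndividuals   = 1 << 16;  // 65,536 individuals
    const int chromosomeLength = 128;

    float *dPop = nullptr, *dFit = nullptr;
    cudaMalloc(&dPop, (size_t)numIndividuals * chromosomeLength * sizeof(float));
    cudaMalloc(&dFit, numIndividuals * sizeof(float));
    cudaMemset(dPop, 0, (size_t)numIndividuals * chromosomeLength * sizeof(float));
    // In a real GA the population would be initialized on the device, e.g. with cuRAND.

    // Expose enough parallelism: one thread per individual, 256 threads per block.
    int threadsPerBlock = 256;
    int blocks = (numIndividuals + threadsPerBlock - 1) / threadsPerBlock;
    evaluateFitness<<<blocks, threadsPerBlock>>>(dPop, dFit, numIndividuals, chromosomeLength);
    cudaDeviceSynchronize();

    cudaFree(dPop);
    cudaFree(dFit);
    return 0;
}

A finer-grained alternative would assign one block per individual and let the threads of the block reduce over the genes in shared memory; which granularity is preferable depends on the chromosome length and population size, which is part of the design space this survey revisits.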
