Abstract

This chapter focuses on the parallelization of loops. Parallelization introduces a new concern for the programmer: equitable distribution of the algorithm's work across all threads. This distribution, called load balance, aims to spread the work evenly so that all threads take roughly the same amount of time to execute and no thread is stuck with the bulk of the computation. Load balancing is an important concern: for a given problem, proper load balancing can make the difference between modest and maximal performance gains, and balancing the load to maximize performance is part of practical parallel programming. The chapter describes load balancing in general terms and illustrates example problem situations and their solutions. The examples of load imbalance and their remedies are idealized in that system overhead is not factored into execution performance. Load imbalance steals performance; loop scheduling techniques counteract the imbalance and restore that performance. Scheduling describes how the individual iterations are grouped for execution by a limited number of threads.
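The effect of scheduling on load balance can be illustrated with a small simulation. The sketch below is not from the chapter; it is a hypothetical example that compares a static schedule (contiguous blocks of iterations per thread) with a dynamic one (each thread grabs the next iteration as soon as it is free) on a triangular workload, where iteration i costs i units. The quantity reported is the makespan: the load of the slowest thread, which bounds the loop's execution time.

```python
# Hypothetical illustration of loop scheduling and load balance.
# Iteration costs are given as a list; threads are simulated, and
# system overhead is ignored, matching the chapter's idealization.

def static_schedule(costs, n_threads):
    # One contiguous block of iterations per thread (static scheduling).
    chunk = (len(costs) + n_threads - 1) // n_threads
    loads = [sum(costs[i * chunk:(i + 1) * chunk]) for i in range(n_threads)]
    return max(loads)  # makespan: the slowest thread bounds total time

def dynamic_schedule(costs, n_threads):
    # Iterations are handed out one at a time to whichever thread
    # finishes first (dynamic scheduling, chunk size 1). With ideal
    # overhead-free threads, that is the thread with the minimum load.
    loads = [0] * n_threads
    for c in costs:
        loads[loads.index(min(loads))] += c
    return max(loads)

costs = list(range(1, 17))          # triangular workload: 136 units total
print(static_schedule(costs, 4))    # last block gets 13+14+15+16 = 58
print(dynamic_schedule(costs, 4))   # 40, much closer to the ideal 136/4 = 34
```

Under static scheduling the thread holding the final block does 58 of the 136 units of work, while dynamic scheduling caps the slowest thread at 40, close to the ideal even split of 34, showing how scheduling counteracts the imbalance.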


