Abstract
Applications typically have several potential sources of parallelism, and in choosing a particular parallelization, the programmer must balance the benefits of each source of parallelism with the corresponding overhead. The trade-offs are often difficult to analyze, as they may depend on the hardware architecture, software environment, input data, and properties of the algorithm. An example of this dilemma occurs in a wide range of problems that involve processing trees, wherein processors can either be assigned to separate subtrees or cooperate to parallelize the work performed at individual tree nodes. We explore the complexity of the trade-offs involved in this decision by considering alternative parallelizations of combinatorial search, examining the factors that determine the best-performing implementation for this important class of problems. Using subgraph isomorphism as a representative search problem, we show how the density of the solution space, the number of solutions desired, the number of available processors, and the underlying architecture all affect the choice of an efficient parallelization. Our experiments, which span seven different shared-memory multiprocessors and a wide range of input graphs, indicate that relative performance depends on each of these factors. On some machines and for some inputs, a sequential depth-first search of the solution space, applying simple loop-level parallelism at each node in the search tree, performs best. On other machines or other inputs, parallel tree search performs best. In still other cases, a hybrid solution, containing both parallel tree search and loop parallelism, works best. We present a quantitative analysis that explains these results, and validate the analysis with experimental data culled from thousands of program executions. From these experiences we conclude that there is no one "best" parallelization that suffices over a range of machines, inputs, and precise problem specifications. As a corollary, we provide quantitative evidence that programming environments and languages should not focus exclusively on flat data parallelism, since nested parallelism or hybrid forms of parallelism may be required for an efficient implementation of some applications.
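To make the trade-off concrete, the following is a minimal OpenMP sketch (not taken from the paper) that contrasts, on a synthetic search tree, the three strategies the abstract compares: loop-level parallelism inside each node, parallel exploration of subtrees via tasks, and a hybrid of the two. All names (node_work, search) and parameters (tree depth, fan-out, candidate count) are illustrative assumptions, not the authors' benchmark code.

// parallel_search_sketch.cpp -- compile with: g++ -O2 -fopenmp parallel_search_sketch.cpp
// Illustrative only: the search tree is synthetic, and node_work() stands in
// for the per-node candidate filtering of a real subgraph isomorphism search.

#include <omp.h>
#include <cstdio>

// Per-node work: evaluate a list of candidate extensions.
// With loop-level parallelism enabled, this loop is split across threads.
static long node_work(int depth, int n_candidates, bool loop_parallel) {
    long sum = 0;
    #pragma omp parallel for reduction(+:sum) if(loop_parallel)
    for (int c = 0; c < n_candidates; ++c)
        sum += (long)(depth + 1) * c;   // stand-in for candidate filtering
    return sum;
}

// Depth-first search over a synthetic tree. With tree parallelism enabled,
// each child subtree becomes an OpenMP task that an idle thread may execute.
static long search(int depth, int max_depth, int fanout,
                   bool tree_parallel, bool loop_parallel) {
    long total = node_work(depth, 1 << 12, loop_parallel);
    if (depth == max_depth) return total;

    long child_sum = 0;
    for (int k = 0; k < fanout; ++k) {
        #pragma omp task shared(child_sum) if(tree_parallel)
        {
            long s = search(depth + 1, max_depth, fanout,
                            tree_parallel, loop_parallel);
            #pragma omp atomic
            child_sum += s;
        }
    }
    #pragma omp taskwait               // wait for all child subtrees
    return total + child_sum;
}

int main() {
    omp_set_max_active_levels(2);      // let the hybrid nest loop teams inside tasks

    struct Config { const char *name; bool tree, loop; };
    const Config configs[] = {
        {"loop-only", false, true },   // sequential tree walk, parallel node loops
        {"tree-only", true,  false},   // parallel subtrees, sequential node loops
        {"hybrid",    true,  true },   // both levels of parallelism
    };

    for (const Config &c : configs) {
        double t0 = omp_get_wtime();
        long result = 0;
        if (c.tree) {
            // Tree (or hybrid) parallelism: one thread starts the search,
            // the rest of the team picks up subtree tasks.
            #pragma omp parallel
            #pragma omp single
            result = search(0, 6, 4, c.tree, c.loop);
        } else {
            // Loop-only: the tree is walked sequentially; parallel regions
            // are opened only inside node_work().
            result = search(0, 6, 4, c.tree, c.loop);
        }
        std::printf("%-10s result=%ld  time=%.3fs\n",
                    c.name, result, omp_get_wtime() - t0);
    }
    return 0;
}

Under this sketch's assumptions, the loop-only variant pays a fork/join cost at every node, the tree-only variant needs enough independent subtrees to keep all processors busy, and the hybrid pays both overheads but can exploit parallelism even near the root. These are the same factors the paper's quantitative analysis weighs against machine and input characteristics.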