Abstract

A very important component of a parallel system that generates irregular computational patterns is its work distribution strategy. Scheduling strategies for such systems must be smart enough to dynamically balance the workload without incurring high overhead. Logic programs running on parallel logic programming systems are examples of irregular parallel computations. The two main forms of parallelism exploited by parallel logic programming systems are and-parallelism, which arises when several literals in the body of a clause can execute in parallel, and or-parallelism, which arises when several alternative clauses in the program database can be selected in parallel. In this work we show that scheduling strategies for distributing and-work and or-work in parallel logic programming systems must combine information obtained at compile time with runtime information whenever possible, in order to obtain better performance. Using compile-time information has two advantages over currently implemented systems that rely on runtime information alone: (1) the user does not need to adjust parameters in order to estimate the sizes of and-work and or-work for each program; (2) the schedulers can use more accurate estimates of the sizes of and-work and or-work to make better decisions at runtime. We performed our experiments with Andorra-I, a parallel logic programming system that exploits both determinate and-parallelism and or-parallelism. To obtain compile-time granularity information we used the ORCA tool. Our benchmark set includes programs containing and-parallelism only, or-parallelism only, and a combination of both and- and or-parallelism. Our results show that, when well designed, scheduling strategies can actually benefit from compile-time granularity information.
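The two forms of parallelism described above can be illustrated with a minimal sketch. This is not Andorra-I code: it models body literals and alternative clauses as hypothetical Python callables over a binding environment, using thread pools only to convey the idea that and-parallelism runs all goals of a conjunction concurrently (all must succeed), while or-parallelism explores alternative clauses concurrently (one success suffices).

```python
# Illustrative sketch only (assumed representation, not the Andorra-I engine):
# a "goal" or "clause" is a callable taking an environment and returning
# True (success) or False (failure).
from concurrent.futures import ThreadPoolExecutor

def and_parallel(goals, env):
    # and-parallelism: the literals in a clause body are evaluated in
    # parallel; the conjunction succeeds only if every literal succeeds.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda goal: goal(env), goals))
    return all(results)

def or_parallel(clauses, env):
    # or-parallelism: the alternative clauses matching a goal are tried in
    # parallel; the goal succeeds if any alternative succeeds.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda clause: clause(env), clauses))
    return any(results)
```

A real scheduler must additionally decide which of these parallel tasks are worth distributing to other workers, which is where the compile-time granularity estimates discussed in this work come in.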
