Abstract

This paper presents a first solution to the previously unsolved problem of symbolically scheduling a given loop nest with uniform data dependences using inner loop parallelization, in particular, the locally parallel, globally sequential (LPGS) mapping technique. This technique is needed for loop program specifications whose iterations shall be scheduled on a processor array whose size is unknown at compile time, while keeping the local memory consumption independent of the problem size of the mapped loop nest. We show that it is possible to derive such parameterized LPGS schedules statically by proposing a mixed compile-/runtime approach: At compile time, we first determine the set of all schedule candidates, each latency-optimal for a different scanning order of the loop nest. Then, we devise an exact parameterized formula for determining the latency of the resulting symbolic schedules, thus making each schedule fully predictable. At runtime, once the size of the processor array becomes known, a simple prolog selects the overall latency-optimal schedule, which is then dynamically activated and executed on the processor array. Hence, our approach avoids any further runtime optimization and expensive re-compilations while achieving the same results as computing an optimal static schedule for each possible combination of array and problem size.
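To illustrate the runtime part of the proposed mixed compile-/runtime approach, the following C sketch shows how such a prolog might evaluate the pre-computed parameterized latency formulas once the array size becomes known and pick the latency-optimal candidate. All identifiers and the formula interface here are hypothetical illustrations, not the paper's actual implementation; the concrete latency expressions are derived per scanning order at compile time.

```c
#include <stddef.h>
#include <limits.h>

/* Hypothetical shape of one compile-time schedule candidate: its latency is
 * assumed to be a closed-form expression in the problem size n and the
 * (runtime-known) processor array dimensions p1 x p2, abstracted here as a
 * callback evaluating that parameterized formula. */
typedef struct {
    int id;                                      /* candidate index          */
    long (*latency)(long n, long p1, long p2);   /* parameterized latency    */
} schedule_candidate;

/* Runtime prolog (sketch): once the array size (p1, p2) is known, evaluate
 * every candidate's symbolic latency formula and return the latency-optimal
 * one, which would then be activated on the processor array. */
static const schedule_candidate *
select_schedule(const schedule_candidate *cands, size_t num,
                long n, long p1, long p2)
{
    const schedule_candidate *best = NULL;
    long best_latency = LONG_MAX;

    for (size_t i = 0; i < num; ++i) {
        long l = cands[i].latency(n, p1, p2);
        if (l < best_latency) {
            best_latency = l;
            best = &cands[i];
        }
    }
    return best;
}
```

Because each candidate's latency is given by an exact closed-form formula, this selection is a constant-time table scan over the candidate set and involves no re-compilation or further optimization at runtime.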
