Dynamic programming is a technique widely used to solve combinatorial optimization problems. A well-known example is the minimum cost parenthesizing problem (MPP), often used to represent the class of non-serial polyadic dynamic-programming problems, which are characterized by strong dependencies between subproblems. This paper outlines a coarse-grained multicomputer parallel solution to the MPP based on the four-splitting technique: a partitioning technique that subdivides the dependency graph into subgraphs (or blocks) of variable size and splits large blocks into four subblocks to avoid the communication overhead incurred by a similar partitioning technique in the literature. Our solution evaluates a block by computing and communicating each of its subblocks, reducing the processors' latency time, which accounts for most of the global communication time. It requires O(n^3/p) execution time with O(k√p) communication rounds, where n is the input data size, p is the number of processors, and k is the number of times the block size is subdivided.
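For context, the sketch below shows the sequential MPP recurrence (minimum cost parenthesizing of a matrix chain) that underlies the dependency graph described above; it is a minimal illustration of the O(n^3) sequential computation the paper distributes over p processors, not the paper's block-partitioned parallel algorithm. The function name mpp_cost and the example dimensions are illustrative assumptions, not taken from the paper.

```python
def mpp_cost(dims):
    """dims[i-1] x dims[i] is the size of the i-th matrix in the chain (1-indexed)."""
    n = len(dims) - 1                      # number of matrices in the chain
    INF = float("inf")
    cost = [[0] * (n + 1) for _ in range(n + 1)]

    # Subproblems are evaluated by increasing chain length; each entry
    # cost[i][j] depends on entries in the same row and column, which is
    # the strong (non-serial polyadic) dependency the abstract refers to.
    for length in range(2, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            cost[i][j] = INF
            for k in range(i, j):
                c = cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                if c < cost[i][j]:
                    cost[i][j] = c
    return cost[1][n]

# Example: four matrices of sizes 10x20, 20x5, 5x30, and 30x8.
print(mpp_cost([10, 20, 5, 30, 8]))
```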