Abstract

Iterative methods are among the most efficient numerical algorithms for solving the large-scale sparse linear systems that arise in scientific computing. The parallel scalability of an iterative method can be measured by its communication-to-computation (CtC) ratio during the iterative process. The CtC is high for many iterative methods; as a result, coarse-grain parallelism is needed to obtain the expected scalability. However, fine-grain parallelism is required as the architectural complexity of multi-/many-core processors increases. In this paper, we introduce the concept of asymptotic size, defined as the lower bound of the problem size that satisfies the speedup condition, i.e., a parallel speedup greater than 1. We expect the asymptotic size to describe the CtC and the fine-grain parallelism capability of iterative methods. Moreover, a theoretical prediction formula for the asymptotic size is derived from the following parameters: the sparsity and communication pattern of the matrix, the communication parameters of the machine, and the combination of basic operations of the iterative method. Using the asymptotic size, the CtC is analyzed for three popular iterative methods, namely Jacobi, CG, and BiCGSTAB, on an MPP machine with 128 dual quad-core nodes. Performance results are given for both the MPI-only and the hybrid MPI/OpenMP programming models, showing the usefulness of the asymptotic size for describing the CtC of iterative methods. For the MPI-only case, we also compare the predicted and experimental results, which validate the asymptotic-size formula. Finally, based on the analysis conclusions, future research topics for improving the scalability of iterative methods on more powerful supercomputers are also discussed.
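
As a minimal sketch of the definition given above (assumed notation, not the paper's exact formulation), the asymptotic size can be written in terms of the speedup condition: with problem size $n$, process count $p$, serial time $T_s(n)$, and parallel time $T_p(n,p)$ including communication cost,
\[
S(n,p) = \frac{T_s(n)}{T_p(n,p)} > 1,
\qquad
n_{\mathrm{asym}}(p) = \min \{\, n : S(n,p) > 1 \,\}.
\]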
