Many graph-based algorithms in high-performance computing (HPC) rely on approximate solutions because the exact algorithms are computationally expensive or inherently serial. Neural acceleration, i.e., speeding up approximate computations with artificial neural networks, is relatively new and has not yet focused on graph-based HPC algorithms. In this paper, we propose a starting point for applying neural acceleration to graph-based HPC algorithms, building on an understanding of the connectivity computational pattern together with recursive neural networks and graph neural networks. We demonstrate these techniques on the utility functions of sparse matrix ordering and fill-in calculation (i.e., counting zero elements that become nonzero during factorization). Sparse matrix ordering is commonly used for load balancing, improving memory reuse, and reducing the computational and memory costs of direct sparse linear solvers. These utility functions are ideal for demonstration because they comprise a number of different graph-based subproblems, illustrating the usefulness of our method across a wide range of problems. We show that we can accurately approximate the best ordering and the nonzero count of the sparse factorization while speeding up the calculation by as much as 30.3× over the traditional serial method.
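To make the ordering and fill-in terminology concrete, the following is a minimal sketch, not the paper's method: it uses SciPy's SuperLU interface and its built-in column-ordering heuristics to count the nonzeros created during a sparse LU factorization. The matrix, its size, and the choice of `splu` with `permc_spec` are illustrative assumptions.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Illustrative sparse matrix: random sparsity pattern made symmetric and
# strongly diagonally dominant so the LU factorization is well defined.
n = 200
A = sp.random(n, n, density=0.02, format="csc", random_state=0)
A = (A + A.T + n * sp.identity(n)).tocsc()

def factor_nnz(matrix, permc_spec):
    """Nonzeros in the LU factors under a given column-ordering heuristic."""
    lu = splu(matrix, permc_spec=permc_spec)
    return lu.L.nnz + lu.U.nnz

# Compare the fill-in (extra nonzeros created by factorization) produced by
# different ordering heuristics shipped with SciPy.
for spec in ("NATURAL", "MMD_AT_PLUS_A", "COLAMD"):
    extra = factor_nnz(A, spec) - A.nnz
    print(f"{spec:>14}: fill-in = {extra} new nonzeros")
```

A neural surrogate in the spirit described above would learn to predict quantities such as the fill-in count directly from the graph of the matrix, avoiding the factorization itself.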