Abstract

Hierarchical matrices are scalable matrix representations particularly suited to the case where the matrix entries are defined by a smooth kernel function evaluated between pairs of points. In this paper, we present a new scheme to alleviate the computational bottlenecks present in many hierarchical matrix methods. For general kernel functions, a popular approach to construct hierarchical matrices is through interpolation, due to its efficiency compared to computationally expensive algebraic techniques. However, interpolation-based methods often lead to larger ranks, and do not scale well to higher dimensions. We propose a new data-driven method to resolve these issues. The new method accomplishes the rank reduction by using a surrogate for the global distribution of points. The surrogate is generated using hierarchical data-driven sampling. As a result of the lower rank, the construction cost, memory requirements, and matrix-vector product costs decrease. Using state-of-the-art dimension-independent sampling, the new method makes it possible to tackle problems in higher dimensions. We also discuss an on-the-fly variation of hierarchical matrix construction and matrix-vector products that is able to reduce memory usage by an order of magnitude. This is accomplished by postponing the generation of certain intermediate matrices until they are used, generating them just in time. We provide results demonstrating the effectiveness of our improvements, both individually and in conjunction with each other. For a problem involving 320,000 points in 3D, our data-driven approach reduces the memory usage from 58.75 GiB using state-of-the-art methods (762.9 GiB if stored dense) to 18.60 GiB. In combination with our on-the-fly approach, we are able to reduce the total memory usage to 543.74 MiB.
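To make the setting concrete, the following minimal sketch shows the dense baseline that hierarchical matrices are designed to avoid: a kernel matrix whose entries are a smooth kernel evaluated between pairs of points, stored and applied at O(n^2) cost. The kernel choice (Gaussian), the point distribution, and all variable names are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Dense kernel matrix assembly and matrix-vector product (the baseline).
# A hierarchical matrix representation avoids ever forming K explicitly.

rng = np.random.default_rng(0)
n, d = 2000, 3                      # number of points and dimension (assumed)
points = rng.random((n, d))         # points in the unit cube (assumed)

def kernel(x, y):
    """Smooth kernel evaluated between all pairs of points (Gaussian here)."""
    diff = x[:, None, :] - y[None, :, :]
    return np.exp(-np.sum(diff**2, axis=-1))

K = kernel(points, points)          # O(n^2) storage -- the memory bottleneck
v = rng.random(n)
w = K @ v                           # O(n^2) dense matrix-vector product
```

For n = 320,000 points in 3D, as in the paper's largest experiment, this dense matrix would require roughly 762.9 GiB in double precision, which is the figure the abstract quotes as the dense-storage baseline.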
