With the increasing fidelity and resolution enabled by high-performance computing systems, simulation-based scientific discovery can model and understand microscopic physical phenomena at a level that was not possible in the past. A grand challenge facing the HPC community is how to manage the large amounts of analysis data generated by simulations. In-memory computing, among other approaches, is recognized as a viable path forward and has experienced tremendous success in the past decade. Nevertheless, there has been no complete study and understanding of in-memory computing as a whole on HPC systems. Given the widening disparity between compute capability and HPC storage I/O, it is urgent for the HPC community to assess the state of in-memory computing and understand its challenges and opportunities. This paper presents a comprehensive study of in-memory computing with regard to its software evolution, performance, usability, robustness, and portability. In particular, we conduct an in-depth analysis of the evolution of in-memory computing based on more than 3,000 commits, and use realistic workflows for two scientific workloads, i.e., LAMMPS and Laplace, to quantitatively assess state-of-the-art in-memory computing libraries, including DataSpaces, DIMES, Flexpath, Decaf, and SENSEI, on two leading supercomputers, Titan and Cori. Our studies not only illustrate performance and scalability, but also reveal key aspects that are of interest to library developers and users, including usability, robustness, portability, and potential design defects.