Abstract

The Large Hadron Collider (LHC) will enter a new phase beginning in 2027 with the upgrade to the High Luminosity LHC (HL-LHC). The increase in the number of simultaneous collisions, coupled with the more complex structure of individual events, will result in each LHC experiment collecting, storing, and processing exabytes of data per year. The amount of generated and/or collected data greatly outweighs the expected available computing resources. In this paper, we discuss efficient usage of HPC resources as a prerequisite for data-intensive science at exascale. First, we discuss the experience of porting the CMS Hadron and Electromagnetic calorimeter reconstruction code to Nvidia GPUs within the DEEP-EST project; second, we look at the tools adopted to benchmark the variety of resources available at HPC centers. Finally, we touch on one of the most important aspects of the future of HEP: how to handle the flow of petabytes of data to and from computing facilities, be they clouds or HPC centers, for exascale data processing in a flexible, scalable and performant manner. These investigations are a key contribution to technical work within the HPC collaboration among CERN, SKA, GEANT and PRACE.

Highlights

  • The field of High-Performance Computing (HPC) is undergoing a transition to a major new phase of its development, namely exascale computing

  • Accelerated processors like Graphical Processing Units (GPUs) or low-power ARM processors provide the bulk of the computing capacity at the majority of the largest HPC facilities

  • The DEEP-EST prototype consists of three types of compute modules: Cluster Module (CM), Extreme-scale Booster (ESB) and Data Analytics Module (DAM)


Introduction

The field of High-Performance Computing (HPC) is undergoing a transition to a major new phase of its development, namely exascale computing. HPC facilities rely heavily on heterogeneous hardware architectures. Accelerated processors like Graphical Processing Units (GPUs) or low-power ARM processors provide the bulk of the computing capacity at the majority of the largest HPC facilities. The use of heterogeneous architectures is both a challenge and an opportunity [1] for the High Energy Physics (HEP) community.
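As a rough illustration of the kind of GPU offload referred to above (and of the calorimeter-reconstruction porting mentioned in the abstract), the CUDA sketch below applies a trivial per-channel pedestal subtraction and gain calibration on the device. The kernel name, data layout and constants are hypothetical and chosen for illustration only; the actual CMS HCAL/ECAL reconstruction code is far more involved.

// Minimal sketch: offloading a per-channel energy estimate to a GPU.
// All names and numbers here are illustrative, not taken from the CMS code.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Subtract the pedestal and apply a per-channel gain to raw ADC counts.
__global__ void calibrateChannels(const float* adc, const float* gain,
                                  const float* pedestal, float* energy, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        energy[i] = gain[i] * (adc[i] - pedestal[i]);
    }
}

int main() {
    const int n = 1 << 20;  // number of readout channels (illustrative)
    std::vector<float> adc(n, 120.f), gain(n, 0.05f), ped(n, 100.f), out(n);

    float *dAdc, *dGain, *dPed, *dOut;
    cudaMalloc((void**)&dAdc,  n * sizeof(float));
    cudaMalloc((void**)&dGain, n * sizeof(float));
    cudaMalloc((void**)&dPed,  n * sizeof(float));
    cudaMalloc((void**)&dOut,  n * sizeof(float));

    cudaMemcpy(dAdc,  adc.data(),  n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dGain, gain.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dPed,  ped.data(),  n * sizeof(float), cudaMemcpyHostToDevice);

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    calibrateChannels<<<blocks, threads>>>(dAdc, dGain, dPed, dOut, n);
    cudaDeviceSynchronize();

    cudaMemcpy(out.data(), dOut, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("channel 0 energy: %f\n", out[0]);  // 0.05 * (120 - 100) = 1.0

    cudaFree(dAdc); cudaFree(dGain); cudaFree(dPed); cudaFree(dOut);
    return 0;
}

The host/device pattern shown here (allocate, copy in, launch, copy out) is the basic structure behind offloading any reconstruction step to an accelerator; the challenge for HEP workloads lies in exposing enough parallelism per event to keep such devices busy.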

The DEEP-EST Project
Containerized Benchmarking on HPC
Extending HEP benchmarking for HPC
HPC and Data Access
Conclusion