Abstract

The ATLAS experiment relies heavily on simulated data, requiring the production of billions of simulated proton-proton collisions every run period. As such, the simulation of collisions (events) is the single largest consumer of CPU resources. ATLAS's finite computing resources are at odds with the expected conditions during the High Luminosity LHC era, where the increase in proton-proton centre-of-mass energy and instantaneous luminosity will result in higher particle multiplicities and roughly five times as many interactions per bunch-crossing as in LHC Run-2. Therefore, significant effort within the collaboration is focused on increasing the rate at which Monte Carlo events can be produced by designing and developing fast alternatives to the algorithms used in the standard Monte Carlo production chain.

Highlights

  • To enable the ATLAS collaboration to pursue its ambitious physics research program, very large simulated collision event samples are required

  • Significant effort within the collaboration is focused on increasing the rate at which Monte Carlo events can be produced by designing and developing fast alternatives to the algorithms used in the standard Monte Carlo production chain

  • The need for simulated events increases with the number of proton-proton collision data events collected by the ATLAS experiment [1] at the Large Hadron Collider (LHC) [2]

Summary

Introduction

To enable the ATLAS collaboration to pursue its ambitious physics research program, very large simulated collision event samples are required. Producing samples of sufficient size will be crucial during Run 3 data taking, starting in 2022, and even more essential at the High Luminosity LHC, from 2027. These samples are produced with the ATLAS Monte Carlo (MC) chain, which consists of distinct production steps, each providing a different output format: event generation (EVNT), detector simulation (HITS), digitization (RDO), reconstruction (AOD) and derivation (DAOD). Once very fast detector simulation is used, digitization and reconstruction become the dominant consumers of CPU time in the MC production chain. The last two sections of this paper describe the tools currently used to monitor the performance of the Fast Chain and its continuous integration with the main ATLAS software framework (Athena) [12].
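
To make the structure of the chain concrete, the sketch below models the five production steps as a linear pipeline in which each stage consumes the previous stage's output format (EVNT → HITS → RDO → AOD → DAOD). This is a minimal illustration only: the function and file names are hypothetical placeholders, not the actual Athena transforms.

# A minimal sketch (not real Athena code) of the MC production chain as a
# linear pipeline of format-to-format transforms. All function and file
# names are hypothetical placeholders chosen for illustration.

def generate_events(n_events: int) -> str:
    """Event generation: produce generator-level events (EVNT)."""
    return "sample.EVNT.root"

def simulate_detector(evnt_file: str) -> str:
    """Detector simulation: propagate particles through the detector (HITS)."""
    return "sample.HITS.root"

def digitize(hits_file: str) -> str:
    """Digitization: turn energy deposits into detector signals (RDO)."""
    return "sample.RDO.root"

def reconstruct(rdo_file: str) -> str:
    """Reconstruction: build physics objects from detector signals (AOD)."""
    return "sample.AOD.root"

def derive(aod_file: str) -> str:
    """Derivation: slim the AOD into analysis-ready formats (DAOD)."""
    return "sample.DAOD.root"

def run_mc_chain(n_events: int) -> str:
    """Run the five steps in order; each consumes the previous output."""
    evnt = generate_events(n_events)
    hits = simulate_detector(evnt)
    rdo = digitize(hits)
    aod = reconstruct(rdo)
    return derive(aod)

if __name__ == "__main__":
    print(run_mc_chain(1000))  # -> "sample.DAOD.root"

In this picture, each stage communicates with the next only through its output format, so a fast alternative to any one step (for example, a very fast detector simulation) can be swapped in without modifying the rest of the chain.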

The remaining sections of the paper cover the following topics:

  • RDO-overlay
  • Track-overlay
  • Integration of ACTS
  • Parametrization of nuclear interactions
  • Fast digitization of the silicon detector
  • Fast Track Reconstruction
  • Daily ART tests
  • Future Plans
  • Conclusion