Abstract

The ATLAS collaboration has started a process to understand its computing needs for the High Luminosity LHC (HL-LHC) era. Based on our best understanding of the computing model input parameters for the HL-LHC data-taking conditions, the results indicate the need for substantially more computational and storage resources than a constant yearly computing budget is projected to provide in 2026. Filling the gap between this projection and the needs will be one of the challenges in preparing for the HL-LHC. While gains from improvements in offline software will play a crucial role in this process, a different model for data processing, management, access and bookkeeping should also be envisaged to optimise resource usage. In this contribution we will describe a straw man of this model, founded on basic principles such as single-event-level granularity for data processing and virtual data. We will explain how the current architecture will evolve adiabatically into the future distributed computing system, through the prototyping of building blocks to be integrated into the production infrastructure as early as possible, so that specific use cases can be covered well before the HL-LHC time scale. We will also discuss how such a system would adapt to, and drive, the evolution of the WLCG infrastructure in terms of facilities and services.
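
To make the two founding principles, single-event-level granularity and virtual data, concrete, here is a minimal sketch in Python. It is illustrative only: the names Recipe, VirtualDataset and get_event are hypothetical and do not correspond to any ATLAS or WLCG component. The sketch assumes a dataset can be recorded as a regeneration recipe plus an explicit event list, with payloads materialised only on demand.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, Tuple

    @dataclass(frozen=True)
    class Recipe:
        """Provenance needed to regenerate a product: input dataset + transform."""
        parent: str                  # logical name of the input dataset
        transform: str               # e.g. software release + configuration tag
        event_ids: Tuple[int, ...]   # event-level granularity: exactly which events

    @dataclass
    class VirtualDataset:
        """A 'virtual' dataset: the recipe is always kept, the bytes only sometimes."""
        name: str
        recipe: Recipe
        _cache: Dict[int, bytes] = field(default_factory=dict)

        def get_event(self, event_id: int,
                      regenerate: Callable[[Recipe, int], bytes]) -> bytes:
            # Serve the event from storage if present, else re-derive it
            # from the recorded provenance.
            if event_id not in self._cache:
                self._cache[event_id] = regenerate(self.recipe, event_id)
            return self._cache[event_id]

        def evict(self) -> None:
            # Space can be reclaimed at any time; the recipe guarantees
            # the data can be reproduced.
            self._cache.clear()

Because the bookkeeping tracks the recipe rather than the bytes, an evicted or lost replica becomes a cache miss rather than data loss, which is the property that lets such a model trade storage for (re)computation.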

Highlights

  • The main HL-LHC challenge will consist in storing and managing the required data volume (×7 larger w.r.t. the “flat budget” extrapolation)

  • The Run-2 analysis model has proved very effective in terms of analysis organization and turnaround: it requires half the CPU resources of the “disorganized” Run-1 model. It is, however, very demanding in terms of storage: the full DAOD size equals the full AOD size; two copies of the DAODs are stored on disk to facilitate analysis and ensure integrity, while the AODs are archived on tape

  • The proposed HL-LHC analysis model is based on two key concepts: 1) Reproducibility: removes the need to store multiple copies for integrity reasons (see the storage sketch after this list)
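
The arithmetic behind these highlights can be sketched in a few lines. Every input number below (base_storage_pb, the yearly gain g, aod_pb) is an illustrative placeholder, not an ATLAS figure; the sketch only encodes the relations stated above: a ×7 gap to the flat-budget extrapolation, total DAOD size equal to total AOD size, and two versus one disk copies.

    # Illustrative storage arithmetic; every input number below is a placeholder.

    # 1) "Flat budget" gap: a constant yearly spend buys roughly (1 + g) more
    #    capacity each year; HL-LHC needs are quoted as x7 that extrapolation.
    base_storage_pb = 500.0   # hypothetical storage affordable today
    g, years = 0.15, 9        # hypothetical yearly technology gain and horizon
    flat_budget_pb = base_storage_pb * (1 + g) ** years
    required_pb = 7 * flat_budget_pb
    print(f"flat budget: {flat_budget_pb:.0f} PB, required: {required_pb:.0f} PB")

    # 2) Run-2 analysis model: total DAOD size equals total AOD size; two DAOD
    #    copies on disk (analysis turnaround + integrity), AODs archived on tape.
    aod_pb = 100.0            # hypothetical total AOD volume
    run2_disk_pb = 2 * aod_pb
    run2_tape_pb = aod_pb

    # 3) Reproducibility: one disk copy suffices if any lost DAOD can be
    #    re-derived from its provenance, halving the DAOD disk footprint.
    single_copy_disk_pb = 1 * aod_pb
    print(f"DAOD disk: {run2_disk_pb:.0f} PB (Run-2) -> {single_copy_disk_pb:.0f} PB; "
          f"tape: {run2_tape_pb:.0f} PB either way")

The point is not the numbers but the relation: under the Run-2 model the DAOD disk footprint scales as twice the AOD volume, while a reproducibility-based model brings that factor back to one.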


