Abstract

The ATLAS Distributed Computing infrastructure has evolved after the first period of LHC data taking in order to cope with the challenges of the upcoming LHC Run-2. An increase in both the data rate and the computing demands of the Monte-Carlo simulation, as well as new approaches to ATLAS analysis, dictated a more dynamic workload management system (Prodsys-2) and data management system (Rucio), overcoming the boundaries imposed by the design of the old computing model. In particular, the commissioning of new central computing system components was the core part of the migration toward a flexible computing model. Flexible use of opportunistic resources such as HPC, cloud, and volunteer computing is embedded in the new computing model; the data access mechanisms have been enhanced with remote access, and network topology and performance are deeply integrated into the core of the system. Moreover, a new data management strategy, based on a defined lifetime for each dataset, has been introduced to better manage the lifecycle of the data. In this note, an overview of the operational experience with the new system and its evolution is presented.
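The lifetime-based data management strategy mentioned above can be pictured with a minimal sketch. This is an illustration only, with hypothetical class and parameter names; it does not reflect the actual Rucio implementation or its policy parameters:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of a lifetime-based data management policy:
# every dataset carries an expiry date derived from its assigned lifetime,
# and expired, unused datasets become candidates for cleanup.
class Dataset:
    def __init__(self, name, created, lifetime_days, last_access=None):
        self.name = name
        self.created = created
        self.expires = created + timedelta(days=lifetime_days)
        self.last_access = last_access or created

def deletion_candidates(datasets, now, grace_days=30):
    """Return datasets whose lifetime has expired and that have not been
    accessed within a grace period (grace_days is an assumed parameter)."""
    grace = timedelta(days=grace_days)
    return [d for d in datasets
            if now > d.expires and now - d.last_access > grace]

if __name__ == "__main__":
    now = datetime(2015, 6, 1)
    samples = [
        Dataset("data12_8TeV.AOD", datetime(2012, 9, 1), lifetime_days=730),
        Dataset("mc15_13TeV.xAOD", datetime(2015, 3, 1), lifetime_days=365),
    ]
    for d in deletion_candidates(samples, now):
        print("eligible for cleanup:", d.name)
```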

Highlights

  • The LHC accelerator was upgraded in 2013 and 2014 to reach a higher energy (6.5 TeV per proton beam) and a higher luminosity

  • The ATLAS Analysis Model for Run-2 will be based on the xAOD format, which is readable both from the offline framework (Athena) and from ROOT

  • The ATLAS distributed computing project carried out an ambitious upgrade program in preparation for LHC Run-2


Summary

Introduction

The LHC accelerator was upgraded in 2013 and 2014 to reach a higher energy (6.5 TeV per proton beam) and a higher luminosity. This will bring new computing challenges for the ATLAS experiment, as it will operate at more than double the trigger rate (which implies a factor of two more events to process) and the events will be more complex. The computing resources will not grow at the same pace: an increase of computing power proportional to Moore’s law, i.e. about 20% every year, is expected, rather than a linear scaling with the number of events to be processed. In this contribution we describe the distributed computing system and model that ATLAS will adopt in the coming LHC run and how such a system will face the challenges described above; a rough illustration of the resource gap is sketched below. In order to face the Run-2 challenges, major improvements were needed, and for many services this resulted in a complete re-design.
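As a back-of-the-envelope illustration of the gap between resource growth and processing demand, the snippet below uses only the factors quoted above (a ~20% yearly capacity increase versus a doubled trigger rate); the number of years is an assumption chosen for illustration:

```python
# Illustrative arithmetic only; the 20%/year growth and factor-two event
# increase are taken from the text, the 3-year horizon is an assumption.
years = 3
capacity = 1.20 ** years       # compound ~20% per year growth in computing power
demand = 2.0                   # at least a factor of two more events to process
print(f"capacity growth over {years} years: x{capacity:.2f}")   # x1.73
print(f"demand growth (trigger rate alone):  x{demand:.1f}")    # x2.0
print(f"gap to cover by efficiency gains:    x{demand / capacity:.2f}")
```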

Distributed Data Management system
Findings
Conclusions