Abstract

ATLAS is a Physics experiment that explores high-energy particle collisions at the Large Hadron Collider (LHC) at CERN. It uses tens of millions of electronics channels to capture the outcome of the particle bunches crossing each other every 25 ns. Since reading out and storing the complete information is not feasible (∼100 TB/s), ATLAS relies on a complex and highly distributed Trigger and Data Acquisition (TDAQ) system, in charge of selecting only interesting data and transporting them to permanent mass storage (∼1 GB/s) for later analysis. The data reduction is carried out in two stages: first, custom electronics performs an initial level of data rejection for each bunch crossing based on partial and localized information; only data corresponding to collisions passing this stage of selection are actually read out from the on-detector electronics. Then, a large computer farm (∼17 k cores) analyses these data in real time and decides which are worth storing for Physics analysis. A large network moves the data from ∼2000 front-end buffers to the location where they are processed, and from there to mass storage. The overall TDAQ system is embedded in a common software framework that allows controlling, configuring and monitoring the data-taking process. The experience gained during the first period of data taking of the ATLAS experiment (Run 1, 2010-2012) inspired a number of improvements to the TDAQ system that are being put in place during the so-called Long Shutdown 1 (LS1) of the LHC, in 2013/14. This paper summarizes the main changes applied to the ATLAS TDAQ system and highlights the performance and functional improvements expected for LHC Run 2, with particular emphasis on the evolution of the software-based data selection and of the flow of data in the system. The reasons for the modified architectural and technical choices are explained, and details are given on the simulation and testing approach used to validate the system.
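
To make the scale of the data reduction concrete, the rates quoted above can be turned into a short back-of-envelope calculation. The snippet below is only an illustrative sketch using the numbers from this abstract (25 ns bunch spacing, ∼100 TB/s at the front-end, ∼1 GB/s to storage); the implied event size and overall rejection factor are derived quantities, not official ATLAS figures.

    # Back-of-envelope arithmetic for the ATLAS TDAQ data reduction,
    # based only on the rates quoted in the abstract (illustrative sketch).
    BUNCH_SPACING_S = 25e-9   # 25 ns between bunch crossings
    FRONT_END_RATE = 100e12   # ~100 TB/s produced by the detector
    STORAGE_RATE = 1e9        # ~1 GB/s written to mass storage

    crossing_rate_hz = 1.0 / BUNCH_SPACING_S               # 40 MHz
    event_size_bytes = FRONT_END_RATE / crossing_rate_hz   # ~2.5 MB per crossing
    overall_rejection = FRONT_END_RATE / STORAGE_RATE      # ~1e5

    print(f"crossing rate:      {crossing_rate_hz / 1e6:.0f} MHz")
    print(f"implied event size: {event_size_bytes / 1e6:.1f} MB")
    print(f"overall rejection:  1 in {overall_rejection:,.0f}")

In other words, only about one part in a hundred thousand of the data volume produced at the front-end survives to permanent storage, which is what motivates the two-stage selection described above.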

Highlights

  • ATLAS (A Toroidal LHC ApparatuS) is a multipurpose particle detector located in the Large Hadron Collider (LHC) at CERN [1]

  • This was made possible by tuning the traffic-shaping mechanism in the Data Collection Managers (DCMs), which limits the number of event fragments requested from the Read-Out System at any one time (a minimal sketch of such a shaper follows this list)

  • The correct functioning of the Trigger and Data Acquisition (TDAQ) system has a direct impact on the operation of the ATLAS experiment and on the achievement of its Physics goals
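
The traffic-shaping point above deserves a brief unpacking: if a DCM requested all fragments of an event from the ∼2000 front-end buffers at once, the many simultaneous responses could overflow the buffers of the data-collection switches (the TCP-incast problem), so the number of requests in flight is capped. The following is a minimal, hypothetical sketch of such a credit-based shaper; the names (TrafficShaper, request_fragment, on_fragment_received) and the credit cap are illustrative assumptions, not the actual DCM interface.

    from collections import deque

    class TrafficShaper:
        """Credit-based traffic shaper (illustrative sketch, not the DCM API).

        Each in-flight fragment request consumes one credit; the credit is
        returned when the fragment arrives. Bounding the credits bounds how
        many Read-Out System PCs can be answering this client at once.
        """

        def __init__(self, max_credits=10):      # cap chosen for illustration
            self.credits = max_credits
            self.pending = deque()               # requests not yet sent

        def request_fragment(self, ros_pc, fragment_id):
            # Queue the request; it is sent as soon as a credit is free.
            self.pending.append((ros_pc, fragment_id))
            self._drain()

        def on_fragment_received(self):
            # A fragment arrived: free its credit and send the next request.
            self.credits += 1
            self._drain()

        def _drain(self):
            while self.credits > 0 and self.pending:
                ros_pc, fragment_id = self.pending.popleft()
                self.credits -= 1
                self._send(ros_pc, fragment_id)

        def _send(self, ros_pc, fragment_id):
            # Placeholder for the real network request to a Read-Out System PC.
            print(f"requesting fragment {fragment_id} from {ros_pc}")

In such a scheme, raising the credit cap lowers event-collection latency at the cost of a higher risk of incast; tuning that trade-off is what the highlight above refers to.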

Summary

Introduction

ATLAS (A Toroidal LHC ApparatuS) is a multipurpose particle detector located at the Large Hadron Collider (LHC) at CERN [1]. From the start of LHC data taking in 2010 until the beginning of 2013, the ATLAS detector operated successfully, recording proton-proton collisions at a center-of-mass energy of up to 8 TeV. In 2013, the Long Shutdown 1 period, or LS1, was scheduled in order to carry out the necessary maintenance and upgrade operations on the different systems of the apparatus. The Trigger and Data Acquisition (TDAQ) system of the ATLAS detector underwent important changes, some of which have been described in [2] and [3]. In the first quarter of 2015, a new period named Run 2 will begin at the LHC, with collisions expected at a center-of-mass energy of 13 TeV, inaugurating a new era for HEP experiments.

The Data Flow upgrade
The Data Collection Network
Findings
Conclusions