Abstract

The ATLAS High Level Trigger (HLT) is a distributed real-time software system that performs the final online selection of events produced during proton-proton collisions at the Large Hadron Collider (LHC). It is designed as a two-stage event filter running on a farm of commodity PC hardware. The system currently consists of about 850 multi-core processing nodes and will be extended incrementally to about 2000 nodes, following the increasing luminosity of the LHC and depending on the evolution of processor technology. Due to the complexity and similarity of the algorithms, a large fraction of the software is shared between the online and offline event reconstruction. The HLT Infrastructure serves as the interface between the two domains and provides common services for the trigger algorithms. The consequences of this design choice will be discussed, and experiences from the operation of the ATLAS HLT during cosmic ray data taking and first beam in 2008 will be presented. Since the event processing time at the HLT directly determines the number of processing nodes required, special emphasis has to be put on monitoring and improving the performance of the software. Both open-source and custom-developed tools are used for this task, and a few use cases will be shown. Finally, the implications of the prevailing industry trend towards multi- and many-core processors for the architecture of the ATLAS HLT will be discussed. The use of multi-processing and multi-threading techniques within the current system will be presented. Several approaches to making optimal use of the increasing number of cores will be investigated, and the practical implications of implementing each approach in the current system, with hundreds of developers and several hundred thousand lines of code, will be examined.
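To illustrate the multi-processing direction mentioned above, the following is a minimal sketch, not the ATLAS implementation: a fork-after-initialization worker model in which each core runs one worker process and the read-only configuration loaded before forking is shared between workers via copy-on-write. All names and sizes here are hypothetical.

```cpp
// Illustrative sketch only: one possible way to occupy all cores of a node.
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <vector>

// Stand-in for expensive, read-only trigger configuration (menus, geometry, ...).
static std::vector<double> loadConfiguration() {
    return std::vector<double>(1 << 20, 1.0);  // shared copy-on-write after fork()
}

// Stand-in for the per-event selection work done by one worker process.
static void processEvents(int workerId, const std::vector<double>& config) {
    std::printf("worker %d processing events with %zu config words\n",
                workerId, config.size());
}

int main() {
    const std::vector<double> config = loadConfiguration();  // done once, before forking
    const int nWorkers = static_cast<int>(sysconf(_SC_NPROCESSORS_ONLN));

    std::vector<pid_t> children;
    for (int i = 0; i < nWorkers; ++i) {
        const pid_t pid = fork();
        if (pid == 0) {             // child: run its event loop, then exit
            processEvents(i, config);
            _exit(0);
        }
        children.push_back(pid);    // parent: keep track of the workers
    }
    for (pid_t pid : children) waitpid(pid, nullptr, 0);  // wait for all workers
    return 0;
}
```

A thread-based variant would instead run the event loops in one process and share the configuration by reference; the trade-offs between the two are among the approaches the paper examines.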

Highlights

  • ATLAS [1] is a general-purpose detector built for collecting data at the Large Hadron Collider (LHC) at CERN [2]

  • After the initial Level-1 trigger (L1) selection, the event data from the various sub-detectors are held in separate memory buffers until a decision determines whether they are assembled into a complete event for the final selection or discarded (see the sketch after this list)

  • 17 racks will be used as L2 processors and 28 racks can be freely configured as L2 or Event Filter (EF) nodes, allowing for optimal use of the available resources
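To make the buffering described above concrete, here is a minimal sketch of per-sub-detector buffers holding the fragments of a Level-1-accepted event until the next selection stage either assembles them into a complete event or discards them. All types and names are hypothetical and not taken from the ATLAS dataflow software.

```cpp
// Illustrative sketch only: fragments buffered per event and per sub-detector.
#include <cstdint>
#include <map>
#include <string>
#include <vector>

using Fragment = std::vector<std::uint8_t>;   // raw data from one sub-detector
using EventId  = std::uint64_t;

class FragmentBuffers {
public:
    // Each sub-detector buffer stores its fragment under the event identifier.
    void store(EventId id, const std::string& subDetector, Fragment data) {
        buffers_[id][subDetector] = std::move(data);
    }

    // Accept: assemble all fragments into one complete event and free the buffers.
    std::vector<Fragment> assemble(EventId id) {
        std::vector<Fragment> event;
        for (auto& [det, frag] : buffers_[id]) event.push_back(std::move(frag));
        buffers_.erase(id);
        return event;
    }

    // Reject: simply discard the buffered fragments of this event.
    void discard(EventId id) { buffers_.erase(id); }

private:
    std::map<EventId, std::map<std::string, Fragment>> buffers_;
};
```

An accepted event would then be passed on to the final selection stage, while a rejected one releases its buffer space immediately.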


Summary

INTRODUCTION

ATLAS [1] is a general-purpose detector built for collecting data at the Large Hadron Collider (LHC) at CERN [2]. It covers a diverse physics program ranging from discovery physics to precision measurements of Standard Model parameters and the understanding of the mechanism of electroweak symmetry breaking. In a first stage, the LHC is expected to deliver colliding beams in 2009 with a center-of-mass energy of 10 TeV at a luminosity of 10³¹ cm⁻² s⁻¹.

THE ATLAS TRIGGER
High Level Trigger Hardware
High Level Trigger Software Environment
Release building and validation
Performance Monitoring
Operational Experience
PARALLELISM AND MULTI-CORE CPUS
Findings
CONCLUSIONS
