Abstract

The Data Acquisition (DAQ) and High Level Trigger (HLT) systems that served the ATLAS experiment during LHC's first run are being upgraded in the first long LHC shutdown period, from 2013 to 2015. This contribution describes the elements that are vital for the new interaction between the two systems. The central architectural enhancement is the fusion of the once separate Level 2, Event Building (EB), and Event Filter steps. Through the factorization of previously dispersed functionality and better exploitation of caching mechanisms, the inherent simplification carries with it an increase in performance. Adaptability to different running conditions is improved by an automatic balancing of formerly separate tasks. Incremental EB is the principle of the new Data Collection, whereby the HLT farm avoids duplicate requests to the detector Read-Out System (ROS) by preserving and reusing previously obtained data. Moreover, requests are packed and fetched together to avoid redundant trips to the ROS. Anticipated EB is activated when a large enough portion of the event is requested, reinforcing this effect. A new HLT Processing Unit exploits current architecture trends with a multiprocessing approach that is based on process forking, thereby bypassing thread-safety concerns, while containing total memory usage through the Operating System's Copy-On-Write feature. HLT and DAQ releases are decoupled by a flexible interface that allows quick updates of the communication between both sides, thus providing increased operational flexibility. Finally, additional data are recorded through Data Scouting. A method of previewing properties of events whose frequency would otherwise exclude them, this new feature will provide key intelligence for subsequent trigger adjustments.
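The incremental Data Collection described above can be pictured with a small sketch. The names used below (EventDataCollector, RosFetcher, the 80% threshold) are illustrative assumptions rather than the actual ATLAS interfaces; the sketch only shows the idea of caching previously fetched ROS fragments, packing outstanding requests into a single call, and switching to anticipated event building once most of the event has been collected.

```cpp
// Minimal sketch (not the actual ATLAS Data Collection API) of incremental
// event building: fragments already fetched for this event are cached and
// reused, missing ones are requested from the ROS in one packed call, and once
// a large fraction of the event has been collected the remainder is fetched
// eagerly ("anticipated" event building).
#include <cstddef>
#include <cstdint>
#include <functional>
#include <map>
#include <set>
#include <vector>

using RobId       = std::uint32_t;               // Read-Out Buffer identifier
using RobFragment = std::vector<std::uint8_t>;   // raw fragment payload
// Callback standing in for the real network request to the Read-Out System.
using RosFetcher  = std::function<std::map<RobId, RobFragment>(const std::vector<RobId>&)>;

class EventDataCollector {
public:
    EventDataCollector(RosFetcher fetch, std::size_t totalRobs)
        : fetch_(std::move(fetch)), totalRobs_(totalRobs) {}

    // Return the requested fragments, contacting the ROS only for those
    // that have not already been retrieved for this event.
    std::map<RobId, RobFragment> get(const std::set<RobId>& requested) {
        std::vector<RobId> missing;
        for (RobId id : requested)
            if (!cache_.count(id)) missing.push_back(id);

        if (!missing.empty())                    // one packed request, not one per fragment
            for (auto& [id, frag] : fetch_(missing)) cache_[id] = std::move(frag);

        // Anticipated event building: if most of the event is already here,
        // a real implementation would trigger full event building now.
        anticipated_ = (cache_.size() >= 0.8 * totalRobs_);

        std::map<RobId, RobFragment> result;
        for (RobId id : requested) result[id] = cache_[id];
        return result;
    }

private:
    RosFetcher fetch_;
    std::map<RobId, RobFragment> cache_;         // fragments collected so far for this event
    std::size_t totalRobs_;                      // number of ROBs in the full event
    bool anticipated_ = false;
};
```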

Highlights

  • ATLAS [2] is a particle physics experiment that relies on a general purpose detector for studying high-energy particle collisions at the Large Hadron Collider (LHC) [3]

  • Its most direct effects were on Data Acquisition (DAQ)/High Level Trigger (HLT) integration, concretely: the development of new HLT Processing Unit (HLTPU) host software; the adaptation of the HLT Online Integration Framework; and the creation of the new tool athenaHLT, which corresponds to a reissue of the two previous tools athenaMT [8] – for Level 2 (L2) – and athenaPT [9] – for Event Filter (EF)

  • We effectively reduce the total memory footprint by making the HLTPU processes share large amounts of memory (see the sketch below)
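As a rough illustration of the forking model mentioned in the abstract and in the last highlight, the sketch below has a parent process load its large, read-mostly configuration once and then fork one worker per core. This is a generic POSIX fork example, not the ATLAS HLTPU code; names such as configureOnce and processEvents are assumptions. The point is only that read-only access keeps the copy-on-write pages shared across workers, so no thread safety is required and the total footprint stays close to a single copy.

```cpp
// Minimal sketch of fork-based multiprocessing with copy-on-write sharing.
#include <sys/wait.h>
#include <unistd.h>
#include <cstdlib>
#include <iostream>
#include <vector>

// Stand-in for loading detector geometry, conditions data, trigger menus, ...
std::vector<double> configureOnce() { return std::vector<double>(10'000'000, 1.0); }

void processEvents(int workerId, const std::vector<double>& shared) {
    // Read-only access keeps the pages shared; writes would trigger COW copies.
    std::cout << "worker " << workerId << " sees " << shared.size() << " words\n";
}

int main() {
    const auto shared = configureOnce();     // done once, before forking
    const int nWorkers = 4;                  // e.g. one per core

    std::vector<pid_t> children;
    for (int i = 0; i < nWorkers; ++i) {
        pid_t pid = fork();
        if (pid == 0) {                      // child: independent event loop
            processEvents(i, shared);
            _exit(EXIT_SUCCESS);
        }
        if (pid > 0) children.push_back(pid);
    }
    for (pid_t pid : children) waitpid(pid, nullptr, 0);   // parent waits for workers
    return 0;
}
```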


Summary

INTRODUCTION

ATLAS [2] is a particle physics experiment that relies on a general purpose detector for studying high-energy particle collisions at the Large Hadron Collider (LHC) [3]. After a successful first physics run, the world’s most powerful particle accelerator is currently shut down for planned maintenance and renovation efforts. During this period, both the LHC machine and the experiments are preparing for an increase in luminosity from 8 × 10³³ cm⁻²s⁻¹ to the nominal 10³⁴ cm⁻²s⁻¹, and possibly beyond. The center-of-mass energy will increase from a previous maximum of 8 TeV to the design value of about 13 TeV, while the previous bunch spacing of 50 ns will be reduced to 25 ns. These changes will mean a substantial increase in the rate of the data produced by the ATLAS detector in 2015, after the Long Shutdown 1 (LS1) [4] – as this period between 2013 and 2015 is known. This document first presents the central elements that are affected by the HLT/DAQ upgrade, before describing in more detail five improvements to the two subsystems that fundamentally affect the way they integrate.

THE ATLAS TRIGGER AND DATA ACQUISITION
HLT merger
Incremental event building
A multiprocessing HLTPU
Flexible communication
Data scouting
CONCLUSION