Abstract

The LHCb experiment at the LHC accelerator at CERN collects collisions of particle bunches at 40 MHz. After a first-level hardware trigger with an output rate of 1 MHz, the physically interesting collisions are selected by running dedicated trigger algorithms in the High Level Trigger (HLT) computing farm. This farm consists of up to roughly 25000 CPU cores in roughly 1750 physical nodes, each equipped with up to 4 TB of local storage. This work describes the LHCb online system with an emphasis on the developments implemented during the current long shutdown (LS1). We will elaborate on the architecture used to treble the available CPU power of the HLT farm and on the procedures to determine and verify the precise calibration and alignment constants that are fed to the HLT event selection. We will describe how these constants are used in a two-stage HLT event selection facility that makes extensive use of the local disk buffering capabilities on the worker nodes. With the installed disk buffers, the CPU resources can also be used during periods without beam of up to ten days; in the past such periods accounted for more than 70% of the total time.
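As a rough illustration of the buffering scheme, the back-of-envelope Python sketch below estimates how long the farm-wide disk buffer could absorb the first-stage (HLT1) output. The node count and per-node disk size are taken from the abstract; the assumed HLT1 accept rate and average event size are illustrative placeholders, not values quoted in this work:

# Back-of-envelope estimate of the farm-wide disk buffer capacity.
NODES = 1750                 # physical worker nodes (from the abstract)
DISK_PER_NODE_TB = 4         # local disk per node in TB (from the abstract)

HLT1_OUTPUT_RATE_HZ = 100e3  # ASSUMED first-stage accept rate (placeholder)
EVENT_SIZE_BYTES = 55e3      # ASSUMED average raw event size (placeholder)

total_buffer_bytes = NODES * DISK_PER_NODE_TB * 1e12
fill_rate_bytes_per_s = HLT1_OUTPUT_RATE_HZ * EVENT_SIZE_BYTES
seconds_to_fill = total_buffer_bytes / fill_rate_bytes_per_s

print(f"Total buffer: {total_buffer_bytes / 1e15:.1f} PB")
print(f"Fill rate:    {fill_rate_bytes_per_s / 1e9:.1f} GB/s")
print(f"Time to fill: {seconds_to_fill / 86400:.1f} days of continuous HLT1 output")

With these assumed numbers the buffer alone holds roughly two weeks of HLT1 output, even before HLT2 starts draining it, which is of the same order as the quoted beam-off periods of up to ten days.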

Highlights

  • LHCb is a dedicated B-physics experiment at the LHC collider at CERN [1]

  • The buffer manager concept: in the LHCb online system all event-handling activities are separated into independent, asynchronously executing processes to derandomize the flow of events

  • HLT1 and HLT2 use the same calibration constants for the event selection, which are later also used for offline data processing


Summary

Introduction

LHCb is a dedicated B-physics experiment at the LHC collider at CERN [1]. The LHCb detector was designed to record proton-proton collisions delivered by the LHC at a rate of up to 40 MHz and a center-of-mass energy of up to 14 TeV.

The Buffer Manager Concept

In the LHCb online system all event-handling activities are separated into independent, asynchronously executing processes in order to derandomize the flow of events. This includes all readout functions, such as the assembly of events from fragments sent by the TELL1 boards, event filtering, and the transport of accepted events to long-term storage. To avoid overflowing the local disk buffer of the worker node (see Figure 4), the stored data are reduced by the continuously running second stage of the event filter, HLT2, which executes more sophisticated algorithms and requires more CPU resources, though it runs at lower priority. A palette of such configurations allows the processes on each worker node to be reconfigured quickly. Both activities, HLT1 and HLT2, use the same calibration constants for the event selection, which are later also used for offline data processing. The processes accept transition requests from SMI++ and publish their state to SMI++ using the DIM protocol [12].
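As a minimal sketch of this producer/consumer decoupling (not the actual LHCb buffer-manager implementation; all names and rates below are invented for illustration), the following Python fragment connects a bursty event builder to a filter process through a bounded queue, so the filter sees a derandomized event flow:

import queue
import random
import threading
import time

# A bounded queue stands in for the shared event buffer: the producer blocks
# when it is full and the consumer blocks when it is empty, which decouples
# the two sides and smooths out (derandomizes) bursts in the event flow.
event_buffer = queue.Queue(maxsize=1000)

def event_builder(n_events):
    """Producer: assembles events from fragments and declares them to the buffer."""
    for evt_id in range(n_events):
        time.sleep(random.expovariate(2000))     # bursty, random arrival times
        event_buffer.put({"id": evt_id, "data": b"..."})
    event_buffer.put(None)                       # end-of-run marker

def hlt_filter():
    """Consumer: pulls events at its own pace and runs the selection."""
    n_accepted = 0
    while True:
        event = event_buffer.get()
        if event is None:
            break
        if event["id"] % 10 == 0:                # placeholder selection decision
            n_accepted += 1                      # would be forwarded to storage or the HLT2 buffer
    print(f"accepted {n_accepted} events")

producer = threading.Thread(target=event_builder, args=(1000,))
consumer = threading.Thread(target=hlt_filter)
producer.start()
consumer.start()
producer.join()
consumer.join()

In the real system the same pattern applies to every stage of the event flow, from event building through filtering to the transport of accepted events to storage.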
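The interplay between HLT1, the local disk buffer and the lower-priority HLT2 stage can likewise be sketched in a few lines. The directory layout, threshold and helper names below are assumptions made for illustration, not the actual LHCb software:

import os
import shutil

# Hypothetical layout: HLT1 writes accepted events as files into this directory
# on the worker node's local disk; HLT2 processes and removes them.
BUFFER_DIR = "/localdisk/hlt1_accepted"
HIGH_WATERMARK = 0.90   # assumed threshold: throttle HLT1 output above 90% disk usage

def disk_fill_fraction(path=BUFFER_DIR):
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def hlt1_may_write():
    """HLT1 keeps filling the buffer only while the local disk has headroom."""
    return disk_fill_fraction() < HIGH_WATERMARK

def run_hlt2_selection(path):
    pass  # stand-in for the real second-stage trigger application

def hlt2_drain_once():
    """Process (and then delete) the oldest buffered file, if any.

    On the farm this runs continuously in dedicated HLT2 processes at lower
    operating-system priority, so HLT1 always wins the CPU while beams are
    colliding and HLT2 catches up during periods without beam.
    """
    files = sorted(os.listdir(BUFFER_DIR))
    if not files:
        return False
    path = os.path.join(BUFFER_DIR, files[0])
    run_hlt2_selection(path)
    os.remove(path)
    return True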
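The run-control integration mentioned above can be pictured as a small finite state machine in every process: it accepts transition requests from the control system and reports its state back. The sketch below uses invented state and command names and a plain callback in place of the real SMI++/DIM machinery:

# Minimal finite-state-machine sketch of a controlled online task.
# States and transitions are illustrative; the real LHCb tasks are driven by
# SMI++ and publish their state over DIM, which is not used here.
ALLOWED_TRANSITIONS = {
    ("NOT_READY", "configure"): "READY",
    ("READY", "start"):         "RUNNING",
    ("RUNNING", "stop"):        "READY",
    ("READY", "reset"):         "NOT_READY",
}

class ControlledTask:
    def __init__(self, publish):
        self.state = "NOT_READY"
        self.publish = publish          # callback standing in for a DIM service update
        self.publish(self.state)

    def request(self, command):
        """Handle a transition request from the control system."""
        new_state = ALLOWED_TRANSITIONS.get((self.state, command))
        if new_state is None:
            raise ValueError(f"illegal transition '{command}' from state {self.state}")
        self.state = new_state
        self.publish(self.state)        # report the new state back to the controller

# Example: drive the task through one run cycle, printing each published state.
task = ControlledTask(publish=lambda s: print("state:", s))
for cmd in ("configure", "start", "stop", "reset"):
    task.request(cmd)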

Data Monitoring

The HLT1 and HLT2 activities differ in nature.