Abstract

The CMS experiment will collect data from the proton-proton collisions delivered by the Large Hadron Collider (LHC) at a centre-of-mass energy of up to 14 TeV. The CMS trigger system is designed to cope with unprecedented luminosities and LHC bunch-crossing rates of up to 40 MHz. The unique CMS trigger architecture employs only two trigger levels. The Level-1 trigger is implemented using custom electronics, while the High Level Trigger (HLT) is based on software algorithms running on a large cluster of commercial processors, the Event Filter Farm. We present the major functionalities of the CMS High Level Trigger system as of the start of LHC beam operations in September 2008. The validation of the HLT system in the online environment with Monte Carlo simulated data and its commissioning during cosmic-ray data-taking campaigns are discussed in detail. We conclude with a description of HLT operations with the first circulating LHC beams, before the incident that occurred on 19 September 2008.
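As a rough orientation for the rates quoted above, the short Python sketch below works through the two-stage rate reduction. Only the 40 MHz bunch-crossing rate comes from the text; the 100 kHz Level-1 accept rate and the roughly 100 Hz storage rate are assumed, typical design-era figures, not numbers taken from this paper.

    # Illustrative rate-budget arithmetic for a two-level trigger.
    # Only the 40 MHz input rate is quoted in the abstract; the other
    # figures are assumed design-era values.
    bunch_crossing_rate_hz = 40e6   # LHC bunch-crossing rate (40 MHz)
    l1_output_rate_hz = 100e3       # assumed Level-1 accept rate (custom electronics)
    hlt_output_rate_hz = 100.0      # assumed rate written to storage after the HLT

    l1_rejection = bunch_crossing_rate_hz / l1_output_rate_hz    # ~400x in hardware
    hlt_rejection = l1_output_rate_hz / hlt_output_rate_hz       # ~1000x in software
    print(f"L1 rejection:      ~{l1_rejection:.0f}x")
    print(f"HLT rejection:     ~{hlt_rejection:.0f}x")
    print(f"Overall reduction: ~{bunch_crossing_rate_hz / hlt_output_rate_hz:.0f}x")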

Highlights

  • The CMS detector [1, 2] is built and in its final commissioning phase, preparing to collect data from the proton-proton collisions to be delivered by the Large Hadron Collider (LHC), at a centre-of-mass energy of up to 14 TeV

  • The second trigger level, the High Level Trigger (HLT), provides further rate reduction by analyzing full-granularity detector data, using software reconstruction and filtering algorithms running on a large computing cluster consisting of commercial processors, the Event Filter Farm

  • Assuming a conservative factor of two in the average HLT processing time, corresponding to an average of 10 events processed per second per core, the CPU power currently available in the Filter Farm will be suitable for handling an HLT input rate as high as 60 kHz, beyond what is expected for the first year of LHC physics runs (see the sketch after this list)

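The 60 kHz figure in the last highlight follows from simple arithmetic, redone in the Python lines below. Only the 10 Hz-per-core throughput and the 60 kHz target come from the text; the resulting core count is an implied quantity, not a published farm size.

    # Back-of-the-envelope check of the Filter Farm CPU budget.
    events_per_sec_per_core = 10   # assumes a conservative 2x margin on HLT time
    target_input_rate_hz = 60e3    # HLT input rate to be sustained

    cores_needed = target_input_rate_hz / events_per_sec_per_core
    print(f"Cores needed at 10 Hz/core: {cores_needed:.0f}")   # -> 6000 cores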

Summary

Introduction

The CMS detector [1, 2] is built and in its final commissioning phase, preparing to collect data from the proton-proton collisions to be delivered by the Large Hadron Collider (LHC) at a centre-of-mass energy of up to 14 TeV.

A design choice was made to couple, in the same node, the application that buffers and formats the event data, the Builder Unit (BU), with the execution of the HLT reconstruction and selection. From the BU, events are handed to the Filter Units (FU), the applications which run the actual High Level Trigger reconstruction and selection. A dedicated application, the Resource Broker (RB), takes care of exchanging data with the DAQ (which requires high-bandwidth I/O), decoupling the execution of the physics algorithms (CPU intensive), performed by the Event Processors (EP), from the data flow. This allows each Filter Unit to continue operation, recover the content of problematic events, and forward them to be stored unprocessed. Formatting and deployment of the configuration are decoupled from the database schema, allowing the target configuration grammar to evolve independently.
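To make the division of labour concrete, here is a minimal Python sketch (not CMS code) of the decoupling described above: a producer thread standing in for the Resource Broker handles the I/O hand-off from the Builder Unit, an Event Processor worker runs the CPU-intensive selection, and problematic events are recovered and routed to an error stream to be stored unprocessed. The run_hlt and store functions are hypothetical placeholders.

    # Minimal sketch (not CMS code) of the Resource Broker / Event Processor split.
    import queue
    import threading

    raw_events = queue.Queue(maxsize=64)   # RB -> EP hand-off buffer
    error_stream = queue.Queue()           # problematic events, stored unprocessed

    def resource_broker(built_events):
        """I/O side: receive built events from the Builder Unit and enqueue them."""
        for ev in built_events:
            raw_events.put(ev)
        raw_events.put(None)               # sentinel: end of run

    def run_hlt(ev):
        """Hypothetical stand-in for HLT reconstruction and selection."""
        return ev["sum_et"] > 20           # toy filter decision

    def store(ev):
        """Hypothetical stand-in for forwarding an accepted event to storage."""
        print("stored event", ev["id"])

    def event_processor():
        """CPU side: run the selection, diverting problematic events unprocessed."""
        while (ev := raw_events.get()) is not None:
            try:
                accepted = run_hlt(ev)
            except Exception:
                error_stream.put(ev)       # recover the event, store it unprocessed
            else:
                if accepted:
                    store(ev)

    events = [{"id": i, "sum_et": 10 * i} for i in range(5)]
    rb = threading.Thread(target=resource_broker, args=(events,))
    ep = threading.Thread(target=event_processor)
    rb.start(); ep.start(); rb.join(); ep.join()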

