Abstract

The High Luminosity LHC (HL-LHC) will start operating in 2027 after the third Long Shutdown (LS3), and is designed to provide an ultimate instantaneous luminosity of 7.5 × 10³⁴ cm⁻²s⁻¹, at the price of extreme pileup of up to 200 interactions per crossing. The number of overlapping interactions in HL-LHC collisions, their density, and the resulting intense radiation environment warrant an almost complete upgrade of the CMS detector. The upgraded CMS detector will be read out by approximately fifty thousand high-speed front-end optical links at an unprecedented data rate of up to 80 Tb/s, for an average expected total event size of approximately 8–10 MB. Following the present established design, the CMS trigger and data acquisition system will continue to feature two trigger levels, with only one synchronous hardware-based Level-1 Trigger (L1), consisting of custom electronic boards and operating on dedicated data streams, and a second level, the High Level Trigger (HLT), using software algorithms running asynchronously on standard processors and making use of the full detector data to select events for offline storage and analysis. The upgraded CMS data acquisition system will collect data fragments for Level-1 accepted events from the detector back-end modules at a rate of up to 750 kHz, aggregate fragments corresponding to individual Level-1 accepts into events, and distribute them to the HLT processors, where they will be filtered further. Events accepted by the HLT will be stored permanently at a rate of up to 7.5 kHz. This paper describes the baseline design of the DAQ and HLT systems for Phase-2 of CMS.
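
As a rough, illustrative cross-check of these figures, the event-building and storage bandwidths follow directly from the quoted rates and event sizes; the short Python sketch below (the variable names and the 1 MB = 10^6 bytes convention are assumptions of the sketch, not taken from the paper) makes the arithmetic explicit.

    # Back-of-the-envelope bandwidths from the figures quoted in the abstract.
    l1_rate_hz = 750e3        # Level-1 accept rate (750 kHz)
    hlt_rate_hz = 7.5e3       # HLT output rate to permanent storage (7.5 kHz)
    event_sizes_mb = (8, 10)  # expected average total event size (8-10 MB)

    for size_mb in event_sizes_mb:
        event_building_tbps = l1_rate_hz * size_mb * 8 / 1e6  # Tb/s into the event builder
        storage_gbps = hlt_rate_hz * size_mb * 8 / 1e3        # Gb/s written to storage
        print(f"{size_mb} MB events: event building ~{event_building_tbps:.0f} Tb/s, "
              f"storage ~{storage_gbps:.0f} Gb/s")

For 8–10 MB events this gives roughly 50–60 Tb/s into the event builder and a few hundred Gb/s to permanent storage, consistent in scale with the up to 80 Tb/s read out over the front-end links.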

Highlights

  • This paper describes the baseline design of the Data Acquisition (DAQ) and High Level Trigger (HLT) systems for Phase-2 of CMS

  • The main purpose of the Data Acquisition system (DAQ) of a collider experiment is to provide the data pathway and time decoupling between the synchronous detector readout and data reduction, the asynchronous selection of interesting events in the software trigger level, and their permanent storage for offline analysis

  • Data for Level-1 accepted events will be collected, for each Advanced Telecommunications Computing Architecture (ATCA) crate, by one or more DTH/DAQ boards connected to adjacent boards via dedicated front-panel high-speed optical connections (see the sketch after this list)

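The following toy sketch illustrates the fragment-collection step described in the last highlight. It is not CMS code: the class name, the per-crate grouping, and the completeness check are assumptions made purely to show how fragments from the back-end boards of one ATCA crate could be grouped per Level-1 accept by a DTH/DAQ-style aggregator (which in reality does this in firmware over front-panel optical links).

    from collections import defaultdict

    class CrateAggregator:
        """Toy model of per-crate fragment collection for Level-1 accepted events."""

        def __init__(self, board_ids):
            self.board_ids = set(board_ids)           # back-end boards in this crate
            self.pending = defaultdict(dict)          # l1_event_id -> {board_id: payload}

        def add_fragment(self, l1_event_id, board_id, payload):
            """Store one fragment; return the assembled crate record once complete."""
            self.pending[l1_event_id][board_id] = payload
            if set(self.pending[l1_event_id]) == self.board_ids:
                return self.pending.pop(l1_event_id)  # all boards reported for this accept
            return None

    # Example: a crate with three back-end boards reporting for Level-1 accept 42.
    crate = CrateAggregator(board_ids=["slot1", "slot2", "slot3"])
    crate.add_fragment(42, "slot1", b"...")
    crate.add_fragment(42, "slot2", b"...")
    record = crate.add_fragment(42, "slot3", b"...")  # complete: all three fragments present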

Summary

Introduction

The main purpose of the Data Acquisition system (DAQ) of a collider experiment is to provide the data pathway and time decoupling between the synchronous detector readout and data reduction, the asynchronous selection of interesting events in the software trigger level, and their permanent storage for offline analysis. The uplinks transport digitized detector data from the front-end (FE) to the back-end (BE) modular electronics, which pre-process them, route the relevant portion to the hardware trigger processors, and send the full-resolution data to the DAQ system. This system is deadtime-less, in the sense that pipelines at every stage of the synchronous process are sized to store data for the maximum latency required to decide whether to pass them on or drop them, under normal conditions, without losses. The local storage decouples the data acquisition from the transfer process and must provide enough buffer to absorb fluctuations in the transfer speed and to enable uninterrupted data taking in case of an outage of the transfer link. This is realized using distributed or network storage attached to the same switched network used for event building. Entire data-set files are transferred to central computing resources (Tier-0, located in the CERN computing center, some 10 km away) over long-distance links, to be stored for the subsequent offline reconstruction, which typically happens within a few days of the actual data taking.
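
To make the notion of time decoupling concrete, the sketch below models it with a bounded in-memory queue between the synchronous readout/event building and the asynchronous software selection, plus a second buffer standing in for the local storage that absorbs fluctuations in the transfer towards Tier-0. The threading structure, queue sizes, accept fraction, and function names are illustrative assumptions, not the actual CMS implementation; the point is only that buffers at each boundary let the downstream stage run at its own pace without introducing deadtime upstream.

    import queue
    import threading

    # Illustrative only: bounded buffers decoupling the synchronous readout,
    # the asynchronous software selection, and the long-distance transfer.
    built_events = queue.Queue(maxsize=1000)    # event builder -> HLT farm
    local_storage = queue.Queue(maxsize=10000)  # HLT output -> transfer to Tier-0

    def hlt_worker():
        """Asynchronously pull built events, keep a fraction, buffer them locally."""
        while True:
            event = built_events.get()
            if event is None:                   # sentinel: end of run
                local_storage.put(None)
                return
            if event["l1_id"] % 100 == 0:       # stand-in for the HLT decision (~1% accept)
                local_storage.put(event)

    def transfer_worker():
        """Drain the local buffer towards Tier-0 at whatever rate the link allows."""
        while True:
            event = local_storage.get()
            if event is None:
                return
            # send_to_tier0(event)              # hypothetical long-distance transfer call

    workers = [threading.Thread(target=hlt_worker), threading.Thread(target=transfer_worker)]
    for w in workers:
        w.start()

    for l1_id in range(10_000):                 # stand-in for the synchronous data flow
        built_events.put({"l1_id": l1_id, "payload": b"..."})
    built_events.put(None)                      # end of run

    for w in workers:
        w.join()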

The Phase-2 CMS Upgrade
Baseline Architecture of the DAQ Upgrade
Timing and Trigger Control and Distribution System
Data To Surface
Event Building
HLT Data Distribution and Collection
Co-processors for the HLT
Storage System
Findings
Conclusions