Abstract

The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high level trigger (HLT) farm. The HLT farm selects interesting events for storage and offline analysis at a rate of around 1 kHz. The DAQ system has been redesigned during the accelerator shutdown in 2013/14. The motivation is twofold: first, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime by the time the LHC restarts; second, in order to handle higher LHC luminosities and event pileup, a number of sub-detectors will be upgraded, increasing the number of readout channels and replacing the off-detector readout electronics with a μTCA implementation. The new DAQ architecture will take advantage of the latest developments in the computing industry. For data concentration, 10/40 Gb/s Ethernet technologies will be used, as well as an FPGA implementation of a reduced TCP/IP for reliable transport between custom electronics and commercial computing hardware. A 56 Gb/s Infiniband FDR Clos network with a throughput of ~4 Tb/s has been chosen for the event builder. The HLT processing is entirely file based. This decouples the DAQ and HLT systems and allows the HLT software to be used in the same way as for offline processing. The fully built events are sent to the HLT over 1/10/40 Gb/s Ethernet via network file systems. HLT-accepted events and monitoring metadata are collected hierarchically and stored in a global file system. This paper presents the requirements, technical choices, and performance of the new system.
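
To illustrate the file-based HLT handoff described above, the sketch below shows one way such a decoupling could look: the event builder writes fully built events as files onto a network file system, and an independent HLT process polls that area, applies its selection, and moves accepted events onward for hierarchical collection. This is a minimal illustration only, not CMS code; the directory names and the accept() predicate are hypothetical placeholders.

```python
# Minimal sketch (not CMS code) of a file-based handoff between the event
# builder and the HLT: the builder writes fully built events as files on a
# network file system, and an independent HLT process polls the directory,
# filters events, and keeps accepted ones for hierarchical collection.
import os
import shutil
import time

BU_OUTPUT_DIR = "/fff/ramdisk/run000001"   # hypothetical builder-unit output area
HLT_ACCEPT_DIR = "/fff/output/run000001"   # hypothetical per-node accept area

def accept(event_file: str) -> bool:
    """Stand-in for the HLT selection (~1 kHz kept out of 100 kHz built)."""
    return hash(event_file) % 100 == 0      # placeholder: keep roughly 1% of events

def hlt_loop(poll_interval: float = 0.5) -> None:
    """Poll the builder output, select events, move accepted ones onward."""
    seen = set()
    while True:
        for name in sorted(os.listdir(BU_OUTPUT_DIR)):
            if not name.endswith(".raw") or name in seen:
                continue
            seen.add(name)
            path = os.path.join(BU_OUTPUT_DIR, name)
            if accept(path):
                shutil.move(path, os.path.join(HLT_ACCEPT_DIR, name))
            else:
                os.remove(path)
        time.sleep(poll_interval)
```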

Highlights

  • The fully built events are sent to the high level trigger (HLT) over 1/10/40 Gb/s Ethernet via network file systems

  • The Compact Muon Solenoid (CMS) experiment [1, 2] at CERN’s Large Hadron Collider is one of two large general-purpose experiments exploring a wide range of physics at the TeV scale

  • At a nominal event size of 1 MB, the CMS data acquisition (DAQ) system for Run-1 was designed to handle a throughput of 100 GB/s, making it the highest-throughput DAQ system in high-energy physics to date


Summary

INTRODUCTION

The Compact Muon Solenoid (CMS) experiment [1, 2] at CERN’s Large Hadron Collider is one of two large general-purpose experiments exploring a wide range of physics at the TeV scale. At a nominal event size of 1 MB, the CMS DAQ system for Run-1 was designed to handle a throughput of 100 GB/s, making it the highest-throughput DAQ system in high-energy physics to date. In this paper we present the final design of the new DAQ system that is currently being installed, and report on performance measurements, including first measurements in the production system. In each section we report on the requirements, the design, and the latest performance measurements.
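
As a back-of-the-envelope check of the figures quoted above (nominal 1 MB events built at a 100 kHz level-1 accept rate, with roughly 1 kHz kept by the HLT), the short sketch below reproduces the 100 GB/s event-builder throughput. The variable names are ours, and the ~1 GB/s figure to storage is simply the product of the quoted accept rate and event size, not a number taken from the paper.

```python
# Back-of-the-envelope check of the quoted design figures.
event_size_bytes = 1e6          # nominal event size: 1 MB
l1_rate_hz = 100e3              # level-1 accept rate into the event builder
hlt_accept_rate_hz = 1e3        # events kept for storage and offline analysis

builder_throughput = event_size_bytes * l1_rate_hz           # 1e11 B/s = 100 GB/s
storage_throughput = event_size_bytes * hlt_accept_rate_hz   # 1e9 B/s  = 1 GB/s

print(f"event builder: {builder_throughput / 1e9:.0f} GB/s")
print(f"to storage:    {storage_throughput / 1e9:.0f} GB/s")
```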

MAIN DESIGN PARAMETERS
Clos network of 18 topology switches
THE NEW FRONT-END-READOUT OPTICAL LINK
THE CORE EVENT BUILDER
DATA COLLECTION AND STORAGE
STATUS AND OUTLOOK
