Abstract

During the LHC Long Shutdown 1, the CMS Data Acquisition system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. This approach provides additional decoupling between the HLT algorithms and the input and output data flow. All the metadata needed for bookkeeping of the data flow and the HLT process lifetimes are also generated in the form of small “documents” using the JSON encoding, by either services in the flow of the HLT execution (for rates etc.) or watchdog processes. These “files” can remain memory-resident or be written to disk if they are to be used in another part of the system (e.g. for aggregation of output data). We discuss how this redesign improves the robustness and flexibility of the CMS DAQ and the performance of the system currently being commissioned for the LHC Run 2.
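For illustration only, the following is a minimal sketch (in Python) of how such a small JSON bookkeeping document could be produced and made visible atomically to another part of the system; the file names, field names and paths are hypothetical and this is not the actual CMS DAQ code:

    import json
    import os
    import tempfile

    def write_json_document(directory, name, payload):
        """Write a small JSON bookkeeping document atomically.

        The document is written to a temporary file first and then renamed,
        so a reader (e.g. an aggregation process) never sees a partial file.
        If 'directory' is a RAM-disk mount, the document stays memory-resident.
        """
        os.makedirs(directory, exist_ok=True)
        fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".tmp")
        with os.fdopen(fd, "w") as tmp:
            json.dump(payload, tmp)
        os.rename(tmp_path, os.path.join(directory, name))

    # Hypothetical per-lumisection rate document emitted alongside the HLT output.
    write_json_document(
        "/tmp/hlt_metadata",              # stand-in for a ramdisk mount point
        "run000001_ls0001_rates.json",    # illustrative naming scheme
        {"run": 1, "lumisection": 1, "processed": 56789, "accepted": 1234},
    )

Writing to a temporary name and renaming within the same directory is one common way to make the document appear atomically; under the assumptions above, the same pattern applies whether the target directory is a RAM disk (memory-resident) or a regular disk.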

Highlights

  • The CMS experiment is one of the two general-purpose detectors located at the LHC at CERN, Switzerland

  • The main reasons for this are the replacement of equipment at the end of its life cycle, the need to replace obsolete technologies, the ever-increasing CPU requirements of the High Level Trigger (HLT), demanding increased flexibility in integrating heterogeneous processor generations and architectures, and the requirement to accommodate new detector components and upgraded detector readouts exceeding the original Data Acquisition System (DAQ) specifications

  • Since Gigabit Ethernet switch ports were relatively inexpensive at the time DAQ1 was designed, each processing node was connected to the event-builder network and ran the event building locally, i.e. it combined the Builder Unit (BU) and Filter Unit (FU) functionality in each box


Summary

Introduction

The CMS experiment is one of the two general-purpose detectors located at the LHC at CERN, Switzerland. Raw data for accepted events are read out and assembled in two steps, using a complex of switched networks connecting the readout boards to a cluster of commercial computers running Linux. The main reasons for the redesign are the replacement of equipment at the end of its life cycle, the need to replace obsolete technologies, the ever-increasing CPU requirements of the HLT (related to the anticipated performance of the machine), demanding increased flexibility in integrating heterogeneous processor generations and architectures, and the requirement to accommodate new detector components and upgraded detector readouts exceeding the original DAQ specifications. Since Gigabit Ethernet switch ports were relatively inexpensive at the time DAQ1 was designed, each processing node was connected to the event-builder network and ran the event building locally, i.e. it combined the BU and FU functionality in each box. The system can remain evolutionary by allowing, for example, future high-performance very-many-core systems to be connected to the event builder using 40 GbE in a separate appliance.

The Run 1 DAQ system will henceforth be referred to as “DAQ1”.
