Abstract

The data acquisition (DAQ) system of the Compact Muon Solenoid (CMS) at CERN reads out the detector at the level-1 trigger accept rate of 100 kHz, assembles events with a bandwidth of 200 GB/s, provides these events to the high-level trigger running on a farm of about 30k cores, and records the accepted events. Comprising custom-built and cutting-edge commercial hardware and several thousand instances of software applications, the DAQ system is complex in itself and failures cannot be completely excluded. Moreover, problems in the readout of the detectors, in the first-level trigger system or in the high-level trigger may provoke anomalous behaviour of the DAQ system which sometimes cannot easily be differentiated from a problem in the DAQ system itself. In order to achieve high data-taking efficiency with operators from the entire collaboration and without relying too heavily on the on-call experts, an expert system, the DAQ-Expert, has been developed that can pinpoint the source of most failures and give advice to the shift crew on how to recover in the quickest way. The DAQ-Expert constantly analyzes monitoring data from the DAQ system and the high-level trigger by making use of logic modules written in Java that encapsulate the expert knowledge about potential operational problems. The results of the reasoning are presented to the operator in a web-based dashboard, may trigger sound alerts in the control room, and are archived for post-mortem analysis in a web-based timeline browser. We present the design of the DAQ-Expert and report on the operational experience since 2017, when it was first put into production.

Highlights

  • The Large Hadron Collider (LHC) at CERN typically provides collisions of protons or heavy nuclei around the clock on about 200 days per year

  • We present the design of the DAQ-Expert and report on the operational experience since 2017, when it was first put into production

  • There remain certain types of data taking problems that cannot be recovered by the above automatisms because they affect multiple sub-systems or because recovery requires expert intervention. Most of these problems can be diagnosed to a certain level by examining the monitoring information from a few central systems: the Central Data Acquisition system (CDAQ), the Timing and Control Distribution System (TCDS) [9] and the high-level trigger running on the file-based filter farm (F3) [10]


Introduction

The Large Hadron Collider (LHC) at CERN typically provides collisions of protons or heavy nuclei around the clock on about 200 days per year. Recovery of frequent and well-known problems local to a single sub-system - such as single-event upsets in the electronics - has been automated using the so-called soft error recovery mechanism, which dramatically decreased down times since the end of Run-1 of the LHC [7] (46 hours of down time avoided in 2012 alone). Another layer, the Level-0 Automator [8], has been added: it allows operators to perform complex recovery actions with a single command.

There remain certain types of data taking problems that cannot be recovered by the above automatisms because they affect multiple sub-systems or because recovery requires expert intervention. Most of these problems can be diagnosed to a certain level by examining the monitoring information from a few central systems: the Central Data Acquisition system (CDAQ), the Timing and Control Distribution System (TCDS) [9] and the high-level trigger running on the file-based filter farm (F3) [10]. We report on this new tool and on operational experience in the present paper.
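To make the approach concrete, the following is a minimal sketch, in Java (the language the paper names for the logic modules), of how a module encapsulating one piece of expert knowledge might evaluate a monitoring snapshot. All names here (`LogicModule`, `NoRateModule`, the snapshot keys) are illustrative assumptions, not the actual DAQ-Expert API.

```java
import java.util.Map;

// Hypothetical interface for a logic module: it inspects one
// monitoring snapshot and, when its failure condition holds,
// offers recovery advice to the shift crew.
interface LogicModule {
    boolean satisfied(Map<String, Double> snapshot);
    String advice();
}

// Illustrative module: fires when a run is ongoing but the
// level-1 trigger accept rate has dropped to zero.
class NoRateModule implements LogicModule {
    public boolean satisfied(Map<String, Double> snapshot) {
        return snapshot.getOrDefault("runOngoing", 0.0) > 0.0
            && snapshot.getOrDefault("l1Rate", 0.0) == 0.0;
    }
    public String advice() {
        return "No L1 rate while running: check TCDS and sub-system backpressure.";
    }
}

public class ExpertSketch {
    public static void main(String[] args) {
        LogicModule module = new NoRateModule();
        Map<String, Double> snapshot = Map.of("runOngoing", 1.0, "l1Rate", 0.0);
        if (module.satisfied(snapshot)) {
            System.out.println(module.advice());
        }
    }
}
```

In the real system many such modules would run against each snapshot, and their combined results would feed the dashboard and the root-cause reasoning described in the following sections.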

The DAQ-Expert tool
The Snapshot Service
The Reasoning Service
Determining the root cause
Technologies used
Operational experience
Findings
Conclusion
