Abstract
The ALICE Experiment at the CERN LHC (Large Hadron Collider) is undertaking a major upgrade during LHC Long Shutdown 2 in 2019–2021. The raw data input from the ALICE detectors will then increase a hundredfold, up to 3.5 TB/s. In order to cope with such a large amount of data, a new online-offline computing system, called O2, will be deployed. One of the key software components of the O2 system will be the data Quality Control (QC), which replaces the existing online Data Quality Monitoring and offline Quality Assurance. It involves the gathering of monitored data, its analysis by user-defined algorithms, and its visualization, in both the synchronous and asynchronous parts of the O2 system. This paper presents the architecture and design, as well as the latest and upcoming features, of the ALICE O2 QC. The results of the extensive benchmarks carried out for each component of the system are then summarized. Finally, the paper discusses the adoption of this tool within the ALICE Collaboration and the measures taken, in synergy with the detector teams, to develop efficient monitoring modules.
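To illustrate how a user-defined algorithm fits into the gather-analyze-visualize flow described above, the sketch below shows a simplified monitoring task that consumes blocks of data and publishes a monitored quantity. The type and function names (QcTaskInterface, MonitorObject, monitorData, publish) are hypothetical placeholders invented for this example and do not reproduce the actual O2 QC API.

```cpp
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical stand-ins for framework types; the real O2 QC framework
// defines its own task interface and monitored-object containers.
struct DataBlock {
    std::vector<double> samples;
};

struct MonitorObject {
    std::string name;
    double meanAdc = 0.0;  // toy quantity published for checking and visualization
};

// A user-defined QC algorithm: it receives slices of the data stream,
// updates its monitoring quantities, and publishes them to the framework.
class QcTaskInterface {
public:
    virtual ~QcTaskInterface() = default;
    virtual void monitorData(const DataBlock& block) = 0;
    virtual MonitorObject publish() const = 0;
};

class MeanAdcTask : public QcTaskInterface {
public:
    void monitorData(const DataBlock& block) override {
        for (double s : block.samples) {
            sum_ += s;
            ++count_;
        }
    }
    MonitorObject publish() const override {
        return {"exampleDetector/meanAdc", count_ ? sum_ / count_ : 0.0};
    }

private:
    double sum_ = 0.0;
    std::size_t count_ = 0;
};

int main() {
    // Gathering: the framework would feed data blocks continuously;
    // here a single toy block stands in for the synchronous stream.
    MeanAdcTask task;
    DataBlock block{{10.0, 12.0, 11.0}};
    task.monitorData(block);

    // Publication: the resulting object would be picked up for visualization.
    MonitorObject mo = task.publish();
    std::cout << mo.name << " = " << mo.meanAdc << '\n';
}
```

In the real framework the scheduling, transport, and storage of such objects are handled by the O2 system itself; the sketch only conveys the division of labour between framework and user code.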
Highlights
ALICE [1] is the detector designed to cope with the high particle multiplicities produced in heavy-ion collisions at the CERN LHC to study the physics of strongly interacting matter and the quark–gluon plasma [2]
The physics topics addressed by the ALICE upgrade [3] are characterized by a low signal-to-noise ratio, making triggering techniques very inefficient
The Time Projection Chamber requires the implementation of a continuous readout in order to keep up with the 50 kHz interaction rate
Summary
ALICE [1] is the detector designed to cope with the high particle multiplicities produced in heavy-ion collisions at the CERN LHC to study the physics of strongly interacting matter and the quark–gluon plasma [2]. To meet these requirements, a new Online and Offline Computing system, called O2 [4], has been developed. It is characterized by the continuous readout of all interactions, their compression by means of partial online reconstruction and calibration, and the sharing of common computing resources during and after data taking. In this scheme, only the reconstructed data is written to disk while the raw data is discarded. A Data Processing Layer [5] software framework is being developed on top of the FairMQ data transport layer [6].
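As a rough illustration of the data-flow style behind a Data Processing Layer, the sketch below declares a tiny two-stage pipeline in which each processor is described by its inputs, outputs, and a processing callback, which a transport layer such as FairMQ could then map onto devices and message channels. The type names (ProcessorSpec, Workflow) and the callback signature are simplified placeholders chosen for this example, not the actual DPL API.

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Simplified placeholders for a data-flow description: each processor
// declares the data it consumes and produces, plus a callback that the
// framework would schedule whenever its inputs are complete.
struct ProcessorSpec {
    std::string name;
    std::vector<std::string> inputs;
    std::vector<std::string> outputs;
    std::function<double(double)> process;  // toy payload: a single number
};

using Workflow = std::vector<ProcessorSpec>;

// Analogue of declaring a workflow; a real framework would translate this
// description into processes connected by message queues.
Workflow defineWorkflow() {
    return {
        {"compressor", {"raw"}, {"compressed"},
         [](double raw) { return raw * 0.5; }},  // stand-in for online compression
        {"qc-task", {"compressed"}, {"qc-objects"},
         [](double v) { std::cout << "QC sees " << v << '\n'; return v; }},
    };
}

int main() {
    // A trivial "scheduler": push one value through the declared pipeline in order.
    double payload = 42.0;
    for (const auto& proc : defineWorkflow()) {
        payload = proc.process(payload);
    }
}
```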