Abstract

The ALICE experiment has undergone a major upgrade for LHC Run 3 and will collect data at an interaction rate 50 times larger than before. The new computing scheme for Run 3 replaces the traditionally separate online and offline frameworks with a single unified one, called O2. Processing happens in two phases. During data taking, a synchronous processing phase performs data compression, calibration, and quality control on the online computing farm, and the output is stored on an onsite disk buffer. When there is no beam in the LHC, the same computing farm is used for the asynchronous reprocessing of the data, which yields the final reconstruction output. The O2 project consists of three main parts. The Event Processing Nodes (EPN), equipped with GPUs, deliver the bulk of the computing capacity and perform the majority of the reconstruction and calibration. The First Level Processors (FLP) receive the data from the detectors via optical links and perform local processing where needed, which can optionally happen in the user logic of the FPGA-based readout card. Between the FLP and EPN farms, the data is distributed over the network such that each EPN receives complete collision data for processing. The Physics and Data Processing (PDP) group develops the software framework and the reconstruction and calibration algorithms. The current O2 setup is capable of handling in real time the peak data rate foreseen for Pb–Pb data taking at a 50 kHz interaction rate.
