Abstract

ALICE is one of the four major LHC experiments at CERN. When the accelerator enters the Run 3 data-taking period, starting in 2021, ALICE expects almost 100 times more central Pb-Pb collisions than today, resulting in a large increase in data throughput. In order to cope with this new challenge, the collaboration had to extensively rethink the whole data processing chain, with a tighter integration between the Online and Offline computing worlds. Such a system, code-named ALICE O2, is being developed in collaboration with the FAIR experiments at GSI. It is based on the ALFA framework, which provides a generalized implementation of the ALICE High Level Trigger approach, designed around distributed software entities coordinating and communicating via message passing. We highlight our efforts to integrate ALFA within the ALICE O2 environment. We analyze the challenges arising from the different running environments for production and development, and conclude on the requirements for a flexible and modular software framework. In particular, we present the ALICE O2 Data Processing Layer, which deals with ALICE-specific requirements in terms of the Data Model. Its main goal is to reduce the complexity of developing algorithms and managing a distributed system, thereby significantly simplifying the work of the large majority of ALICE users.
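
To make the declarative approach of the Data Processing Layer more concrete, the sketch below shows how a trivial two-device workflow might be declared on top of the O2 framework. It is a minimal illustration only: the header name, the "TST"/"DATA" identifiers and the empty processing callbacks are assumptions made for the example and are not taken from the text.

    // Minimal sketch of a DPL workflow declaration (illustrative, not from the paper).
    // Assumes the public O2 framework headers; "TST"/"DATA" are placeholder identifiers.
    #include "Framework/runDataProcessing.h"

    using namespace o2::framework;

    WorkflowSpec defineDataProcessing(ConfigContext const&)
    {
      return WorkflowSpec{
        DataProcessorSpec{
          "producer",                              // device name used by the framework
          Inputs{},                                // no inputs: this device only produces
          Outputs{OutputSpec{"TST", "DATA"}},      // declares the data it will publish
          AlgorithmSpec{[](ProcessingContext& ctx) {
            // create and publish a message here; routing is handled by the
            // framework and the underlying ALFA/FairMQ transport
          }}},
        DataProcessorSpec{
          "consumer",
          Inputs{InputSpec{"in", "TST", "DATA"}},  // subscribes to the producer's output
          Outputs{},
          AlgorithmSpec{[](ProcessingContext& ctx) {
            // ctx.inputs() gives access to the received message
          }}}};
    }

The point of the sketch is the division of labour the abstract describes: a user only declares inputs, outputs and a processing callback, while topology construction, scheduling and message passing are left to the framework and the transport layer.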

Highlights

  • The Large Hadron Collider (LHC) will undergo a major technical stop between 2018 and 2020, which will result in a five-fold increase in the heavy-ion interaction rate, from the current 10 kHz to the planned 50 kHz

  • Due to this upgrade the ALICE experiment [1], which of the four major LHC experiments is the one dedicated to heavy-ion physics, will undergo a major upgrade of its own to cope with the increased number of collisions [2]

  • A major difference from the current setup is that, because of the high latency of gas detectors like the Time Projection Chamber (TPC), it will be impossible to sustain the current (O(1 kHz)) triggered mode; instead, data will be collected in continuous readout mode


Summary

ALICE Experiment in Run 3

The Large Hadron Collider (LHC) will undergo a major technical stop between 2018 and 2020, which will result in a five-fold increase in the heavy-ion interaction rate, from the current 10 kHz to the planned 50 kHz. A major difference from the current setup is that, because of the high latency of gas detectors like the Time Projection Chamber (TPC), it will be impossible to sustain the current (O(1 kHz)) triggered mode; instead, data will be collected in continuous readout mode. Each timeframe will be on average 10 GB in size, and each EPN (Event Processing Node) is expected to perform reconstruction on it and use the derived quantities (e.g. track parameters) as a means to compress the raw-data information, reducing each “compressed timeframe” to an average of 2 GB, for an aggregate rate of 100 GB/s to persistent storage. A later step, asynchronous with respect to data taking, will use the EPN farm to reprocess all the data taken during synchronous processing, using final calibrations and reconstructing those parts of the detector that can afford late reconstruction. A major difference between Run 3 and the Run 1–Run 2 data processing is the blending of the traditional roles of Online and Offline, which will share the exact same algorithms.
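
The throughput figures quoted above can be cross-checked with a back-of-the-envelope calculation; the short program below only rearranges the numbers given in the summary (10 GB raw timeframes, 2 GB compressed timeframes, 100 GB/s to storage), so all printed values are derived quantities rather than figures stated in the text.

    // Back-of-the-envelope check of the Run 3 rates quoted in the summary.
    // All inputs come from the text; everything printed is derived from them.
    #include <cstdio>

    int main()
    {
      const double raw_tf_gb        = 10.0;   // average raw timeframe size [GB]
      const double compressed_tf_gb = 2.0;    // average compressed timeframe size [GB]
      const double storage_rate_gbs = 100.0;  // aggregate rate to persistent storage [GB/s]

      const double compression_factor = raw_tf_gb / compressed_tf_gb;         // ~5x
      const double timeframes_per_s   = storage_rate_gbs / compressed_tf_gb;  // ~50 TF/s
      const double epn_input_gbs      = timeframes_per_s * raw_tf_gb;         // ~500 GB/s

      std::printf("compression factor : %.1fx\n", compression_factor);
      std::printf("timeframe rate     : %.0f timeframes/s (about %.0f ms each)\n",
                  timeframes_per_s, 1000.0 / timeframes_per_s);
      std::printf("EPN farm input     : %.0f GB/s of raw timeframes\n", epn_input_gbs);
      return 0;
    }

Taken at face value, these figures imply that the synchronous stage must absorb roughly 500 GB/s of raw timeframes in order to deliver the 100 GB/s of compressed timeframes to persistent storage.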

ALICE O2 Framework
The Transport Layer
The O2 Data Model
The Data Processing Layer
Processing inside the DPL
Examples of using the DPL
Conclusions and future work