Abstract

ALICE, the general-purpose heavy-ion collision detector at the CERN LHC, is designed to study the physics of strongly interacting matter using proton-proton, nucleus-nucleus and proton-nucleus collisions at high energies. The ALICE experiment will be upgraded during the Long Shutdown 2 in order to exploit the full scientific potential of the future LHC. The requirements will then differ significantly from the original design of the experiment and will require major changes to the detector read-out. The main physics topics addressed by the ALICE upgrade are characterised by rare processes with a very small signal-to-background ratio, requiring very large statistics of fully reconstructed events. In order to keep up with the 50 kHz interaction rate, the upgraded detectors will be read out continuously. However, triggered read-out will be used by some detectors and for commissioning and some calibration runs. The total data volume collected from the detectors will increase significantly, reaching a sustained data throughput of up to 3 TB/s, with the zero-suppression of the TPC data performed after the data transfer to the detector read-out system. A flexible mechanism of bandwidth throttling will allow the system to gracefully degrade the effective rate of recorded interactions in case of saturation of the computing system. This paper includes a summary of these updated requirements and presents a refined design of the detector read-out and of the interface with the detectors and the online systems. It also elaborates on the system behaviour in continuous and triggered read-out and defines ways to throttle the data read-out in both cases.

Highlights

  • The ALICE experiment will be upgraded during the Long Shutdown 2 in order to exploit the full scientific potential of the future LHC

  • The total data volume collected from the detectors will increase significantly, reaching a sustained data throughput of up to 3 TB/s, with the zero-suppression of the Time Projection Chamber (TPC) data performed after the data transfer to the detector read-out system

  • The data produced by the Front-End Cards (FEC) are transferred to the Common Read-out Units (CRU), which are the interfaces to a first farm of computers, the First-Level Processors (FLP), where an initial data volume reduction is performed

Introduction

The ALICE experiment [1] will be upgraded during the Long Shutdown 2 (LS2) in 2019-20 in order to exploit the full scientific potential of the future LHC. A flexible mechanism of bandwidth throttling has been introduced to gracefully degrade the effective rate of recorded interactions in case of saturation of the computing system. This mechanism measures the saturation of the read-out system and discards a fraction of the data according to a predefined policy. The data produced by the Front-End Cards (FEC) are transferred to the Common Read-out Units (CRU), which are the interfaces to a first farm of computers, the First-Level Processors (FLP), where an initial data volume reduction is performed. Data produced by the detector FECs are transferred to the read-out cards (CRU or CRORC) in a continuous or triggered read-out mode over the GBT [4] or DDL [5] based read-out links. Several streams of HBFs may be aggregated on each FLP and buffered in memory.
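The throttling behaviour described above can be sketched in a few lines. This is an illustrative model only, not ALICE O2 code: the class and parameter names (`FlpBuffer`, `high_watermark`, `drop_fraction`) are assumptions for the example, and the policy shown (randomly discarding whole buffered frames once occupancy passes a watermark) is one simple instance of a "predefined policy" that degrades the recorded rate gracefully while always dropping complete data units.

```python
import random


class FlpBuffer:
    """Toy model of an FLP memory buffer receiving data frames from several links."""

    def __init__(self, capacity, high_watermark=0.8, drop_fraction=0.5, seed=0):
        self.capacity = capacity              # maximum number of frames held in memory
        self.high_watermark = high_watermark  # occupancy fraction that triggers throttling
        self.drop_fraction = drop_fraction    # fraction of frames discarded while saturated
        self.stored = []
        self.dropped = 0
        self._rng = random.Random(seed)

    def saturated(self):
        # The system "measures the saturation" as buffer occupancy vs. the watermark.
        return len(self.stored) >= self.high_watermark * self.capacity

    def accept(self, frame):
        """Apply the throttling policy; whole frames are dropped, never partial data."""
        if len(self.stored) >= self.capacity:
            self.dropped += 1  # hard limit reached: the frame must be discarded
            return False
        if self.saturated() and self._rng.random() < self.drop_fraction:
            self.dropped += 1  # graceful degradation: discard a predefined fraction
            return False
        self.stored.append(frame)
        return True
```

With `drop_fraction=1.0` the buffer fills only to the watermark and then rejects everything, i.e. the effective recorded rate degrades in a controlled way instead of the buffer overflowing.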

