Abstract

Data acquisition (DAQ) systems are a key component for successful data taking in any experiment. The DAQ is a complex distributed computing system that coordinates all operations, from the selection of interesting events to their delivery to storage. For the High Luminosity upgrade of the Large Hadron Collider, the experiments at CERN need to meet challenging requirements to record data with a much higher occupancy in the detectors. The DAQ system will receive and deliver data at a significantly increased trigger rate of one million events per second and a throughput of terabytes of data per second. An effective way to meet these requirements is to decouple real-time data acquisition from event selection. Data fragments can be temporarily stored in a large distributed key-value store. Fragments belonging to the same event can then be queried on demand by the data selection processes. Implementing such a model relies on a proper combination of emerging technologies, such as persistent memory, NVMe SSDs, scalable networking, and data structures, as well as high-performance, scalable software. In this paper, we present DAQDB (Data Acquisition Database), an open-source implementation of this previously presented design, with an extensive evaluation of the approach, from single-node to distributed performance. Furthermore, we complement our study with a description of the challenges faced and the lessons learned while integrating DAQDB with the existing software framework of the ATLAS experiment.
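The following minimal C++ sketch illustrates the keying scheme behind this model. It is not DAQDB's actual API: the FragmentKey layout, the FragmentStore class, and the single std::map standing in for the distributed store are assumptions made purely for exposition.

#include <cstddef>   // std::byte
#include <cstdint>
#include <map>
#include <tuple>
#include <utility>   // std::move
#include <vector>

// Illustrative composite key: fragments from all detector sources that
// belong to the same event share the same event_id.
struct FragmentKey {
    uint64_t event_id;   // assigned by the trigger/readout system
    uint16_t source_id;  // identifies the detector readout source
    bool operator<(const FragmentKey& o) const {
        return std::tie(event_id, source_id) <
               std::tie(o.event_id, o.source_id);
    }
};

// Minimal stand-in for a distributed key-value store; in the real system
// the data would live in persistent memory and NVMe across many nodes.
class FragmentStore {
public:
    // Readout processes store fragments as they arrive.
    void put(const FragmentKey& k, std::vector<std::byte> data) {
        store_[k] = std::move(data);
    }
    // "On-demand" query: gather all fragments of one event for selection.
    std::vector<std::vector<std::byte>> getEvent(uint64_t event_id) const {
        std::vector<std::vector<std::byte>> fragments;
        for (auto it = store_.lower_bound(FragmentKey{event_id, 0});
             it != store_.end() && it->first.event_id == event_id; ++it) {
            fragments.push_back(it->second);
        }
        return fragments;
    }
private:
    std::map<FragmentKey, std::vector<std::byte>> store_;
};

Because fragments with the same event_id sort adjacently, a selection process can collect a whole event with a single range scan, without any dedicated event-building stage in the real-time data path.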

Highlights

  • Decoupling real-time data acquisition from event selection is a promising approach for future data acquisition (DAQ) systems of large-scale experiments like the Large Hadron Collider (LHC) at CERN

  • As a result, processing timeouts can be relaxed, sensor calibration can be improved, enabling a purer selection of events, and the use of computing resources can be optimized by designing for the average instead of the peak data rate. These factors can be important for the High Luminosity upgrades of the LHC, for which the DAQ systems will need to digest a few terabytes of data each second

  • The evaluation server is equipped with 3 TB of Intel® Optane™ persistent memory (PMem) 100 series and 96 GB of DDR4 memory per CPU, but the Data Acquisition Database (DAQDB) is evaluated using only one CPU to avoid cross-CPU effects

Introduction

Decoupling real-time data acquisition from event selection is a promising approach for future data acquisition (DAQ) systems of large-scale experiments like the Large Hadron Collider (LHC) at CERN. As a result, processing timeouts can be relaxed, sensor calibration can be improved, enabling a purer selection of events, and the use of computing resources can be optimized by designing for the average instead of the peak data rate. These factors can be important for the High Luminosity upgrades of the LHC, for which the DAQ systems will need to digest a few terabytes of data each second. In this paper we present the first performance evaluation of our open-source implementation of this design, the Data Acquisition Database (DAQDB) [2].
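To put these figures in perspective, consider a back-of-the-envelope estimate based on the order-of-magnitude numbers above (the 5 TB/s figure is assumed for illustration, not a measurement from this paper): at a trigger rate of 10^6 events per second and an aggregate throughput of 5 TB/s, the average event size is 5 TB/s ÷ 10^6 events/s = 5 MB per event. Buffering even one minute of data while selection proceeds asynchronously then requires on the order of 5 TB/s × 60 s = 300 TB, which is why a large distributed store built on persistent memory and NVMe SSDs is attractive compared to DRAM-only buffering.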

Logical event building with DAQDB
DAQDB evaluation
Evaluation configurations
Multi-core scalability
Dependence on value size
Integration with ATLAS TDAQ framework
Conclusion