Abstract

The data acquisition system of the ATLAS experiment, a major experiment at the Large Hadron Collider (LHC) at CERN, will go through a major upgrade in the next decade. The upgrade is driven by experimental physics requirements, calling for increased data rates on the order of 6 TB/s, compared with the 160 GB/s of the existing system. Among the changes in the upgraded system will be a very large buffer, with a projected size on the order of 70 PB. The buffer's role will be to decouple data production from on-line data processing, storing data for periods of up to 24 hours until it can be analyzed by the event processing system. The larger buffer will enable a new data recording strategy, providing additional margins to handle variable data rates. At the same time, it will allow sensible trade-offs between buffering space and on-line processing capabilities. This compromise between the two resources is possible because the data production cycle includes periods during which the experiment does not produce data. In this paper we analyze the consequences of such trade-offs and introduce a tool that allows a detailed exploration of different strategies for resource provisioning. It is based on a model of the upgraded data acquisition system, implemented in a simulation framework. From this model it is possible to obtain insight into the dynamics of the running system. Given predefined resource constraints, we provide bounds for the provisioning of buffering space and on-line processing requirements.
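The trade-off described above can be illustrated with a minimal time-stepped sketch: during a fill the buffer accumulates the difference between the production rate and the processing rate, and during the interfill period the backlog is drained. All parameter values below (fill and interfill lengths, rates) are illustrative assumptions, not the paper's calibrated model.

```python
def simulate_buffer(fill_hours=12.0, interfill_hours=4.0,
                    production_rate_tb_s=6.0, processing_rate_tb_s=4.5,
                    dt_s=60.0, n_cycles=3):
    """Time-stepped buffer occupancy over repeated fill/interfill cycles.

    During a fill, data arrive at production_rate_tb_s and are drained at
    processing_rate_tb_s; during the interfill period the experiment
    produces no data, so the backlog is worked off.
    Returns the peak buffer occupancy in TB.
    """
    occupancy = 0.0
    peak = 0.0
    for _ in range(n_cycles):
        # Fill: net growth when production outpaces processing.
        t = 0.0
        while t < fill_hours * 3600:
            occupancy = max(occupancy +
                            (production_rate_tb_s - processing_rate_tb_s) * dt_s,
                            0.0)
            peak = max(peak, occupancy)
            t += dt_s
        # Interfill: no production, processing drains the buffer.
        t = 0.0
        while t < interfill_hours * 3600:
            occupancy = max(occupancy - processing_rate_tb_s * dt_s, 0.0)
            t += dt_s
    return peak

print(f"peak buffer occupancy: {simulate_buffer():.0f} TB")
```

With these assumed numbers the peak occupancy comes out in the tens of petabytes, in line with the ~70 PB scale the abstract mentions; lowering the processing rate raises the required buffer size, which is exactly the provisioning trade-off the tool explores.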

Highlights

  • Colliding beam High Energy Particle Physics experiments study physical phenomena by measuring subatomic particles

  • The trigger system of the ATLAS data-acquisition system (DAQ) system [2, 4] is implemented in two stages: while the first stage consists of custom electronics with strict real-time requirements, the second stage is based on a general-purpose multicore computing system, interconnected using an Ethernet network

  • Running a new simulation that includes a calibration latency of 1.7 ms for the overhead, giving a total processing time of 21.7 ms instead of 20 ms, produces results in which the simulation and the small-scale emulator agree


Summary

INTRODUCTION

Colliding-beam High Energy Particle Physics experiments study physical phenomena by measuring subatomic particles. Because of the vast amounts of data involved, and because the experiments aim to study new, never-before-examined physical phenomena, data have to be recorded in permanent storage so that scientists can iterate over the results. The trigger system of the ATLAS DAQ system [2, 4] is implemented in two stages: the first stage consists of custom electronics with strict real-time requirements, while the second stage is based on a general-purpose multicore computing system interconnected by an Ethernet network. The future upgrade of the ATLAS experiment, so-called “Phase2”, will have to deal with an increase in maximum data rates on the order of 30x compared to the existing system.

ATLAS Online Luminosity
BACKGROUND
The Interfill Period
ATLAS Upgrade Design
Other DAQ systems
SIMULATION MODEL CONSTRUCTION
Setting Parameters for Single Buffer and Split Buffer Simulation Models
Model implementation
SIMULATION MODEL VALIDATION USING OPERATIONAL DATA
VALIDATION OF THE STORAGE BUFFER
Small-scale Data-Acquisition Emulator
Single Execution of the Small-Scale DAQ System
Non-zero Overhead Latency
OPERATIONAL ENVELOPE
Data Production at Constant Rate
Data Production as a Cycle
Data Processing Variance
RELATED WORK
CONCLUSIONS