Abstract

Today's large-scale science projects routinely face the challenge of processing the large data flows produced by their experiments. The ATLAS detector records proton-proton collisions delivered by the Large Hadron Collider (LHC) at CERN every 50 ns, resulting in a total data flow of about 10 Pb/s. These data must be reduced to science data products for further analysis, so very fast decisions must be executed to reduce this large amount of data at high rates. Supporting data movement at this scale requires the development and improvement of high-throughput electronics. By 2022, the upgraded LHC will provide collisions at rates at least 10 times higher than those of today due to its increased luminosity. This will require a complete redesign of the read-out electronics and Processing Units (PUs) in the Tile Calorimeter (TileCal) of the ATLAS experiment. A general-purpose, high-throughput PU has been developed for the TileCal at CERN using several ARM processors in a cluster configuration. The PU is capable of handling large data throughput and applying advanced operations at high rates. This system has been proposed for the fixed-target experiment at the NICA complex to handle the first-level processing and event building. The aim of this work is to examine the architecture of the data acquisition (DAQ) system of the fixed-target experiment at the NICA complex at JINR by compiling the data-flow requirements of all its subcomponents. Furthermore, the characteristics of the VME DAQ modules used for control, triggering and data acquisition will be described in order to define a DAQ with maximum readout efficiency, no dead time, and data selection and compression.

Highlights

  • High energy physics experiments are confronted with the challenge of handling the volume of data recorded by particle detectors

  • One such detector is the ATLAS detector at the Large Hadron Collider (LHC), which records proton-proton collisions every 50 ns, resulting in a data output rate of about 10 Pb/s; with the anticipated upgrade of the machine, delivering collision rates about ten times higher at a beam energy of about 6.5 TeV (13 TeV collision energy), the data output rate will increase further [1]

  • The TileCal detector, a sub-detector of ATLAS used to measure the energy and position of hadrons [2], contains scintillating tiles that produce light when particles from a collision cross them; the light is converted to analogue signals by photomultipliers and then to digital signals by digitizers


1. Introduction

High energy physics experiments are confronted with the challenge of handling the volume of data recorded by particle detectors. The proposed particle detector (the BM@N detector, Fig. 1) will be used to record particles produced in the fixed-target experiment, combining high-precision track measurement with time-of-flight information for particle identification and total energy measurement for event characterization. These signals are sent to the data acquisition (DAQ) system for digitization, compression and selection by high-throughput processing units (PUs).
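The compression step described above can be illustrated with a toy example. The sketch below shows zero suppression, a common data-reduction technique in DAQ front-ends in which only digitized samples above a pedestal threshold are retained. It is a minimal illustration only; the function name and threshold are hypothetical and do not come from the BM@N or ATLAS software.

```python
def zero_suppress(samples, threshold=5):
    """Toy zero suppression: keep only (index, value) pairs whose
    digitized amplitude exceeds the pedestal threshold.
    The threshold value here is illustrative, not a real DAQ setting."""
    return [(i, v) for i, v in enumerate(samples) if v > threshold]

# A mostly-empty readout window with three samples above pedestal:
raw = [0, 1, 0, 42, 37, 2, 0, 0, 15, 1]
compressed = zero_suppress(raw)
print(compressed)  # -> [(3, 42), (4, 37), (8, 15)]
```

Discarding the below-threshold samples at the front end is what keeps the surviving data volume within the bandwidth of the downstream event-building stage.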
