Abstract

Data Acquisition (DAQ) systems for large high-energy physics (HEP) experiments in the eighties were designed to handle data rates of megabytes per second. The next generation of HEP experiments at CERN (European Laboratory for High Energy Physics) is being designed around the new Large Hadron Collider (LHC) project and will have to cope with gigabyte-per-second data flows. As a consequence, LHC experiments will require challenging new equipment for detector readout, event filtering, event building, and storage. The Fastbus- and VME-based tree architectures of the eighties run out of steam when applied to the LHC's requirements. New concepts and architectures from the nineties have replaced rack-mounted backplane buses with high-speed point-to-point links, abandoned centralized event building, and adopted switched networks and parallel architectures instead. Following these trends, and in the context of DAQ and trigger systems for LHC experiments, this paper summarizes the earlier architectures and presents the new concepts for DAQ.

