Abstract

We describe a pilot project (GAP – GPU Application Project) for the use of GPUs (Graphics Processing Units) in online triggering applications for High Energy Physics experiments. Two major trends can be identified in the development of trigger and DAQ systems for particle physics experiments: the massive use of general-purpose commodity systems such as commercial multicore PC farms for data acquisition, and the reduction of trigger levels implemented in hardware, towards a fully software data selection system ("trigger-less"). The innovative approach presented here aims at exploiting the parallel computing power of commercial GPUs to perform fast computations in software, not only in high-level triggers but also in earlier trigger stages. General-purpose computing on GPUs is emerging as a new paradigm in several fields of science, although so far applications have been tailored to the specific strengths of such devices as accelerators in offline computation. With the steady reduction of GPU latencies, and the increase in link and memory throughputs, the use of such devices for real-time applications in high energy physics data acquisition and trigger systems is becoming relevant. We discuss in detail the use of online parallel computing on GPUs for synchronous low-level triggers with fixed latency. In particular, we show preliminary results from a first test in the CERN NA62 experiment. The use of GPUs in high-level triggers is also considered, with the CERN ATLAS experiment taken as a case study of possible applications.

Highlights

  • The scientific project described in this paper is based on the use of Graphics Processing Units (GPUs) for scientific computation

  • The use of a custom Network Interface Card (NIC) driver allows the low-level trigger processing to be performed directly on a standard personal computer (PC), but the results, in terms of total latency, may be affected by the characteristics of the host PC, since the data transfer time and its fluctuations are controlled by the host computer

  • For the lowest trigger levels, work is needed to reduce the total-latency contribution of data transfer from the detectors to the GPU

Summary

Introduction

The scientific project described in this paper is based on the use of Graphics Processing Units (GPUs) for scientific computation. The use of a custom NIC driver allows the low-level trigger processing to be performed directly on a standard PC, but the results, in terms of total latency, may be affected by the characteristics of the host PC (motherboard, CPU, etc.), since the data transfer time and its fluctuations are controlled by the host computer. By adding to the APEnet+ design the logic to manage a standard GbE interface, NaNet is able to exploit the GPUDirect P2P capabilities of the NVIDIA Fermi/Kepler GPUs installed in the host PC to inject a UDP input data stream from the detector front-end directly into GPU memory, at rates compatible with the low-latency real-time requirements of the trigger system. NaNet CTRL is a hardware module in charge of managing the GbE flow by encapsulating the incoming UDP data stream.
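As a purely illustrative sketch (a software stand-in, not the NaNet hardware path described above), the receive-and-batch step of such a data flow can be pictured as gathering UDP datagrams from the detector front-end into one contiguous buffer, ready for a single bulk upload to GPU memory. The port number, batch size, and 8-byte "event" layout below are assumptions for illustration, not the NA62 data format.

```python
import socket
import struct

# Illustrative stand-in for the NaNet receive path: collect UDP datagrams
# from a (hypothetical) detector front-end and concatenate them into one
# contiguous buffer, ready for a single bulk transfer to GPU memory.
FE_PORT = 23456     # assumed front-end port, not from the paper
BATCH_SIZE = 4      # datagrams gathered per GPU batch (illustrative)

def receive_batch(sock, batch_size=BATCH_SIZE):
    """Block until batch_size datagrams arrive; return them as one buffer."""
    events = []
    for _ in range(batch_size):
        data, _addr = sock.recvfrom(2048)  # one datagram = one event here
        events.append(data)
    return b"".join(events)  # contiguous buffer for a single GPU upload

# Example on loopback, emulating the front-end with a local sender:
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", FE_PORT))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(BATCH_SIZE):
    # Hypothetical 8-byte event: a sequence number and a marker word.
    tx.sendto(struct.pack("<II", i, 0xCAFE), ("127.0.0.1", FE_PORT))
buf = receive_batch(rx)
print(len(buf))  # 4 events x 8 bytes = 32
```

Batching amortizes the per-transfer overhead of the host-to-GPU copy, which is the same motivation behind letting NaNet write the stream into GPU memory directly and so bypass the host-controlled transfer whose fluctuations are discussed above.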

NIOS II UDP
A physics case
Conclusions