Abstract

High energy physics (HEP) experiments at the LHC generate data at a rate of $\mathcal{O}(10)$ terabits per second. This rate is expected to increase exponentially as the experiments are upgraded to achieve higher collision energies. The increasing size of particle physics datasets, combined with plateauing single-core CPU performance, is projected to create a four-fold shortage in computing power by 2030, making it necessary to investigate alternative computing architectures for the next generation of HEP experiments. This study provides an overview of the computing techniques used in the LHCb experiment (trigger, track reconstruction, vertex reconstruction, and particle identification). Furthermore, this research led to the creation of three event reconstruction algorithms for the LHCb experiment. These algorithms are benchmarked on several computing architectures: the CPU, the GPU, and a new type of processor called the IPU, containing roughly $\mathcal{O}(10)$, $\mathcal{O}(1000)$, and $\mathcal{O}(1000)$ cores, respectively. The results indicate that many-core architectures such as GPUs and IPUs are better suited to computationally intensive tasks within HEP experiments.
