Abstract

The demand for computational resources is steadily increasing in experimental high energy physics as the current collider experiments continue to accumulate huge amounts of data and physicists pursue more complex and ambitious analysis strategies. This is especially true in the fields of hadron spectroscopy and flavour physics, where analyses often depend on complex multidimensional unbinned maximum-likelihood fits with several dozen free parameters, aimed at studying the internal structure of hadrons. Graphics processing units (GPUs) represent one of the most sophisticated and versatile parallel computing architectures, and they are becoming a popular toolkit for high energy physicists to meet these computational demands. GooFit is an open-source tool interfacing ROOT/RooFit to the CUDA platform on NVIDIA GPUs; it acts as a bridge between the MINUIT minimization algorithm and a parallel processor, allowing probability density functions to be evaluated on many cores simultaneously. In this article, a full-fledged amplitude analysis framework developed using GooFit is tested for its speed and reliability. The four-dimensional fitter framework, one of the first of its kind to be built on GooFit, is geared towards the search for exotic tetraquark states in B⁰ → J/ψ K π decays and can be seamlessly adapted for other similar analyses. The GooFit fitter, running on GPUs, shows a remarkable improvement in computing speed over a ROOT/RooFit implementation of the same analysis running on multi-core CPU clusters. Furthermore, it shows sensitivity to components with small contributions to the overall fit. It has the potential to be a powerful tool for sensitive and computationally intensive physics analyses.
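The per-event independence that GooFit exploits can be illustrated with a toy sketch. The code below is not GooFit's API: it is a minimal NumPy illustration of why an unbinned negative log-likelihood parallelizes well, with a one-dimensional Gaussian standing in for the analysis's four-dimensional amplitude model. Each event's PDF value is computed independently (on a GPU, one thread per event) before a reduction sums the log terms; a minimizer such as MINUIT then repeats this evaluation at each candidate parameter point.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Per-event Gaussian probability density, evaluated for all events at once."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def nll(params, events):
    """Unbinned negative log-likelihood.

    The per-event log-PDF terms are mutually independent, so on a GPU each
    term can be computed by a separate thread before a parallel reduction
    sums them; here NumPy vectorization plays the same role.
    """
    mu, sigma = params
    return -np.sum(np.log(gaussian_pdf(events, mu, sigma)))

# Toy "mass" sample: values chosen only for illustration.
rng = np.random.default_rng(seed=1)
events = rng.normal(loc=3.1, scale=0.05, size=100_000)

# A minimizer (MINUIT, in GooFit's case) would iterate over candidate
# parameter points, re-evaluating the NLL at each step; here we simply
# check that the generating parameters yield a smaller NLL than a wrong guess.
print(nll((3.1, 0.05), events) < nll((3.0, 0.05), events))
```

Because every MINUIT step triggers a full re-evaluation of the likelihood over all events, the per-event parallelism shown here is multiplied by hundreds or thousands of minimizer iterations, which is where the GPU speed-up reported in this article comes from.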

Highlights

  • Big data usually includes data sets with sizes beyond the ability of commonly used software tools to capture, curate, manage, and process data within a tolerable elapsed time, calling for parallel computing tools to analyze them [1, 2]

  • We explore the scope of an advanced graphics processing unit (GPU)-accelerated computing framework to reduce the processing times of such complex multidimensional fits, which occur frequently in the field of high-energy physics (HEP)

  • The fitter implemented in ROOT/RooFit is run on an Intel Xeon cluster with 24 CPUs, whereas the GooFit version is run on an NVIDIA Tesla K40 GPU with 2880 CUDA cores

Introduction

Big data usually includes data sets with sizes beyond the ability of commonly used software tools to capture, curate, manage, and process within a tolerable elapsed time, calling for parallel computing tools to analyze them [1, 2]. Modern particle accelerators such as the Large Hadron Collider (LHC) [3] at the European Organization for Nuclear Research (CERN) collide beams of protons at dedicated interaction points. Massive and intricate detectors, such as ATLAS [5], CMS [6], and LHCb [7], are built around these collision points to detect the huge number of particles created in the roughly 600 million collisions taking place per second. Even after preserving only a fraction of that data stream for physics analysis, hundreds of petabytes of complex data are stored and processed [8, 9].

