Abstract

Future FAIR experiments have to deal with very high input rates and large track multiplicities, and have to perform full event reconstruction and selection on-line on a large dedicated computer farm equipped with heterogeneous many-core CPU/GPU compute nodes. Developing efficient and fast algorithms optimized for parallel computations is a challenge for the groups of experts dealing with HPC computing. Here we present and discuss the status and perspectives of the data reconstruction and physics analysis software of one of the future FAIR experiments, namely the CBM experiment.

Highlights

  • The CBM (Compressed Baryonic Matter) experiment [1] is being prepared to operate at the future Facility for Antiproton and Ion Research (FAIR, Darmstadt, Germany)

  • Future FAIR experiments have to deal with very high input rates and large track multiplicities, and have to perform full event reconstruction and selection on-line on a large dedicated computer farm equipped with heterogeneous many-core central processing unit (CPU)/graphics processing unit (GPU) compute nodes

  • The First Level Event Selection (FLES) package consists of several modules: track finder, track fitter, particle finder and physics selection
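A minimal sketch of how these four stages could be chained per event is given below (hypothetical C++ interfaces for illustration only; the actual FLES data structures and module implementations are far richer and are not shown here):

#include <vector>

// Hypothetical, simplified containers; the real FLES structures are
// more elaborate and optimized for SIMD/many-core processing.
struct Hit {};
struct Track {};
struct Particle {};

struct Event {
    std::vector<Hit>      hits;       // detector measurements
    std::vector<Track>    tracks;     // reconstructed trajectories
    std::vector<Particle> particles;  // short-lived particle candidates
    bool                  selected = false;
};

// The four FLES stages named above, as empty placeholders.
void findTracks(Event&)    {}  // track finder
void fitTracks(Event&)     {}  // track fitter
void findParticles(Event&) {}  // particle finder
void selectPhysics(Event&) {}  // physics selection

// An event passes through the stages in order.
void processEvent(Event& e) {
    findTracks(e);
    fitTracks(e);
    findParticles(e);
    selectPhysics(e);
}

int main() {
    Event e;
    processEvent(e);
    return 0;
}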

Summary

Introduction

The CBM (Compressed Baryonic Matter) experiment [1] is being prepared to operate at the future Facility for Antiproton and Ion Research (FAIR, Darmstadt, Germany). Full on-line event reconstruction and selection at high input rates require algorithms that exploit the parallelism of modern processors. One of the efficient features supported by almost all modern processors is the SIMD (Single Instruction, Multiple Data, i.e. vector operations) instruction set. It allows several data values to be packed into a vector register and processed simultaneously, yielding several times more calculations per clock cycle. To illustrate the complexity of the HPC hardware, let us consider a single work-node of an HLT computer farm, a server equipped with CPUs only. Such a node has 2 to 4 sockets with 8 cores each, so its potential speed-up factor with respect to a scalar single-core CPU is F = 4 sockets × 8 cores × 1.3 threads (hyper-threading) × 8 (SIMD width) ≈ 300, which is already equivalent to a moderate computer farm of scalar single-core CPUs. In order to investigate the HPC hardware and to develop efficient algorithms we use different nodes and clusters in several high-energy physics centers around the world (see Tab. 1), ranging from dozens to thousands of cores.
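To make the SIMD principle concrete, the following minimal C++ sketch (an illustration only, not taken from the FLES code) packs four single-precision values per 128-bit register and performs four additions with a single vector instruction:

#include <immintrin.h>  // SSE intrinsics (x86)
#include <cstdio>

int main() {
    // Four pairs of single-precision values, 16-byte aligned for vector loads.
    alignas(16) float a[4] = {1.f, 2.f, 3.f, 4.f};
    alignas(16) float b[4] = {10.f, 20.f, 30.f, 40.f};
    alignas(16) float c[4];

    __m128 va = _mm_load_ps(a);      // pack a[0..3] into one 128-bit register
    __m128 vb = _mm_load_ps(b);      // pack b[0..3] into another register
    __m128 vc = _mm_add_ps(va, vb);  // one instruction performs four additions
    _mm_store_ps(c, vc);             // unpack the result back to memory

    for (int i = 0; i < 4; ++i)
        std::printf("%g ", c[i]);    // prints: 11 22 33 44
    std::printf("\n");
    return 0;
}

Wider vector units follow the same pattern; the factor 8 in the estimate above corresponds to eight data values processed per vector instruction.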

Parallel programming
Track finding at high track multiplicities
In-event parallelism of the CA track finder
Findings
Summary