Abstract

The physics reach of the HL-LHC will be limited by how efficiently the experiments can use the available computing resources, i.e., affordable software and computing are essential. The development of novel methods for charged particle reconstruction at the HL-LHC, incorporating machine learning techniques or based entirely on machine learning, is a vibrant area of research. In the past two years, algorithms for track pattern recognition based on graph neural networks (GNNs) have emerged as a particularly promising approach. Previous work mainly aimed at establishing proof of principle. In the present document we describe new algorithms that can handle complex, realistic detectors. The new algorithms are implemented in ACTS, a common framework for tracking software. This work aims to implement a realistic GNN-based algorithm that can be deployed in an HL-LHC experiment.

Highlights

  • The LHC collider and the associated experiments are undergoing a major upgrade that will increase the size of the datasets of each of the general-purpose experiments ATLAS and CMS by one order of magnitude [1] compared to the initial LHC plan

  • In the present document we report initial results from a new effort to implement a realistic graph neural network (GNN)-based algorithm that can be deployed in an HL-LHC experiment

  • We present a novel algorithm for graph construction that can handle the complex geometry of a realistic detector, including full coverage in pseudo-rapidity (η), methods for memory management that allow GNN training on the full detector without any sectioning, and initial studies of GNNs trained on the full detector
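
To make the graph-construction step concrete, here is a minimal, generic sketch of how hits can be turned into a graph for a GNN: nodes are hits, and directed edges connect hits on adjacent layers that fall inside a small (η, φ) window. This is an illustrative toy, not the novel algorithm described in the document; the hit format, the layer-adjacency rule, and the window sizes `d_eta`/`d_phi` are all assumptions made for the example.

```python
import math

def eta(x, y, z):
    """Pseudo-rapidity of a hit at (x, y, z): eta = -ln(tan(theta/2))."""
    r = math.hypot(x, y)
    theta = math.atan2(r, z)
    return -math.log(math.tan(theta / 2.0))

def build_graph(hits, d_eta=0.1, d_phi=0.2):
    """Connect hits on adjacent layers within an (eta, phi) window.

    hits: list of dicts with keys 'layer', 'x', 'y', 'z'.
    Returns a list of (i, j) directed edges from inner to outer layer.
    """
    edges = []
    for i, a in enumerate(hits):
        for j, b in enumerate(hits):
            if b['layer'] != a['layer'] + 1:
                continue
            ea = eta(a['x'], a['y'], a['z'])
            eb = eta(b['x'], b['y'], b['z'])
            # Wrap the azimuthal difference into (-pi, pi].
            dphi = (math.atan2(b['y'], b['x']) - math.atan2(a['y'], a['x'])
                    + math.pi) % (2 * math.pi) - math.pi
            if abs(eb - ea) < d_eta and abs(dphi) < d_phi:
                edges.append((i, j))
    return edges
```

In a realistic detector the challenge highlighted above is precisely that such naive windows do not transfer across barrel/endcap transitions and the full η range, which is what the document's dedicated graph-construction algorithm addresses.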


Summary

Introduction

The LHC collider and the associated experiments are undergoing a major upgrade that will increase the size of the datasets of each of the general-purpose experiments ATLAS and CMS by one order of magnitude [1] compared to the initial LHC plan. These large datasets will enable precise measurements in the Higgs sector. The original LHC plan is to accumulate 300 fb−1 of data per experiment. During the HL-LHC phase, each experiment will accumulate at least 3000 fb−1 of data. The average number of pile-up events per bunch crossing is expected to be 200.
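
The scale of the upgrade quoted above can be checked with a line of arithmetic. The 40 MHz bunch-crossing rate used below is a well-known LHC machine parameter, not a figure quoted in this document:

```python
# Integrated luminosity targets quoted in the text (fb^-1 per experiment).
lhc_target_fb = 300
hllhc_target_fb = 3000

increase = hllhc_target_fb / lhc_target_fb
print(increase)  # 10.0 -> the "one order of magnitude" quoted above

# With an average pile-up of 200 and the LHC's nominal 40 MHz
# bunch-crossing rate, the inelastic-interaction rate is roughly:
pileup = 200
crossing_rate_hz = 40e6
print(pileup * crossing_rate_hz)  # 8e9 interactions per second
```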
