Abstract

The Large Hadron Collider (LHC) at the European Organisation for Nuclear Research (CERN) will be upgraded to further increase the instantaneous rate of particle collisions (luminosity), becoming the High Luminosity LHC (HL-LHC). This increase in luminosity will significantly increase the number of particles interacting with the detector; each such interaction is referred to as a “hit”. The HL-LHC will yield many more detector hits, posing a combinatorial challenge to the reconstruction algorithms that determine particle trajectories from those hits. This work explores the possibility of converting a novel graph neural network model, which can optimally take into account the sparse nature of the tracking detector data and their complex geometry, into a hybrid quantum-classical graph neural network that benefits from using variational quantum layers. We show that this hybrid model can perform similarly to the classical approach. We also explore parametrized quantum circuits (PQCs) with different expressibility and entangling capacities, and compare their training performance in order to quantify the expected benefits. These results can be used to build a future road map for further developing circuit-based hybrid quantum-classical graph neural networks.

Highlights

  • Particle accelerator experiments aim to understand the nature of particles by colliding groups of particles at high energies and observing the creation of particles and their decays, e.g. to validate theories

  • We aim to give a complete overview of our developments, in which we investigated the use of a hybrid quantum-classical graph neural network (QGNN) approach to solve the particle track reconstruction problem (Tuysuz et al 2020a, 2020b, 2020c), trained on the publicly available TrackML Challenge dataset (Amrouche et al 2019, 2021)

  • The hybrid quantum-classical graph neural network (QGNN) model that we propose takes a graph as input and returns a probability for each edge of the initial graph
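To make the input/output contract of such a model concrete, below is a minimal classical sketch of edge classification on a hit graph: each node is a detector hit, candidate edges connect hits, and the model assigns every edge a probability of belonging to a true track. All names, dimensions, and the simple linear edge scorer are illustrative assumptions, not the authors' actual QGNN architecture (whose layers include variational quantum circuits).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    """Squash a real-valued score into a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

# Toy graph: 4 hits (nodes), each with 3 illustrative spatial coordinates,
# and directed candidate edges between hits on consecutive detector layers.
node_features = rng.normal(size=(4, 3))
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]  # (source hit, target hit)

# Classical stand-in for an edge network: concatenate the two endpoint
# feature vectors and apply a (would-be trained) linear map plus sigmoid.
W = rng.normal(size=(6,))  # weights, here random for illustration
b = 0.0                    # bias

def edge_probabilities(node_features, edges, W, b):
    """Return one probability per candidate edge of the input graph."""
    probs = []
    for src, dst in edges:
        pair = np.concatenate([node_features[src], node_features[dst]])
        probs.append(sigmoid(pair @ W + b))
    return np.array(probs)

probs = edge_probabilities(node_features, edges, W, b)
print(probs)  # one value in (0, 1) per edge
```

In the hybrid model described above, the learned map playing the role of `W` would be (partly) replaced by a parametrized quantum circuit, while the graph-in, edge-probabilities-out interface stays the same.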



Introduction

Particle accelerator experiments aim to understand the nature of particles by colliding groups of particles at high energies and observing the creation of particles and their decays, e.g. to validate theories. In order to achieve high sensitivity, these experiments use advanced software and hardware. They require very fast processing units, as the time between two consecutive collisions is very short (reaching rates of up to 1 MHz for ATLAS and CMS according to The ATLAS Collaboration (2015), Contardo et al (2015) and Albrecht et al (2019)). Total disk and tape space of 990 PetaBytes and around 550 thousand CPU cores were pledged to the LHC experiments in 2017, according to a report by the CERN Computing Resources Scrutiny Group (CRSG) (Lucchesi 2017).
