Abstract

We present studies of quantum algorithms exploiting machine learning to classify events of interest from background events, one of the most representative machine learning applications in high-energy physics. We focus on a variational quantum approach to learn the properties of input data and evaluate the performance of the event classification using both simulators and quantum computing devices. Comparison with standard multivariate classification techniques based on a boosted decision tree and a deep neural network running on classical computers shows that the quantum algorithm performs comparably to the standard techniques over the considered ranges of the number of input variables and the training sample size. The variational quantum algorithm is tested on quantum computers, demonstrating that the discrimination of interesting events from background is feasible. We discuss characteristic behaviors observed during the learning process using quantum circuits with extended gate structures, as well as the implications of the current performance for applications in high-energy physics experiments.
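To make the variational quantum approach concrete, the following is a minimal sketch of a variational quantum classifier written with PennyLane. The library choice, the ansatz (angle embedding followed by entangling layers), and all hyperparameters are illustrative assumptions for exposition, not the circuit or settings used in this study.

```python
# Minimal variational quantum classifier sketch. Illustrative only: the
# library (PennyLane), ansatz, and hyperparameters are assumptions, not
# the circuit used in this study.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 3  # one qubit per input variable (assumption)
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(weights, x):
    # Encode the classical input variables as rotation angles.
    qml.AngleEmbedding(x, wires=range(n_qubits))
    # Trainable entangling layers form the variational ansatz.
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # <Z> on the first qubit gives a classifier score in [-1, 1].
    return qml.expval(qml.PauliZ(0))

def cost(weights, X, y):
    # Mean squared error between the circuit output and +/-1 labels.
    loss = 0.0
    for x_i, y_i in zip(X, y):
        loss = loss + (circuit(weights, x_i) - y_i) ** 2
    return loss / len(X)

# Toy data standing in for signal (+1) and background (-1) events.
X = np.random.uniform(0, np.pi, (20, n_qubits))
y = np.where(X.sum(axis=1) > 1.5 * np.pi, 1.0, -1.0)

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
weights = np.random.uniform(0, 2 * np.pi, shape, requires_grad=True)

# A classical optimizer updates the circuit parameters, as in the
# variational quantum approach described above.
opt = qml.GradientDescentOptimizer(stepsize=0.1)
for step in range(50):
    weights = opt.step(lambda w: cost(w, X, y), weights)
```

The hybrid structure is the key design point: the quantum circuit evaluates the classifier score, while a classical optimizer iterates on the gate parameters.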

Highlights

  • The field of particle physics has recently been driven by large experiments that collect and analyze data produced in particle collisions at high-energy accelerators

  • It is apparent from the boosted decision tree (BDT) and deep neural network (DNN) curves that the performance of these two algorithms improves rapidly with the increasing number of training events N_event^train and then flattens out (see the baseline sketch after this list)

  • We present studies of quantum machine learning for event classification, a common application of conventional machine learning techniques
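As a reference point for the BDT and DNN behavior noted above, here is a minimal sketch of how such classical baselines could be trained with scikit-learn. The toy dataset, model settings, and evaluation metric are illustrative assumptions, not the configurations used in the study.

```python
# Illustrative classical baselines (not the exact models from the paper).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Toy signal-vs-background data standing in for event-level input variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
y = (X.sum(axis=1) + rng.normal(scale=0.5, size=2000) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# BDT baseline: gradient-boosted decision trees.
bdt = GradientBoostingClassifier(n_estimators=100).fit(X_train, y_train)

# DNN baseline: a small fully connected network.
dnn = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500).fit(X_train, y_train)

# Test AUC typically grows with the training sample size before flattening out.
for name, model in [("BDT", bdt), ("DNN", dnn)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name} test AUC: {auc:.3f}")
```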

Introduction

The field of particle physics has recently been driven by large experiments that collect and analyze data produced in particle collisions at high-energy accelerators. In high-energy physics (HEP) experiments, particles created in collisions are observed by layers of high-precision detectors surrounding the collision points, producing a large amount of data. The data volume will continue to grow over the coming decades; for example, a next-generation proton–proton collider, the High-Luminosity Large Hadron Collider (HL-LHC) [2, 3] at CERN, is expected to deliver a few exabytes of data every year. The computational resources required for specific tasks are expected to be reduced by adopting relatively new techniques such as machine learning (ML).
