Abstract

We introduce a novel implementation of a reinforcement learning (RL) algorithm designed to find an optimal jet grooming strategy, a critical tool for collider experiments. The RL agent is trained with a reward function constructed to optimize the resulting jet properties, using both signal and background samples in a simultaneous multi-level training. We show that the grooming algorithm derived from the deep RL agent can match state-of-the-art techniques used at the Large Hadron Collider, resulting in improved mass resolution for boosted objects. Given a suitable reward function, the agent learns a policy that optimally removes soft wide-angle radiation, allowing for a modular grooming technique that can be applied in a wide range of contexts. These results are accessible through the corresponding GroomRL framework.
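The abstract describes the reward only at a high level, so a rough sketch may help fix ideas. Below is a minimal illustration, assuming a Cauchy-like window around a target mass for the signal term and an ad hoc penalty on kept soft wide-angle branches; the target mass, window width, coupling `alpha`, and function names are all hypothetical, not the exact form used in the paper.

```python
import math

def mass_reward(groomed_mass, target_mass=80.4, width=5.0):
    """Illustrative signal term: largest when the groomed jet mass sits
    near a target resonance mass (here a hypothetical W-like target),
    falling off smoothly outside the window. The Cauchy-like shape and
    the numbers are assumptions, not the paper's reward."""
    return 1.0 / (1.0 + ((groomed_mass - target_mass) / width) ** 2)

def soft_wide_angle_penalty(z, delta_r, alpha=0.5):
    """Illustrative background term: penalize keeping branches that are
    soft (small momentum fraction z) and wide-angle (large delta_r)."""
    return -alpha * delta_r * math.exp(-z / 0.1)

def episode_reward(groomed_mass, kept_branches):
    """Combine the two terms over the branches the agent chose to keep."""
    reward = mass_reward(groomed_mass)
    for z, delta_r in kept_branches:
        reward += soft_wide_angle_penalty(z, delta_r)
    return reward

# Example: a jet groomed close to the target mass with one hard branch
# kept scores higher than one far off target with a soft wide branch.
print(episode_reward(79.0, [(0.4, 0.2)]))    # ~0.93, near target
print(episode_reward(110.0, [(0.02, 1.0)]))  # ~-0.38, off target
```

Any reward of this shape rises when grooming sharpens the signal mass peak and falls when soft wide-angle radiation survives, which is the trade-off the multi-level training on signal and background samples is meant to balance.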

Highlights

  • Jets are one of the most common objects appearing in proton-proton colliders such as the Large Hadron Collider (LHC) at CERN

  • For the applications in this article, we have implemented a Deep Q-Network (DQN) agent containing a groomer module, defined by the underlying neural network (NN) model and the test policy used by the agent (a minimal sketch follows this list)

  • We have shown a promising application of reinforcement learning (RL) to the issue of jet grooming
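As a companion to the second highlight, here is a minimal sketch of what a groomer module built from a Q-network and a greedy test policy could look like; the `Node` tree layout, the three-component state vector, and the `q_network` callable are illustrative assumptions, not the GroomRL API.

```python
from dataclasses import dataclass
from typing import Callable, Optional

import numpy as np

@dataclass
class Node:
    """Hypothetical clustering-tree node: z is the momentum fraction of
    the softer branch, delta_r the opening angle, mass the pair mass."""
    z: float
    delta_r: float
    mass: float
    hard: Optional["Node"] = None   # harder branch
    soft: Optional["Node"] = None   # softer branch

class GreedyGroomer:
    """Groomer-module sketch: a trained Q-network plus a greedy test
    policy. At each tree node the agent observes a small state vector
    and picks action 0 (keep both branches) or 1 (drop the softer one)."""

    def __init__(self, q_network: Callable[[np.ndarray], np.ndarray]):
        self.q_network = q_network  # maps state -> Q-values, shape (2,)

    def policy(self, state: np.ndarray) -> int:
        # Greedy test policy: no exploration at evaluation time.
        return int(np.argmax(self.q_network(state)))

    def groom(self, node: Optional[Node]) -> None:
        # Walk the tree top-down, pruning wherever the policy says drop.
        if node is None or node.hard is None:
            return  # leaf (single particle): nothing left to groom
        state = np.array([node.z, node.delta_r, node.mass])
        if self.policy(state) == 1:
            node.soft = None        # remove soft wide-angle radiation
        else:
            self.groom(node.soft)
        self.groom(node.hard)

# Usage with a dummy Q-network that drops branches softer than z = 0.1:
dummy_q = lambda s: np.array([s[0] - 0.1, 0.0])
leaf = lambda: Node(z=0.0, delta_r=0.0, mass=0.0)
root = Node(z=0.05, delta_r=0.8, mass=30.0, hard=leaf(), soft=leaf())
GreedyGroomer(dummy_q).groom(root)
print(root.soft)  # None: the soft wide-angle branch was dropped
```

The split between a swappable value network and a fixed greedy evaluation policy is what makes the module "modular" in the sense of the abstract: the same traversal works for any trained model.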

Summary

INTRODUCTION

Jets are one of the most common objects appearing in proton-proton colliders such as the Large Hadron Collider (LHC) at CERN. They are defined as collimated bunches of high-energy particles, which emerge from the interactions of quarks and gluons, the fundamental constituents of the proton [1,2]. Due to the very high energies of its collisions, the LHC routinely produces heavy particles, such as top quarks and vector bosons, with transverse momenta far greater than their rest mass. Grooming algorithms remove soft wide-angle radiation from such jets, improving the mass resolution of these boosted objects. The trained model can be applied to other datasets, showing improved resolution compared to state-of-the-art techniques as well as strong resilience to nonperturbative effects.
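To connect the boosted-object picture above to the agent's input, the sketch below shows how standard substructure observables could be computed from a single clustering step; the momentum fraction z and opening angle ΔR are textbook jet-substructure quantities, while using exactly this pair as the RL state is an assumption of the illustration.

```python
import math

def branching_state(pt_a, y_a, phi_a, pt_b, y_b, phi_b):
    """Observables for a parent splitting into subjets a and b:
    z       -- momentum fraction carried by the softer subjet,
    delta_r -- opening angle between the subjets in the
               rapidity-azimuth plane."""
    z = min(pt_a, pt_b) / (pt_a + pt_b)
    dphi = abs(phi_a - phi_b)
    if dphi > math.pi:               # wrap azimuthal difference
        dphi = 2.0 * math.pi - dphi
    delta_r = math.hypot(y_a - y_b, dphi)
    return z, delta_r

# Example: a soft 20 GeV subjet recoiling against a 180 GeV one.
print(branching_state(180.0, 0.1, 0.0, 20.0, 0.5, 0.6))  # (0.1, ~0.72)
```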

JET REPRESENTATION
Grooming algorithm
SETTING UP A GROOMING ENVIRONMENT
Finding optimal hyperparameters
Defining a reward function
RL implementation and multi-level training
Determining the RL agent
Optimal GroomRL model
Alternative approaches
JET MASS SPECTRUM
Robustness to nonperturbative effects
Findings
CONCLUSIONS