Abstract

In this work we assess the transferability of deep learning models trained to detect beyond-the-standard-model signals. For this we trained deep neural networks on three different signal models: $tZ$ production via a flavour-changing neutral current, and pair production of vector-like $T$-quarks via standard model gluon fusion and via heavy gluon decay, in a grid of three mass points: 1, 1.2 and 1.4 TeV. These networks were trained with $t\bar{t}$, $Z$+jets and diboson events as the main backgrounds. Limits were derived for each signal benchmark using the inference of networks trained on each signal independently, so that the degradation of their discriminative power across different signal processes can be quantified. We find that the limits are compatible within uncertainties for all networks trained on signals with vector-like $T$-quarks, whether these are produced via heavy gluon decay or standard model gluon fusion. The network trained on the flavour-changing neutral current signal, while struggling the most on the other signals, still produces reasonable limits. These results indicate that deep learning models are capable of providing sensitivity in the search for new physics even when it manifests itself in models not assumed during training.

Highlights

  • Machine learning has a long history in high energy physics (HEP), and we have recently witnessed a surge of interest in new methods and algorithms emerging from deep learning [1]

  • We determine that the limits are compatible within uncertainties for all networks trained on signals with vectorlike T-quarks, whether they are produced via heavy gluon decay or standard model gluon fusion

  • In this work we set out to explore the transferability of deep neural networks (DNNs) trained to discriminate between signal and background using reconstructed physical observables


Summary

Introduction

Machine learning has a long history in high energy physics (HEP), and we have recently witnessed a surge of interest in new methods and algorithms emerging from deep learning [1]. It has been shown that deep learning models for computer vision trained on a certain task can be adapted to a different, albeit similar, task [3], as the layers closer to the inputs learn low-level features that progressively become higher level as they are transformed by the subsequent layers. In computer vision this manifests itself in the first layers learning about localized pixel variations, the following layers learning about textures and patterns, and the last layers encoding high-level features such as "dog" or "cat". The limits computed on the sample with the FCNC signal show a clear degradation when any network trained on VLT signals is used. This is understood since the FCNC signal does not produce new heavy states, and as such its kinematics are manifestly different from those produced by the VLT signals, being instead very similar to other SM processes.

