Abstract

The use of QCD calculations that include the resummation of soft-collinear logarithms via parton-shower algorithms is currently not possible in PDF fits due to the high computational cost of evaluating observables for each variation of the PDFs. Unfortunately, the interpolation methods that are otherwise applied to overcome this issue are not readily generalised to all-order parton-shower contributions. Instead, we propose an approximation based on training a neural network to predict the effect of varying the input parameters of a parton shower on the cross section in a given observable bin, interpolating between the variations of a training data set. This first publication focuses on providing a proof of principle for the method by varying the shower dependence on αS, for both a simplified shower model and a complete shower implementation, for three different observables: the leading emission scale, the number of emissions, and the Thrust event shape. The extension to the PDF dependence of the initial-state shower evolution that is needed for the application to PDF fits is left to a forthcoming publication.
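As a rough illustration of the idea (not the paper's actual implementation), the per-bin cross section can be regressed against the shower input parameter with a small neural network trained on a handful of generated variations. The αS range, toy histogram data, and network size below are purely illustrative assumptions.

```python
# Illustrative sketch: interpolate the per-bin cross section as a function of
# alpha_s with a small neural network. The training "data" below stands in for
# histograms obtained by running the parton shower at a few alpha_s values.
import numpy as np
from sklearn.neural_network import MLPRegressor

alphas_train = np.linspace(0.10, 0.14, 9)   # shower alpha_s variations (toy range)
n_bins = 20
bins = np.arange(n_bins)
# hypothetical per-bin cross sections sigma_i(alpha_s), shape (n_variations, n_bins),
# faked here with a smooth toy dependence
sigma_train = np.exp(-bins[None, :] * alphas_train[:, None] * 5.0)

# one regressor predicting all bins at once from alpha_s
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(alphas_train.reshape(-1, 1), sigma_train)

# fast a-posteriori evaluation at an alpha_s value not in the training set
sigma_interp = net.predict([[0.118]])
print(sigma_interp.shape)  # (1, 20): interpolated cross section per bin
```

Once trained, evaluating such a network is far cheaper than rerunning the shower, which is what makes a-posteriori parameter variations inside a fit loop feasible.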

Highlights

  • We present here an approximate approach to parametrise the parton-shower dependences in a way that allows for a fast, a-posteriori reweighting of the observable

  • We propose an approximation based on training a neural network to predict the effect of varying the input parameters of a parton shower on the cross section in a given observable bin, interpolating between the variations of a training data set

  • A deep neural network (NN) has been proposed to mimic a parton-shower algorithm [33]. This ansatz cannot be applied to our goal of using all-order results in PDF fits, as it operates on an event-by-event basis just like an ordinary parton-shower algorithm, whereas PDF fits require projections of the cross section onto observables in order to achieve the fast evaluation times needed for the fit


Summary

Introduction

We present here an approximate approach to parametrise the parton-shower dependences in a way that allows for a fast, a-posteriori reweighting of the observable. The variation of the result (in a given observable bin) across a training set of input-parameter variations is used to train a neural network (NN), effectively fitting the unknown functional form that encodes the dependences of the parton shower on the input parameters. This NN can then be used to efficiently obtain an interpolation of the observable for arbitrary values of the input parameters, making the methodology suitable for studies that require fast a-posteriori variations. The Sudakov form factor gives the probability that no (resolvable) emission occurs between two emission scales $t_\mathrm{low} < t_\mathrm{high}$:

$$\Delta(t_\mathrm{low}, t_\mathrm{high}) = \exp\left(-\int_{t_\mathrm{low}}^{t_\mathrm{high}} \frac{\mathrm{d}t}{t} \int \mathrm{d}z\, \frac{\alpha_\mathrm{S}}{2\pi}\, P(z)\right)$$
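For concreteness, the sketch below evaluates such a no-emission probability numerically for a toy emission kernel with fixed αS and a z-cut regulator. The kernel, constants, and scale choices are assumptions for illustration only, not the simplified shower model used in the paper.

```python
# Toy numerical illustration of the no-emission probability
# Delta(t_low, t_high) = exp(- int dt/t int dz alpha_s/(2 pi) P(z)),
# using a fixed alpha_s and a simplified soft-enhanced kernel P(z) = 2 C_A / (z (1-z)),
# regulated by a z-cut. All values are illustrative.
import numpy as np
from scipy import integrate

C_A = 3.0
alpha_s = 0.118
z_cut = 1e-3  # infrared regulator on the momentum fraction

def kernel(z):
    return 2.0 * C_A / (z * (1.0 - z))

def emission_density(t):
    # dGamma/dt: z-integral of the kernel, times alpha_s / (2 pi t)
    z_int, _ = integrate.quad(kernel, z_cut, 1.0 - z_cut)
    return alpha_s / (2.0 * np.pi * t) * z_int

def sudakov(t_low, t_high):
    # exponent = int_{t_low}^{t_high} dt dGamma/dt
    exponent, _ = integrate.quad(emission_density, t_low, t_high)
    return np.exp(-exponent)

print(sudakov(t_low=1.0, t_high=100.0))  # probability of no resolvable emission
```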

