Abstract
New hard-scattering measurements from the LHC proton-lead run have the potential to provide important constraints on the nuclear parton distributions and thus to contribute to a better understanding of the initial state in heavy-ion collisions. In order to quantify these constraints, as well as to assess the compatibility with available nuclear data from fixed-target experiments and from RHIC, the traditional strategy is to perform a global fit of nuclear PDFs. This procedure is, however, time-consuming and technically challenging, and can moreover only be carried out by the PDF fitters themselves. In the case of proton PDFs, an alternative approach has been suggested that uses Bayesian inference to propagate the effects of new data into the PDFs without the need for refitting. In this work, we apply this reweighting procedure to study the impact on nuclear PDFs of low-mass Drell-Yan and single-inclusive hadroproduction pseudo-data from proton-lead collisions at the LHC as representative examples. In the hadroproduction case, we additionally assess the possibility of discriminating between the DGLAP and CGC production frameworks. We find that LHC proton-lead data could lead to a substantial reduction of the uncertainties on nuclear PDFs, in particular for the small-x gluon PDF, where uncertainties could decrease by up to a factor of two. The Monte Carlo replicas of EPS09 used in the analysis are released as a public code for general use. In particular, it can be used directly by the experimental collaborations to check, in a straightforward manner, the degree of compatibility of new data with the global nPDF analyses.
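As a rough illustration of the reweighting workflow summarised above, the sketch below shows how new data can be propagated into a Monte Carlo PDF ensemble without refitting. The file names are placeholders, and the simple exponential (Giele-Keller) weight is assumed purely for illustration; the exact weight formula used in the paper (its eq. (2.3)) is not reproduced here.

```python
import numpy as np

# Hypothetical inputs (placeholder file names): theory predictions for each
# PDF replica and the new (pseudo-)data with its covariance matrix.
preds = np.load("replica_predictions.npy")   # shape (N_rep, N_dat)
data  = np.load("pseudo_data.npy")           # shape (N_dat,)
cov   = np.load("covariance.npy")            # shape (N_dat, N_dat)

cov_inv = np.linalg.inv(cov)

# chi^2 of every replica with respect to the new data
resid = preds - data                                   # (N_rep, N_dat)
chi2  = np.einsum("ki,ij,kj->k", resid, cov_inv, resid)

# Bayesian weights; the exponential form w_k ~ exp(-chi2_k/2) is assumed here
# for illustration, the actual analysis may use a different expression.
logw = -0.5 * chi2
logw -= logw.max()                  # numerical stabilisation before exponentiating
w = np.exp(logw)
w *= len(w) / w.sum()               # normalise so that sum_k w_k = N_rep

# Reweighted mean and 1-sigma spread of any replica-level quantity O_k,
# e.g. a cross section or a nuclear modification factor at given (x, Q^2).
O = preds[:, 0]                     # placeholder observable
mean_new = np.mean(w * O)
std_new  = np.sqrt(np.mean(w * (O - mean_new) ** 2))
print(mean_new, std_new)
```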
Highlights
The Bayesian reweighting technique is used to include new data in the nuclear PDF determination and to determine their impact on the central values and uncertainties of the PDFs
In the case of hadroproduction, pseudo-data have in addition been simulated in the Color Glass Condensate (CGC) framework: this allows us to quantify to what extent non-linear effects in charged hadron production can be absorbed in a global nPDF fit based on the DGLAP framework
While our analysis is based on the EPS09 nuclear PDF set, the qualitative results should be valid for all other nPDF sets
While the derivation above that leads to the weights for each replica, eq. (2.3), applies to PDF sets based on the Monte Carlo method, the goal of this paper is to study the impact on nuclear PDFs, for which all available sets with uncertainty bands are based on the Hessian method; the Hessian set therefore first has to be converted into an ensemble of Monte Carlo replicas (see the sketch below)
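The following is a minimal sketch of one common recipe for such a conversion, namely symmetric Gaussian sampling of the Hessian eigenvector directions; the exact prescription used to build the released EPS09 replicas is not reproduced here, and the function name and array shapes are illustrative assumptions.

```python
import numpy as np

def hessian_to_replicas(f0, f_plus, f_minus, n_rep=1000, seed=0):
    """Build Monte Carlo replicas from a Hessian PDF set (illustrative sketch).

    f0               : central member, array of shape (n_grid,)
    f_plus, f_minus  : plus/minus error members, arrays of shape (n_eig, n_grid)

    Any tolerance criterion of the original set (e.g. the Delta chi^2 used by
    EPS09) is assumed to be encoded in the error members already.
    """
    rng = np.random.default_rng(seed)
    delta = 0.5 * (f_plus - f_minus)           # symmetrised eigenvector deviations
    n_eig = delta.shape[0]
    r = rng.standard_normal((n_rep, n_eig))    # one Gaussian number per direction
    return f0 + r @ delta                      # replicas, shape (n_rep, n_grid)
```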
Summary
PDF uncertainties can be determined using basically two methods: the Hessian approach (with and without tolerance), upon which all nuclear PDF sets are based, and the Monte Carlo approach. While a new fit would give the best representation of the underlying probability density for a given Nrep, this is not the case for PDF reweighting: replicas with very small weights become almost irrelevant when computing averages, and the accuracy of the representation of the underlying distribution Pnew(f) decreases. To quantify this efficiency loss, the Shannon entropy can be used to compute Neff, the effective number of replicas after reweighting: Neff ≡ exp{ (1/Nrep) Σ_k w_k log(Nrep/w_k) }. We have checked the consistency of the procedure for all nuclear PDFs by comparing central values and 1-sigma uncertainties in the Hessian and MC versions of EPS09, finding reasonable agreement within the expected statistical accuracy. Let us also mention that ref. [26] proposes an alternative approach for including new data into a nuclear PDF fit, similar to the reweighting method just discussed.
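For reference, the Shannon-entropy estimate of the effective number of replicas quoted above can be evaluated as in the following sketch, with the weights normalised so that they sum to Nrep; the function name and the example weights are illustrative.

```python
import numpy as np

def effective_replicas(w):
    """N_eff = exp{ (1/N_rep) sum_k w_k log(N_rep / w_k) }
    for weights w_k normalised such that sum_k w_k = N_rep."""
    w = np.asarray(w, dtype=float)
    n_rep = len(w)
    w = w * n_rep / w.sum()
    nz = w > 0                        # the limit w_k -> 0 contributes nothing
    entropy = np.sum(w[nz] * np.log(n_rep / w[nz])) / n_rep
    return np.exp(entropy)

# Example: strongly peaked weights yield N_eff well below N_rep.
weights = np.array([10.0] * 10 + [0.01] * 990)
print(effective_replicas(weights))
```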