Abstract

A novel technique using machine learning (ML) to reduce the computational cost of evaluating lattice quantum chromodynamics (QCD) observables is presented. The ML algorithm is trained on a subset of background gauge field configurations, called the labeled set, to predict an observable $O$ from the values of correlated, but less compute-intensive, observables $\mathbf{X}$ calculated on the full sample. Using a second subset, also part of the labeled set, we estimate the bias in the result predicted by the trained ML algorithm. A reduction in the computational cost of about $7\%$–$38\%$ is demonstrated for two different lattice QCD calculations using a boosted decision tree regression algorithm: (1) prediction of the nucleon three-point correlation functions that yield isovector charges from the two-point correlation functions, and (2) prediction of the phase acquired by the neutron mass when a small charge-parity (CP) violating interaction, the quark chromoelectric dipole moment interaction, is added to QCD, again from the two-point correlation functions calculated without CP violation.
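As a rough illustration of the workflow described in this abstract, the following Python sketch trains a regressor on the labeled configurations, estimates the bias on a held-out labeled subset, and applies the bias-corrected prediction to the unlabeled configurations. It uses scikit-learn's GradientBoostingRegressor as a stand-in for the boosted decision tree regressor; the array names (X_labeled, O_labeled, X_unlabeled) and the simple in-order split are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of "predict O from cheap observables X, then correct the bias"
# as described in the abstract.  Names and the split are assumptions for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def predict_with_bias_correction(X_labeled, O_labeled, X_unlabeled, n_train):
    """Train on part of the labeled set, estimate the bias on the rest,
    and return a bias-corrected estimate of <O> over the unlabeled set."""
    # Split the labeled configurations into a training set and a bias-correction set.
    X_train, O_train = X_labeled[:n_train], O_labeled[:n_train]
    X_bc,    O_bc    = X_labeled[n_train:], O_labeled[n_train:]

    # Stand-in for the boosted decision tree regressor used in the paper.
    model = GradientBoostingRegressor()
    model.fit(X_train, O_train)

    # Bias of the trained regressor, estimated on the held-out labeled subset.
    bias = np.mean(O_bc - model.predict(X_bc))

    # Prediction on the (cheaper) unlabeled configurations, bias-corrected.
    return np.mean(model.predict(X_unlabeled)) + bias
```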

Highlights

  • Simulations of lattice QCD provide values of physical observables from correlation functions calculated as averages over gauge field configurations, which are generated using a Markov chain Monte Carlo method with the action as the Boltzmann weight [1,2]

  • We have proposed a Schwinger source method (SSM) [35,36] that exploits the fact that the chromoelectric dipole moment (cEDM) operator is a quark bilinear

  • The proposed machine learning (ML) algorithm, used to predict compute-intensive observables from simpler measurements, gives a modest computational cost reduction of 7%–38%, depending on the observables analyzed here, as summarized in Tables IV (VP2) and V (P2)


Summary

INTRODUCTION

Simulations of lattice QCD provide values of physical observables from correlation functions calculated as averages over gauge field configurations, which are generated using a Markov chain Monte Carlo method with the action as the Boltzmann weight [1,2]. We introduce a general ML method that reduces the computational cost of estimating observables calculated using expensive Markov chain Monte Carlo simulations of lattice QCD. Consider $M$ samples of independent measurements of a set of observables $\mathbf{X}_i = \{o_i^1, o_i^2, o_i^3, \ldots\}$, $i = 1, \ldots, M$, for which the target observable $O_i$ is available on only $N$ of the samples. These $N$ samples are called the labeled data, and the remaining $M - N$ samples the unlabeled data. We account for the full error, including the sampling variance of the training and bias-correction datasets, by using a bootstrap procedure [10] that independently selects $N$ labeled and $M - N$ unlabeled items for each bootstrap sample.
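The bootstrap estimate of the full error can be sketched along the same lines: the loop below resamples the $N$ labeled and $M - N$ unlabeled configurations independently in each bootstrap sample and recomputes the bias-corrected estimate. The `estimator` argument is assumed to be a function such as the `predict_with_bias_correction` sketch above; names and defaults are illustrative, not taken from the paper.

```python
# Minimal sketch, under the stated assumptions, of the bootstrap error estimate:
# labeled and unlabeled sets are resampled independently in each bootstrap sample.
import numpy as np

def bootstrap_error(X_labeled, O_labeled, X_unlabeled, n_train,
                    estimator, n_boot=200, seed=None):
    """Spread of the bias-corrected estimate over bootstrap resamples."""
    rng = np.random.default_rng(seed)
    n_lab, n_unlab = len(O_labeled), len(X_unlabeled)
    estimates = []
    for _ in range(n_boot):
        lab   = rng.integers(0, n_lab, n_lab)      # resample labeled configurations
        unlab = rng.integers(0, n_unlab, n_unlab)  # resample unlabeled configurations
        est = estimator(X_labeled[lab], O_labeled[lab],
                        X_unlabeled[unlab], n_train)
        estimates.append(est)
    return np.mean(estimates), np.std(estimates)
```

The spread of the bootstrap estimates then reflects both the sampling variance of the training set and that of the bias-correction set, as stated above.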

EXPERIMENT A
EXPERIMENT B
Findings
CONCLUSION
