Abstract

Machine learning has the potential to aid our understanding of phase structures in lattice quantum field theories through the statistical analysis of Monte Carlo samples. Available algorithms, in particular those based on deep learning, often demonstrate remarkable performance in the search for previously unidentified features, but tend to lack transparency if applied naively. To address these shortcomings, we propose representation learning in combination with interpretability methods as a framework for the identification of observables. More specifically, we investigate action parameter regression as a pretext task while using layer-wise relevance propagation (LRP) to identify the most important observables depending on the location in the phase diagram. The approach is put to work in the context of a scalar Yukawa model in (2+1)d. First, we investigate a multilayer perceptron to determine an importance hierarchy of several predefined, standard observables. The method is then applied directly to the raw field configurations using a convolutional network, demonstrating the ability to reconstruct all order parameters from the learned filter weights. Based on our results, we argue that, owing to their broad applicability, attribution methods such as LRP could prove useful and versatile tools in the search for new physical insights. In the case of the Yukawa model, LRP facilitates the construction of an observable that characterises the symmetric phase.
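
As a concrete illustration of the two regression setups described above, the following is a minimal sketch of what the pretext-task models could look like, assuming a PyTorch implementation. The class names, layer widths, kernel sizes, and lattice shape are illustrative assumptions and not the architectures used in the study; the only fixed ingredients are a multilayer perceptron acting on a vector of predefined observables and a convolutional network acting on raw (2+1)d field configurations, each regressing a single scalar (the hopping parameter κ).

# Minimal sketch (illustrative, not the authors' code): two pretext-task
# regressors that map their inputs to a single scalar prediction for the
# hopping parameter kappa. Layer sizes and lattice shape are assumptions.
import torch
import torch.nn as nn

class ObservableMLP(nn.Module):
    """Approach A: regress kappa from a vector of predefined observables."""
    def __init__(self, n_observables: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_observables, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),                 # scalar output: predicted kappa
        )

    def forward(self, x):
        return self.net(x)

class FieldCNN(nn.Module):
    """Approach B: regress kappa directly from raw field configurations."""
    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # average over the (2+1)d lattice volume
        )
        self.head = nn.Linear(8, 1)

    def forward(self, phi):
        return self.head(self.features(phi).flatten(1))

# Toy forward pass on dummy data (batch of 16, 8^3 lattice chosen arbitrarily).
obs = torch.randn(16, 8)
phi = torch.randn(16, 1, 8, 8, 8)
print(ObservableMLP()(obs).shape, FieldCNN()(phi).shape)  # both (16, 1)

Both models would then be trained on pairs of inputs and hopping parameters κ with the MSE objective discussed in the highlights below.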

Highlights

  • Lattice simulations of quantum field theories have proven essential for the theoretical understanding of fundamental interactions from first principles, perhaps most prominently so in quantum chromodynamics

  • We train a multilayer perceptron (MLP) and a convolutional neural network (CNN) to infer the associated hopping parameter κ from a set of known observables (Approach A), as well as solely from the raw field configurations (Approach B), akin to [26]

  • Assuming a Gaussian distribution with fixed variance, this objective reduces to minimising the mean squared error (MSE), which we use as loss function in the following
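
The reduction stated in the last highlight can be made explicit with a short calculation. Assuming the network output \(\hat{\kappa}_\theta(x_i)\) is interpreted as the mean of a Gaussian likelihood with fixed variance \(\sigma^2\) for each target \(\kappa_i\) (notation introduced here for illustration), the negative log-likelihood over N samples reads

\begin{align*}
  -\log p(\kappa \mid x, \theta)
    &= \sum_{i=1}^{N} \left[ \frac{\bigl(\kappa_i - \hat{\kappa}_\theta(x_i)\bigr)^2}{2\sigma^2}
       + \tfrac{1}{2}\log\bigl(2\pi\sigma^2\bigr) \right] \\
    &= \frac{N}{2\sigma^2}\,\frac{1}{N}\sum_{i=1}^{N} \bigl(\kappa_i - \hat{\kappa}_\theta(x_i)\bigr)^2
       + \mathrm{const},
\end{align*}

so that, up to an additive constant and an overall scale that does not affect the minimiser, minimising the negative log-likelihood is equivalent to minimising the mean squared error \(\frac{1}{N}\sum_i \bigl(\kappa_i - \hat{\kappa}_\theta(x_i)\bigr)^2\).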


Summary

INTRODUCTION

Lattice simulations of quantum field theories have proven essential for the theoretical understanding of fundamental interactions from first principles, perhaps most prominently so in quantum chromodynamics. In cases where such an understanding remains elusive, it may be instructive to search for as yet unidentified structures in the data to better characterise the dynamics. In this quest toward new physical insight, we turn to machine learning (ML) approaches, in particular from the subfield of deep learning [1]. One ansatz for the identification of relevant observables from lattice data is through representation learning, i.e., by training on a pretext task. The rationale behind this approach is that the ML algorithm learns to recognise patterns which can be leveraged to construct observables from low-level features that characterise different phases. We use LRP to identify relevant filters and discuss how these align with physical knowledge. This allows us to construct an observable that appears to be a distinctive feature of the paramagnetic phase.
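
To make the attribution step more tangible, here is a minimal NumPy sketch of one common LRP propagation rule (the epsilon-rule) applied to a single dense layer. It illustrates the general principle of redistributing a prediction backwards onto the inputs; the rule variants and layer types used in the paper may differ, and the function name, sizes, and toy data below are assumptions for illustration only.

# Minimal sketch of the LRP epsilon-rule for one dense layer (illustrative only).
# Relevance R_j assigned to input x_j from output relevances R_k:
#   R_j = sum_k (x_j * w_jk) / (z_k + eps * sign(z_k)) * R_k,
# where z_k = sum_j x_j * w_jk + b_k is the layer pre-activation.
import numpy as np

def lrp_epsilon_dense(x, W, b, R_out, eps=1e-6):
    """Propagate relevance R_out through a dense layer y = x @ W + b."""
    z = x @ W + b                           # pre-activations z_k
    s = R_out / (z + eps * np.sign(z))      # stabilised ratio R_k / z_k
    return x * (s @ W.T)                    # R_j = x_j * sum_k w_jk * s_k

# Toy usage: 4 inputs (e.g. predefined observables), 1 output (kappa prediction).
rng = np.random.default_rng(0)
x = rng.normal(size=4)
W = rng.normal(size=(4, 1))
b = np.zeros(1)
R_out = x @ W + b                           # start from the prediction itself
R_in = lrp_epsilon_dense(x, W, b, R_out)
print(R_in, R_in.sum(), R_out.sum())        # input relevances approximately conserve the output

The conservation property visible in the printout (the input relevances sum to approximately the network output) is what allows relevance scores to be compared across inputs and, in the convolutional case, across filters.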

YUKAWA THEORY
INSIGHTS FROM EXPLAINABLE AI
RESULTS
Importance hierarchies of known observables
Extracting observables from convolutional filters
CONCLUSIONS AND OUTLOOK
Dimensionless form of the Klein-Gordon action
Simulating fermions
R_j(x_i)