Quantitatively connecting properties of parton distribution functions (PDFs, or parton densities) to the theoretical assumptions made within the QCD analyses that produce them has been a longstanding problem in HEP phenomenology. To confront this challenge, we introduce an ML-based explainability framework, XAI4PDF, to classify PDFs by parton flavor or underlying theoretical model using ResNet-like neural networks (NNs). Leveraging the differentiable nature of ResNet models, this approach deploys guided backpropagation to dissect relevant features of fitted PDFs, identifying x-dependent signatures of PDFs important to the ML model classifications. Applying our framework, we sort PDFs according to the analysis that produced them while constructing quantitative, human-readable maps locating the x regions most affected by the internal theory assumptions going into each analysis. This technique expands the toolkit available to PDF analysis and adjacent particle phenomenology while pointing to promising generalizations.
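To illustrate the attribution step named above, here is a minimal sketch of the guided-backpropagation rule on a toy ReLU classifier. This is not the XAI4PDF implementation: the network, its dimensions, and the input "PDF on an x-grid" are all hypothetical stand-ins, chosen only to show the modified ReLU backward pass (gradients flow only where the forward pre-activation was positive and the incoming gradient is positive), which is what produces the x-dependent saliency maps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a PDF classifier: a tiny two-layer ReLU
# network mapping a PDF sampled on a 32-point x-grid to 3 class scores
# (e.g. three candidate analyses).
W1 = rng.normal(size=(16, 32))  # hidden units x input grid points
W2 = rng.normal(size=(3, 16))   # classes x hidden units

def forward(x):
    z1 = np.maximum(W1 @ x, 0)  # ReLU hidden layer
    return z1, W2 @ z1          # hidden activations, class scores

def guided_backprop(x, cls):
    """Guided gradient of the chosen class score w.r.t. the input x-grid.

    At each ReLU, the standard backward pass keeps gradients only where
    the forward activation was positive; guided backprop additionally
    zeroes negative incoming gradients.
    """
    z1, _ = forward(x)
    g = W2[cls]                 # d(score_cls) / d(hidden activations)
    g = g * (z1 > 0) * (g > 0)  # guided ReLU backward rule
    return W1.T @ g             # per-x-point attribution

x = np.abs(rng.normal(size=32))  # toy "PDF values" on the x-grid
saliency = guided_backprop(x, cls=0)
print(saliency.shape)  # (32,)
```

Each entry of `saliency` scores how strongly the corresponding x-grid point drives the chosen class score, which is the sense in which guided backpropagation yields an x-dependent map of the features the classifier relies on.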