Abstract

In applications of machine learning to particle physics, a persistent challenge is how to go beyond discrimination to learn about the underlying physics. To this end, a powerful tool would be a framework for unsupervised learning, where the machine learns the intricate high-dimensional contours of the data upon which it is trained, without reference to pre-established labels. In order to approach such a complex task, an unsupervised network must be structured intelligently, based on a qualitative understanding of the data. In this paper, we scaffold the neural network's architecture around a leading-order model of the physics underlying the data. In addition to making unsupervised learning tractable, this design actually alleviates existing tensions between performance and interpretability. We call the framework JUNIPR: "Jets from UNsupervised Interpretable PRobabilistic models". In this approach, the set of particle momenta composing a jet is clustered into a binary tree that the neural network examines sequentially. Training is unsupervised and unrestricted: the network could decide that the data bear little correspondence to the chosen tree structure. However, when there is a correspondence, the network's output along the tree has a direct physical interpretation. JUNIPR models can perform discrimination tasks through the statistically optimal likelihood-ratio test, and they permit visualizations of discrimination power at each branching in a jet's tree. Additionally, JUNIPR models provide a probability distribution from which events can be drawn, yielding a data-driven Monte Carlo generator. As a third application, JUNIPR models can reweight events from one (e.g. simulated) data set to agree with distributions from another (e.g. experimental) data set.
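The three applications above all follow from one quantity: the jet probability, which factorizes into per-branching probabilities along the clustering tree. The sketch below illustrates that logic only; the function names (`branching_prob`, `jet_log_probability`, etc.) are hypothetical stand-ins, with the trained network's per-branching output replaced by an arbitrary callable.

```python
import math

def jet_log_probability(branchings, branching_prob):
    """Log-probability of a jet: the product of per-branching probabilities
    along its clustering tree, accumulated in log space for stability.
    `branching_prob` stands in for a trained model's output at one branching."""
    return sum(math.log(branching_prob(b)) for b in branchings)

def likelihood_ratio_discriminant(branchings, prob_a, prob_b):
    """Statistically optimal discriminant (Neyman-Pearson):
    log P_a(jet) - log P_b(jet), using two trained models a and b."""
    return (jet_log_probability(branchings, prob_a)
            - jet_log_probability(branchings, prob_b))

def reweight_factor(branchings, prob_target, prob_source):
    """Weight mapping events drawn from the source model onto the target
    model's distribution: P_target(jet) / P_source(jet)."""
    return math.exp(
        likelihood_ratio_discriminant(branchings, prob_target, prob_source))
```

Because each branching contributes one factor, the discriminant can also be inspected term by term, which is what enables the per-branching visualizations of discrimination power mentioned above.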

Highlights

  • The architecture of a neural network should be designed to process information efficiently, from the input data all the way through to the network’s final output

  • In [19], the pixel intensities in the two-dimensional jet image were combined into a vector, and a Fisher linear discriminant was used to find a plane in the high-dimensional space that maximally separates two different jet classes

  • Supervised learning is the optimization of a model to map input to output based on labeled input-output pairs in the training data. These training examples are typically simulated by Monte Carlo generators, in which case the labels come from the underlying physical processes being generated


Summary

Unsupervised learning in jet physics

To establish the framework clearly and generally, Sect. 2.1 begins by describing JUNIPR as a general probabilistic model, independent of the specific parametric form taken by the various functions it involves. From this perspective, such a probabilistic model could be implemented in many different ways.

General probabilistic model
Neural network implementation
Training and validation
Training data
Approach to training
Validation of model components
Increasing the branching function resolution
Applications and results
Likelihood ratio discrimination
Generation from JUNIPR
Reweighting Monte Carlo events
Factorization and JUNIPR
The encoding of global information
Clustering algorithm independence
Anti-kt shower generator
Findings
Conclusions and outlook