Abstract

Fully automated classification methods that yield direct physical insights into phase diagrams are of current interest. Here, we demonstrate an unsupervised machine learning method for phase classification which is rendered interpretable via an analytical derivation of its optimal predictions and allows for an automated construction scheme for order parameters. Based on these findings, we propose and apply an alternative, physically motivated, data-driven scheme which relies on the difference between mean input features. This mean-based method is computationally cheap and directly interpretable. As an example, we consider the physically rich ground-state phase diagram of the spinless Falicov-Kimball model.

Highlights

  • Phase diagrams and phase transitions are of paramount importance to physics [1,2,3]

  • While many-body systems have a large number of degrees of freedom, their phases are usually characterized by a small set of physical quantities like response functions or order parameters

  • We find that principal component analysis (PCA) and k-means clustering assign configuration samples related through transformations of p4m to different clusters when using the raw configuration samples as input in the noise-free case (a minimal sketch of such a check follows below)
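
One way to probe this observation could look like the following minimal sketch. It uses random stand-in configurations instead of actual Falicov-Kimball ground states and scikit-learn in place of any implementation used in the paper; the lattice size, sample counts, and the choice of a 90-degree rotation as the representative p4m transformation are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
L = 8                                                  # linear lattice size (illustrative choice)
configs = rng.integers(0, 2, size=(200, L, L))         # stand-in for raw occupation configurations
rotated = np.rot90(configs, k=1, axes=(1, 2))          # p4m-related partners: 90-degree rotations
X = np.concatenate([configs, rotated]).reshape(400, -1).astype(float)

# Cluster the leading principal components of the raw (flattened) configurations.
components = PCA(n_components=2).fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(components)

# Fraction of symmetry-related pairs whose members land in different clusters.
mismatch = np.mean(labels[:200] != labels[200:])
print(f"symmetry-related pairs split across clusters: {mismatch:.2f}")
```

Comparing the label of each configuration with that of its rotated partner directly measures how often symmetry-related samples are split across clusters.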


Summary

INTRODUCTION

Phase diagrams and phase transitions are of paramount importance to physics [1,2,3]. While many-body systems have a large number of degrees of freedom, their phases are usually characterized by a small set of physical quantities like response functions or order parameters. At each sampled point p_i in parameter space, a set of samples {S_i} is generated. Based on these samples, a scalar indicator for phase transitions I(p_i) is calculated. We gain a full understanding of the resulting phase classification and the associated values of the indicator for phase transitions. These insights pave the way for the key result of this paper: a physically motivated, general, data-driven, unsupervised phase classification approach. It relies on the difference between mean input features as an indicator for phase transitions (Fig. 1) and is conceptually simple. We derive the form of its optimal predictive model and discuss how this analytical expression makes the prediction-based method and its corresponding phase classification explainable.
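The mean-based indicator described above admits a very compact implementation. The sketch below is an illustration under assumed array shapes and with synthetic data, not the authors' code: the indicator for a transition between neighboring parameter points is taken as the norm of the difference of the mean input features at those points.

```python
import numpy as np

def mean_based_indicator(samples_per_point):
    """samples_per_point: list of arrays of shape (num_samples, num_features),
    ordered along a one-dimensional sweep through parameter space."""
    # Mean input-feature vector at each sampled parameter point p_i.
    means = np.array([s.mean(axis=0) for s in samples_per_point])
    # Scalar indicator between adjacent points: norm of the difference of mean features.
    return np.linalg.norm(np.diff(means, axis=0), axis=1)

# Synthetic stand-in data: 10 parameter points, 100 samples of 64 features each,
# with an artificial jump in the mean halfway through the sweep.
rng = np.random.default_rng(0)
samples = [rng.normal(loc=(0.0 if i < 5 else 1.0), size=(100, 64)) for i in range(10)]
indicator = mean_based_indicator(samples)
print(indicator)  # largest value sits between points 4 and 5, flagging a candidate boundary
```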

FALICOV-KIMBALL MODEL
PREDICTION-BASED METHOD
Phase diagrams
Optimal predictive model
Interpretation
MEAN-BASED METHOD
Correlation indicator
Generic indicators
COMPARISON WITH PRINCIPAL COMPONENT ANALYSIS AND k-MEANS CLUSTERING
CONCLUSION AND OUTLOOK
Neural network architecture
Training procedure
Vector field
Optimal predictive model in noise-free case