Abstract

Machine-learned interatomic potentials (MLIPs) are typically trained on datasets that encompass a restricted subset of possible input structures, which presents a potential challenge for their generalization to a broader range of systems outside the training set. Nevertheless, MLIPs have demonstrated impressive accuracy in predicting forces and energies in simulations involving intricate and complex structures. In this paper, we aim to take steps towards rigorously explaining the excellent observed generalization properties of MLIPs. Specifically, we offer a comprehensive theoretical and numerical investigation of the generalization of MLIPs in the context of dislocation simulations. We quantify precisely how the accuracy of such simulations is directly determined by a few key factors: the size of the training structures, the choice of training observations (e.g., energies, forces, virials), and the level of accuracy achieved in the fitting process. Notably, our study reveals the crucial role of fitting virials in ensuring the consistency of MLIPs for dislocation simulations. Our series of careful numerical experiments, encompassing screw, edge, and mixed dislocations, supports existing best practices in the MLIPs literature but also provides new insights into the design of datasets and loss functions.
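To make the role of the three observation types concrete, the sketch below shows one common way a weighted least-squares fitting objective over energies, forces, and virials might be written. The function name, weights, and per-atom normalisation are illustrative assumptions, not the paper's actual loss function.

```python
import numpy as np

def mlip_loss(pred, ref, w_E=1.0, w_F=1.0, w_V=1.0):
    """Weighted sum-of-squares loss over energies, forces, and virials.

    `pred` and `ref` are dicts with keys 'energy' (scalar per structure),
    'forces' (N x 3 array), and 'virial' (3 x 3 array). The weights w_E,
    w_F, w_V are hypothetical knobs controlling how strongly each
    observation type is emphasised in the fit.
    """
    loss = 0.0
    # Energy term: per-atom normalisation so large and small cells contribute comparably.
    n_atoms = len(ref["forces"])
    loss += w_E * ((pred["energy"] - ref["energy"]) / n_atoms) ** 2
    # Force term: mean squared error over all atomic force components.
    loss += w_F * np.mean((pred["forces"] - ref["forces"]) ** 2)
    # Virial term: the observation type the abstract highlights as crucial
    # for consistent dislocation (elastic far-field) behaviour.
    loss += w_V * np.mean((pred["virial"] - ref["virial"]) ** 2)
    return loss
```

In practice such a loss would be summed over all training structures, with the weights chosen to balance the different units and magnitudes of the three observation types.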
