Abstract

Machine learning (ML) enables the development of interatomic potentials with the accuracy of first-principles methods while retaining the speed and parallel efficiency of empirical potentials. While ML potentials traditionally use atom-centered descriptors as inputs, different models, such as linear regression and neural networks, map these descriptors to atomic energies and forces. This raises the question: how much of the improvement in accuracy is due to model complexity, irrespective of the descriptors? We curate three datasets to investigate this question in terms of ab initio energy and force errors: (1) solid and liquid silicon, (2) gallium nitride, and (3) the superionic conductor Li₁₀Ge(PS₆)₂ (LGPS). We further investigate how these errors affect simulated properties and verify whether improvements in fitting errors correspond to measurable improvements in property prediction. By assessing different models, we observe correlations between the error in the fitted quantities (e.g., atomic forces) and the error in simulated properties with respect to ab initio values.

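To make the descriptor-to-energy mapping concrete, the sketch below (not from the paper; the sizes, layer width, and synthetic data are placeholder assumptions) contrasts the two model classes mentioned above: a linear model and a small neural network, each mapping per-atom descriptors to atomic energies that are summed into a total structure energy before an energy RMSE is computed. Force errors would additionally require derivatives of the descriptors with respect to atomic positions and are omitted here for brevity.

```python
# Minimal sketch comparing a linear model and a small neural network that map
# per-atom descriptors to atomic energies (summed to a total energy).
# Descriptors and reference energies are random placeholders standing in for
# real atom-centered descriptors and ab initio total energies.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_structures, n_atoms, n_desc = 200, 64, 30            # placeholder sizes
descriptors = torch.randn(n_structures, n_atoms, n_desc)
true_w = torch.randn(n_desc)
energies = (descriptors @ true_w).sum(dim=1)            # synthetic "DFT" energies
energies = energies + 0.05 * torch.randn(n_structures)  # add noise

def make_model(hidden=None):
    """Linear model if hidden is None, otherwise a one-hidden-layer network."""
    if hidden is None:
        return nn.Linear(n_desc, 1)
    return nn.Sequential(nn.Linear(n_desc, hidden), nn.Tanh(), nn.Linear(hidden, 1))

def fit(model, x, y, epochs=500, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        # Per-atom energies are summed to give each structure's total energy.
        pred = model(x).squeeze(-1).sum(dim=1)
        loss = ((pred - y) ** 2).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():
        pred = model(x).squeeze(-1).sum(dim=1)
        return ((pred - y) ** 2).mean().sqrt().item()    # training energy RMSE

for name, model in [("linear", make_model()), ("neural network", make_model(32))]:
    print(f"{name:>15s} energy RMSE: {fit(model, descriptors, energies):.4f}")
```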