Abstract

Machine learning-based neural network potentials can provide ab initio-level predictions while reaching the large length and time scales typically accessible only to empirical force fields. Traditionally, neural network potentials rely on a local description of atomic environments to achieve this scalability. These local descriptions result in short-range models that neglect the long-range interactions necessary for processes like dielectric screening in polar liquids. Several approaches to including long-range electrostatic interactions within neural network models have appeared recently, and here we investigate the transferability of one such model, the self-consistent field neural network (SCFNN), which focuses on learning the physics associated with long-range response. Because it learns the essential physics, such a neural network model should exhibit at least partial transferability. We illustrate this transferability by modeling dielectric saturation in a SCFNN model of water. We show that the SCFNN model can predict nonlinear response at high electric fields, including saturation of the dielectric constant, without being trained on these high field strengths or the resulting liquid configurations. We then use these simulations to examine the nuclear and electronic structure changes underlying dielectric saturation. Our results suggest that neural network models can exhibit transferability beyond the linear response regime and make genuine predictions when the relevant physics is properly learned.
