Conspectus

Density functional theory (DFT) calculations are used in over 40,000 scientific papers each year, in chemistry, materials science, and far beyond. DFT is extremely useful because it is computationally much less expensive than ab initio electronic structure methods and allows systems of considerably larger size to be treated. However, the accuracy of any Kohn-Sham DFT calculation is limited by the approximation chosen for the exchange-correlation (XC) energy. For more than half a century, humans have developed the art of such approximations, using general principles, empirical data, or a combination of both, typically yielding useful results, but with errors well above the chemical accuracy limit (1 kcal/mol). Over the last 15 years, machine learning (ML) has made major breakthroughs in many applications and is now being applied to electronic structure calculations. This recent rise of ML raises the question: can ML propose or improve density functional approximations? Success could greatly enhance the accuracy and usefulness of DFT calculations without increasing their cost.

In this work, we detail efforts in this direction, beginning with an elementary proof of principle from 2012, namely, finding the kinetic energy of several fermions in a box using kernel ridge regression. This is an example of orbital-free DFT, for which a successful general-purpose scheme could make even DFT calculations run much faster. We trace the development of that work to state-of-the-art molecular dynamics simulations of resorcinol with chemical accuracy. By training on ab initio examples, one bypasses the need to find the XC functional explicitly. We also discuss how the exchange-correlation energy itself can be modeled with such methods, especially for strongly correlated materials. Finally, we show how deep neural networks with differentiable programming can be used to construct accurate density functionals from very few data points by using the Kohn-Sham equations themselves as a regularizer. All these cases show that ML can create approximations more accurate than those designed by humans, and that it can find approximations that handle difficult cases such as strong correlation. However, such ML-designed functionals have not been implemented in standard codes because of one last great challenge: generalization. We discuss how effortlessly human-designed functionals can be applied to a wide range of situations, and how difficult that is for ML.
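To make the 2012 proof of principle concrete, the following is a minimal, self-contained sketch of kernel ridge regression applied to a noninteracting kinetic-energy functional T[n]. It is not the setup of the original work, which trained on densities generated from random potentials inside the box; this simplified variant varies only the box width L (for which the exact answer is known analytically), and all hyperparameter values (SIGMA, LAMBDA, grid size) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Simplified stand-in for the 2012 proof of principle: learn the
# kinetic energy T[n] of N noninteracting fermions in a 1D hard-wall
# box directly from the density n(x), using kernel ridge regression.
# Only the box width L is varied here (an assumption for simplicity);
# the original work varied a potential inside the box instead.

N_FERMIONS = 4      # number of occupied levels
GRID = 100          # points used to discretize each density
LAMBDA = 1e-8       # KRR regularization strength (assumed value)
SIGMA = 10.0        # RBF kernel length scale (assumed value)

def density_and_energy(L):
    """Exact box density on a grid and total kinetic energy (atomic units)."""
    x = np.linspace(0.0, L, GRID)
    k = np.arange(1, N_FERMIONS + 1)[:, None]           # level indices
    phi = np.sqrt(2.0 / L) * np.sin(k * np.pi * x / L)  # box eigenstates
    n = (phi ** 2).sum(axis=0)                          # density n(x)
    T = (np.pi ** 2 / (2 * L ** 2)) * (k.ravel() ** 2).sum()
    return n, T

def rbf_kernel(A, B):
    """Gaussian kernel between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * SIGMA ** 2))

# Training set: densities and exact kinetic energies for a range of widths.
L_train = np.linspace(1.0, 5.0, 40)
X = np.array([density_and_energy(L)[0] for L in L_train])
y = np.array([density_and_energy(L)[1] for L in L_train])

# Closed-form KRR fit: alpha = (K + lambda * I)^(-1) y.
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + LAMBDA * np.eye(len(X)), y)

# Predict T[n] for an unseen box width and compare with the exact value.
n_test, T_exact = density_and_energy(2.345)
T_pred = rbf_kernel(n_test[None, :], X) @ alpha
print(f"exact T = {T_exact:.6f}, KRR T = {T_pred[0]:.6f}")
```

The design point this illustrates is the one the Conspectus emphasizes: the regression maps the density itself to an energy, bypassing orbitals entirely, which is why a general-purpose learned kinetic-energy functional would make orbital-free DFT practical.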