Abstract

Constructing more expressive ansatzes has been a primary focus in quantum Monte Carlo, aimed at more accurate ab initio calculations. However, with more powerful ansatzes, e.g. recently developed models based on neural-network architectures, training becomes more difficult and expensive, which can be counterproductive for the accuracy of the calculation. In this work, we propose to make use of the training data to perform an empirical variance extrapolation when using neural-network ansatzes in variational Monte Carlo. We show that this approach can speed up convergence and go beyond the limitations of the ansatz to obtain an improved estimate of the energy. Moreover, variance extrapolation greatly enhances error cancellation, yielding significantly improved relative energies, which are key to chemistry and physics problems.
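The abstract does not spell out the fitting procedure, so the following is only a minimal sketch of zero-variance extrapolation under the common assumption that, near convergence, the variational energy depends roughly linearly on the energy variance; a linear fit over (variance, energy) pairs collected during training is then extrapolated to zero variance. All function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def extrapolate_zero_variance(energies, variances):
    """Fit E ~ E0 + k * Var(E) and return the zero-variance intercept E0.

    energies, variances: 1D sequences of per-checkpoint energy means and
    energy variances collected from the tail of VMC training.
    """
    energies = np.asarray(energies, dtype=float)
    variances = np.asarray(variances, dtype=float)
    # Linear least-squares fit of energy against variance;
    # np.polyfit returns coefficients from highest to lowest degree.
    slope, intercept = np.polyfit(variances, energies, deg=1)
    return intercept, slope

# Hypothetical usage with data from the last few training checkpoints:
# E0, k = extrapolate_zero_variance([-75.012, -75.018, -75.021],
#                                   [0.045, 0.031, 0.022])
# print(f"extrapolated energy: {E0:.4f} Ha")
```

For relative energies, one would presumably apply the same extrapolation to each system and take the difference of the extrapolated values; the improved error cancellation mentioned above refers to this kind of differencing.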
