Abstract

We have studied the statistical and systematic errors which arise in Monte Carlo simulations and how the magnitude of these errors depends on the size of the system being examined when a fixed amount of computer time is used. We find that, depending on the degree of self-averaging exhibited by the quantities measured, the statistical errors can increase, decrease, or stay the same as the system size is increased. The systematic underestimation of response functions due to the finite number of measurements made is also studied. We develop a scaling formalism to describe the size dependence of these errors, as well as their dependence on the "bin length" (size of the statistical sample), both at and away from a phase transition. The formalism is tested using simulations of the d=3 Ising model at the infinite-lattice transition temperature. We show that for a 96×96×96 system noticeable systematic errors (systematic underestimation of response functions) are still present for total run lengths of 10^6 Monte Carlo steps/site (MCS) with measurements taken at regular intervals of 10 MCS.
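
The bin-length effect described in the abstract can be illustrated in a few dozen lines. The sketch below is not the authors' code: the lattice size L = 8, the run lengths, and the susceptibility estimator based on |m| are our own illustrative assumptions, chosen so the script runs quickly. It performs a Metropolis simulation of the d=3 Ising model at the infinite-lattice critical temperature, records the magnetization every 10 MCS as in the abstract, and then estimates the susceptibility within bins of increasing length n, averaged over bins.

```python
# A minimal sketch (not the authors' code) of the systematic underestimation
# of a response function computed from bins of finitely many correlated
# measurements. All run parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

L = 8                  # linear lattice size (small, for speed; the paper uses up to 96)
N = L**3               # number of spins
T_c = 4.5115           # infinite-lattice critical temperature of the d=3 Ising model (J/k_B)
beta = 1.0 / T_c

spins = rng.choice([-1, 1], size=(L, L, L))

def sweep(spins):
    """One Monte Carlo step/site: N single-spin-flip Metropolis attempts."""
    for _ in range(N):
        x, y, z = rng.integers(0, L, size=3)
        s = spins[x, y, z]
        nn = (spins[(x + 1) % L, y, z] + spins[(x - 1) % L, y, z] +
              spins[x, (y + 1) % L, z] + spins[x, (y - 1) % L, z] +
              spins[x, y, (z + 1) % L] + spins[x, y, (z - 1) % L])
        dE = 2.0 * s * nn
        if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
            spins[x, y, z] = -s

# Thermalize, then measure |m| every 10 MCS, mirroring the
# measurement interval quoted in the abstract.
for _ in range(200):
    sweep(spins)

measurements = []
for _ in range(1000):
    for _ in range(10):
        sweep(spins)
    measurements.append(spins.sum())

m = np.abs(np.array(measurements, dtype=float)) / N

# Estimate chi' = beta * N * (<m^2> - <|m|>^2) within each bin of length n,
# then average over bins. Because <m^2> and <|m|> are taken from the same
# few correlated samples, short bins systematically underestimate the
# fluctuation -- the bias the abstract describes.
for n in (10, 50, 250, len(m)):
    bins = m[: (len(m) // n) * n].reshape(-1, n)
    chi_bins = beta * N * ((bins**2).mean(axis=1) - bins.mean(axis=1)**2)
    print(f"bin length {n:5d}: chi estimate {chi_bins.mean():8.3f}")
```

With parameters like these, the n = 10 bins typically yield a susceptibility estimate well below the full-run value, and the estimate grows toward its asymptote as the bin length increases, which is the qualitative behavior the paper's scaling formalism quantifies.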
