Abstract

The choice of point set used in numerical integration determines, to a large extent, the error estimate of the integral. Point sets can be characterized by their discrepancy, which is a measure of their nonuniformity. Point sets whose discrepancy is low with respect to the expected value for truly random point sets are generally considered desirable. A low value of the discrepancy implies a negative correlation between the points, which can be usefully employed to improve the error estimate of a numerical integral based on the point set. We apply the formalism developed in a previous publication to compute this correlation for one-dimensional point sets, using several different definitions of discrepancy.
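As a concrete illustration of one such definition, the sketch below computes the classical one-dimensional star discrepancy of a point set in [0, 1), using its well-known closed-form expression for sorted points. The function name and the NumPy-based setup are assumptions made for this example, and the star discrepancy is only one of the possible discrepancy definitions; it is not necessarily the one used in the paper.

    import numpy as np

    def star_discrepancy_1d(points):
        """Classical one-dimensional star discrepancy of points in [0, 1)."""
        x = np.sort(np.asarray(points, dtype=float))
        n = len(x)
        i = np.arange(1, n + 1)
        # Closed form for sorted points: D*_N = max_i max(i/N - x_(i), x_(i) - (i-1)/N)
        return max(np.max(i / n - x), np.max(x - (i - 1) / n))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        random_pts = rng.random(100)
        # Equidistant points attain the minimal value 1/(2N) = 0.005 here
        equidistant_pts = (np.arange(100) + 0.5) / 100
        print("random      :", star_discrepancy_1d(random_pts))
        print("equidistant :", star_discrepancy_1d(equidistant_pts))

Under this definition, a truly random set of 100 points typically yields a discrepancy an order of magnitude larger than the equidistant set, illustrating the sense in which low-discrepancy point sets are "more uniform than random".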
