Abstract

We consider Monte Carlo algorithms for computing an integral θ = ∫ f dπ which is positive but can be arbitrarily close to 0. It is assumed that we can generate a sequence X_n of uniformly bounded random variables with expectation θ. An estimator θ̂ = θ̂(X_1, X_2, …, X_N) is called an (ε, α)-approximation if it has fixed relative precision ε at a given confidence level 1 − α, that is, it satisfies P(|θ̂ − θ| ≤ εθ) ≥ 1 − α for all problem instances. Such an estimator exists only if we allow the sample size N to be random and adaptively chosen.

We propose an (ε, α)-approximation whose cost, that is, the expected number of samples, satisfies E N ∼ 2 ln(α⁻¹)/(θε²) as ε → 0 and α → 0. The main tool in the analysis is a new exponential inequality for randomly stopped sums. We also derive a lower bound on the worst-case complexity of (ε, α)-approximation; this bound likewise behaves as 2 ln(α⁻¹)/(θε²). Thus the worst-case efficiency of our algorithm, understood as the ratio of the lower bound to the expected sample size E N, approaches 1 as ε → 0 and α → 0.

An L² analogue is to find θ̂ such that E(θ̂ − θ)² ≤ ε²θ². We derive an algorithm with expected cost E N ∼ 1/(θε²) as ε → 0. To this end, we prove an inequality for the mean square error of randomly stopped sums. A corresponding lower bound also behaves as 1/(θε²), so the worst-case efficiency of our algorithm, in the L² sense, approaches 1 as ε → 0.
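To illustrate the kind of adaptive procedure the abstract describes, here is a minimal sketch of a sequential stopping-rule estimator for [0, 1]-valued samples with mean θ. It follows the classic stopping rule of Dagum, Karp, Luby and Ross (sample until the running sum crosses a fixed threshold, then estimate θ from the random sample size N), not the paper's own estimator; in particular, its cost constant is larger than the ~2 ln(α⁻¹)/(θε²) bound attained in the paper. The function name and the Bernoulli test distribution are illustrative choices, not from the source.

```python
import math
import random

def stopping_rule_estimate(sample, eps, alpha):
    """(eps, alpha)-approximation of theta = E[sample()], for samples in [0, 1].

    Dagum-Karp-Luby-Ross stopping rule: keep drawing until the running sum
    first exceeds a threshold chosen so that P(|est - theta| <= eps*theta)
    >= 1 - alpha, for every theta > 0.  The sample size N is random and
    adaptive, as the abstract argues it must be.
    """
    # DKLR threshold; larger by a constant factor than the paper's sharper bound.
    threshold = 1.0 + 4.0 * (math.e - 2.0) * (1.0 + eps) * math.log(2.0 / alpha) / eps ** 2
    total, n = 0.0, 0
    while total < threshold:
        total += sample()
        n += 1
    return threshold / n, n

# Illustrative usage: theta = 0.3 via Bernoulli samples.
random.seed(0)
theta = 0.3
est, n = stopping_rule_estimate(lambda: float(random.random() < theta),
                                eps=0.1, alpha=0.05)
```

Note that the expected cost E N scales like 1/θ times the threshold, so small θ forces many samples; the paper's contribution is to make the constant in front of ln(α⁻¹)/(θε²) asymptotically optimal.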
