Abstract

We present two-stage experiment designs for use in simulation experiments that compare systems in terms of their expected (long-run average) performance. These procedures simultaneously achieve the following with a prespecified probability of being correct: (i) find the best system or a near-best system; (ii) identify the subset of systems whose expected performance differs from the best by more than a practically insignificant amount; and (iii) provide a lower confidence bound on the probability that the best or a near-best system will be selected. All of the procedures assume normally distributed data, and versions are provided that permit unequal variances and the use of common random numbers.
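
The abstract does not include the procedures themselves, so the following is only a minimal sketch of the generic two-stage structure such designs share, written in Python and assuming a Rinott-style second stage: the function two_stage_select_best, the simulate interface, the first-stage size n0, the indifference-zone parameter delta, and the constant h are hypothetical illustrations, not the paper's procedures (in practice h is obtained from tables or numerical integration for the desired confidence level, and the paper's variants additionally handle unequal variances and common random numbers).

```python
import numpy as np

def two_stage_select_best(simulate, k, n0=10, delta=0.5, h=2.5, rng=None):
    """Sketch of a generic two-stage indifference-zone selection (Rinott-style).

    simulate(i, n, rng) -> n i.i.d. (approximately normal) outputs of system i.
    delta : practically insignificant difference (indifference-zone parameter).
    h     : constant for the desired confidence level (assumed given here).
    """
    rng = rng or np.random.default_rng()

    # Stage 1: n0 replications per system to estimate the output variances.
    stage1 = [np.asarray(simulate(i, n0, rng), dtype=float) for i in range(k)]
    s2 = np.array([x.var(ddof=1) for x in stage1])

    # Total sample sizes driven by the first-stage variance estimates.
    n_total = np.maximum(n0, np.ceil((h * np.sqrt(s2) / delta) ** 2)).astype(int)

    # Stage 2: take the additional replications and compute overall sample means.
    means = np.empty(k)
    for i in range(k):
        extra = simulate(i, n_total[i] - n0, rng) if n_total[i] > n0 else []
        means[i] = np.concatenate([stage1[i], np.asarray(extra, dtype=float)]).mean()

    best = int(np.argmax(means))  # assuming larger expected performance is better
    return best, means, n_total

# Hypothetical usage: five normally distributed systems with unknown means.
true_means = [1.0, 1.2, 0.8, 1.5, 1.1]
def simulate(i, n, rng):
    return rng.normal(true_means[i], 1.0, size=n)

best, means, n_used = two_stage_select_best(simulate, k=5)
```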
