Abstract

This paper compares two methods of assessing variability in simulation output. The methods make specific allowance for two sources of variation: that caused by uncertainty in estimating unknown input parameters (parameter uncertainty), and that caused by the random variation within the simulation model itself (simulation uncertainty). The first method is based on classical statistical differential analysis; we show explicitly that, under general conditions, the two sources contribute separately to the total variation. In the classical approach, certain sensitivity coefficients have to be estimated. The effort needed to do this grows linearly with the number of unknown parameters, and there is the additional difficulty of detecting spurious variation when that number is large. It is shown that a parametric form of bootstrap sampling provides an alternative method which suffers from neither problem. For illustration, simulation of the operation of a small (4-node) computer communication network is used to compare the accuracy of estimates under the two methods. A larger, more realistic (30-node) network is then presented, showing how the bootstrap method becomes competitive when the number of unknown parameters is large.
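To make the bootstrap alternative concrete, the following is a minimal sketch (not the authors' code) of a parametric bootstrap applied to simulation output: input data are resampled from the fitted input model, the unknown parameter is re-estimated from each resample, and the simulation is rerun with the re-estimated value, so the spread of the resulting outputs reflects both parameter uncertainty and simulation uncertainty. The toy single-server queue, the exponential service-time model, and names such as run_simulation are illustrative assumptions standing in for the network models of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_simulation(service_rate, horizon=1_000):
    """Toy stand-in (hypothetical) for the real simulation model:
    mean time in system for a single-server queue with unit-rate
    Poisson arrivals and exponential service at the given rate."""
    arrivals = rng.exponential(1.0, size=horizon).cumsum()
    services = rng.exponential(1.0 / service_rate, size=horizon)
    finish = np.empty(horizon)
    finish[0] = arrivals[0] + services[0]
    for i in range(1, horizon):
        # Service starts when the customer arrives or the server frees up.
        finish[i] = max(arrivals[i], finish[i - 1]) + services[i]
    return np.mean(finish - arrivals)

# Observed service times from which the unknown input parameter is estimated.
observed = rng.exponential(1.0 / 1.5, size=200)   # true rate 1.5, treated as unknown
rate_hat = 1.0 / observed.mean()                  # MLE of the exponential rate

B = 200
outputs = np.empty(B)
for b in range(B):
    # Parametric bootstrap: resample data from the fitted input model,
    # re-estimate the parameter, then rerun the simulation with it.
    resample = rng.exponential(1.0 / rate_hat, size=observed.size)
    rate_star = 1.0 / resample.mean()
    outputs[b] = run_simulation(rate_star)

# The spread of the bootstrap outputs reflects both parameter uncertainty
# and the simulation's own random variation.
print("point estimate:", run_simulation(rate_hat))
print("bootstrap std. error:", outputs.std(ddof=1))
```

Note that, unlike the classical differential approach, no per-parameter sensitivity coefficients are estimated here: adding further unknown input parameters only changes the resampling and re-estimation step, which is what makes the method attractive when the number of parameters is large.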
