Abstract

The success of any M&S project within the DEDS domain is especially dependent upon the manner in which the experiments with the simulation program have been formulated and on the care with which the data flowing from these experiments is analysed. This Chapter explores these important matters. The fundamental feature that needs to be fully appreciated is that the data collected from the output variables of a DEDS simulation model are random variables. As a consequence, fundamental notions from statistical data analysis that have direct relevance to experiments with simulation models are reviewed and illustrated. Included here are the notions of correct replication of simulation runs within an experiment, confidence intervals and seeds for random number generation. The CERN Colt package provides the means to generate uncorrelated seeds and ABSmod/J provides a class to create confidence intervals. Many simulation studies in the DEDS domain fall into the category of steady-state studies, where data collection needs to be delayed until initial transient effects have dissipated. This introduces the need for a “warm-up period” to be embedded in the experimentation with the simulation model. Techniques for dealing with this issue are presented and illustrated using ABSmod/J, which provides a class based on Welch’s method to determine a warm-up time. Implicit in the goal of a simulation study is often the need to compare two or more alternative “designs” that have been identified for the system being studied. While this may appear to be straightforward, care must be taken in evaluating apparent differences between alternatives. Within a stochastic context, comparing output is more than simply comparing the point estimates produced by the alternatives: alternatives may be statistically equivalent even if they produce different point estimates. As well, confidence in the complete set of values produced by the various alternatives must be evaluated. Methods for dealing with such situations are presented and illustrated with examples. Often a simulation model incorporates a “performance measure” that is used to evaluate the behaviour of the system being studied, together with a number of parameters that need to be adjusted to achieve the best possible value for that performance measure. When the number of parameters becomes large, the search for the best parameter values can give rise to significant computational overhead. The “design of experiments” is an area of study that provides methods for exploring the parameter space in a more computationally efficient manner. An overview of the 2^m factorial design from this body of work is provided to show how the impact of parameters on the performance measure of interest can be evaluated. This method is then applied to examples in order to illustrate how experiments can be designed.
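As a small illustration of the kind of replication-based output analysis the chapter describes, the sketch below (plain Java, not the ABSmod/J confidence-interval class itself, and using hypothetical replication values) computes a point estimate and a 95% confidence interval from the output values of ten independent replications; the t critical value is taken from a standard t-table for nine degrees of freedom and would need to be changed for a different number of replications.

import java.util.Arrays;

// Minimal sketch: point estimate and 95% confidence interval from the
// output values of n independent simulation replications.
public class ReplicationCI {

    public static void main(String[] args) {
        // Hypothetical output values from 10 independent replications
        double[] y = {12.4, 11.8, 13.1, 12.0, 12.7, 11.5, 12.9, 12.2, 13.4, 12.1};

        int n = y.length;
        double mean = Arrays.stream(y).average().orElse(Double.NaN);

        // Sample standard deviation (divide by n - 1)
        double ss = 0.0;
        for (double v : y) {
            ss += (v - mean) * (v - mean);
        }
        double s = Math.sqrt(ss / (n - 1));

        // t critical value for 95% confidence with n - 1 = 9 degrees of
        // freedom (from a standard t-table; recompute for other n)
        double t = 2.262;

        double halfWidth = t * s / Math.sqrt(n);
        System.out.printf("point estimate = %.3f%n", mean);
        System.out.printf("95%% CI = [%.3f, %.3f]%n", mean - halfWidth, mean + halfWidth);
    }
}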
