Abstract

Before an investigator can claim that his simulation model is a useful tool for studying behavior under new hypothetical conditions, he is well advised to check its consistency with the true system, as it exists before any change is made. The success of this validation establishes a basis for confidence in results that the model generates under new conditions. After all, if a model cannot reproduce system behavior without change, then we hardly expect it to produce truly representative results with change.

The problem of how to validate a simulation model arises in every simulation study in which some semblance of a system exists. The space devoted to validation in Naylor's book Computer Simulation Experiments with Models of Economic Systems indicates both the relative importance of the topic and the difficulty of establishing universally applicable criteria for accepting a simulation model as a valid representation.

One way to approach the validation of a simulation model is through its three essential components: input, structural representation, and output. The input, for example, consists of exogenous stimuli that drive the model during a run; consequently, one would like to assure himself that the probability distributions and time-series representations used to characterize input variables are consistent with available data. With regard to structural representation, one would like to test whether the mathematical and logical representations conflict with the true system's behavior. With regard to output, one could feel comfortable with a simulation model if it behaved similarly to the true system when exposed to the same input.

Interestingly enough, the greatest validation effort for large econometric models has concentrated on structural representation. No doubt this is due to the fact that regression methods, whether the simple least-squares method or a more comprehensive simultaneous-equations technique, facilitate hypothesis testing regarding structural representation in addition to providing procedures for parameter estimation. Given the availability of these regression methods, it seems hard to believe that at least some part of a model's structural representation cannot be validated. Lamentably, some researchers choose to discount and avoid the use of available test procedures.

With regard to input analysis, techniques exist for determining the temporal and probabilistic characteristics of exogenous variables. For example, the autoregressive moving-average schemes described in Box and Jenkins' book, Time Series Analysis: Forecasting and Control, are available today in canned statistical computer programs. Maximum likelihood estimation procedures are available for most common probability distributions, and tables based on sufficient statistics have begun to appear in the literature. Regardless of how little data are available, a model's use would benefit from a conscientious effort to characterize the mechanism that produced those data.
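To make the input-analysis step concrete, the following is a minimal sketch of the two techniques named above: a Box-Jenkins style autoregressive moving-average fit and a maximum likelihood distribution fit. The data, the model orders, and the library choices (statsmodels and scipy) are illustrative assumptions, not part of the original discussion.

```python
# Illustrative sketch only: data, model orders, and library choices
# (statsmodels, scipy) are assumptions for demonstration.
import numpy as np
from scipy import stats
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)

# Hypothetical exogenous input series with mild autocorrelation, e.g. daily demand.
noise = rng.normal(0, 3, size=200)
demand = np.empty(200)
demand[0] = 100.0
for t in range(1, 200):
    demand[t] = 100 + 0.6 * (demand[t - 1] - 100) + noise[t]

# Box-Jenkins style fit: an ARMA(1,1) scheme estimated by maximum likelihood.
arma_fit = ARIMA(demand, order=(1, 0, 1)).fit()
print(arma_fit.summary())

# Maximum likelihood fit of a candidate distribution to i.i.d. input data
# (hypothetical service times), followed by a rough goodness-of-fit check.
service_times = rng.gamma(shape=2.0, scale=1.5, size=500)
shape, loc, scale = stats.gamma.fit(service_times, floc=0)   # MLE within the gamma family
ks_stat, p_value = stats.kstest(service_times, "gamma", args=(shape, loc, scale))
print(f"gamma MLE: shape={shape:.2f}, scale={scale:.2f}, KS p-value={p_value:.3f}")
```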
As mentioned earlier, a check of consistency between model and system output in response to the same input would be an appropriate step in validation. A natural question that arises is: what form should the consistency check take? One approach might go as follows: let X1, ..., Xn be the model's output in n consecutive time intervals, and let Y1, ..., Yn be the system's output for n consecutive time intervals in response to the same stimuli. Test the hypothesis that the joint probability distribution of X1, ..., Xn is identical with that of Y1, ..., Yn.

My own feeling is that this test is too stringent and creates a misplaced emphasis on statistical exactness. I would prefer to frame output validation in more of a decision-making context. In particular, one question that seems useful to answer is: in response to the same input, does the model's output lead decision makers to take the same action that they would take in response to the true system's output? While less stringent than the test first described, its implementation requires access to decision makers. This seems to me a desirable requirement, for only through continual interaction with decision makers can an investigator hope to gauge the sensitive issues to which his model should be responsive and the degree of accuracy that these sensitivities require.
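As a rough illustration of the two approaches contrasted above, the sketch below compares model and system output in two ways: a two-sample test on the marginal output distributions, which is a much weaker condition than equality of the joint distributions of X1, ..., Xn and Y1, ..., Yn, and a hypothetical decision rule that asks whether both outputs lead to the same action. The data, the threshold policy, and the library choice (scipy) are illustrative assumptions, not a procedure prescribed by the paper.

```python
# Illustrative sketch only: outputs, threshold policy, and the marginal
# two-sample test are assumptions for demonstration, not the paper's method.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
model_output = rng.normal(10.0, 2.0, size=100)    # X1, ..., Xn from the simulation
system_output = rng.normal(10.3, 2.1, size=100)   # Y1, ..., Yn from the true system

# Distributional consistency check on the marginal output distributions.
ks_stat, p_value = stats.ks_2samp(model_output, system_output)
print(f"two-sample KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")

# Decision-oriented check: do model and system output lead to the same action?
def action(output, threshold=11.0):
    """Hypothetical policy: expand capacity when average output exceeds the threshold."""
    return "expand capacity" if output.mean() > threshold else "hold"

print(f"model -> {action(model_output)}, system -> {action(system_output)}, "
      f"agree = {action(model_output) == action(system_output)}")
```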
