Abstract

I thank each of the commentators for their additional insights and perspectives on simulation optimization. In this rejoinder, I will attempt to briefly highlight or respond to a few of the points in the commentaries. First, all three of the commentators agree with the basic thesis of an existing disconnect between research and commercially available software, though it is also clear that the gap was already starting to be bridged even as the first draft of the Feature Article was being prepared. Andradóttir’s commentary points to this convergence of interests, most visibly in the development of statistical screening, ranking, and selection techniques that can be used in conjunction with any search routine, including the currently implemented ones. Additional current references provided in her commentary point the reader to these cutting-edge developments. In his commentary, Kelly asserts that the most recent software versions have incorporated methods to address estimation variance in the search process. Glynn’s commentary includes further analysis and history as to the cause of the original disconnect, as manifested by the delay in including optimization routines in simulation software.

Next, I wholeheartedly agree with Andradóttir’s main thesis that many of the algorithms in the academic literature are in fact already suitable for implementation in commercial simulation software. I believe that Kelly’s response to this (hoping not to put words in his mouth or misrepresent him) would be that these algorithms are much less efficient than what is currently implemented. As Kelly states, “In reality, obtaining a high-quality answer in the fewest number of evaluations is the core problem.” An aid in making the comparison of algorithms more objective would be the availability of a common set of representative test problems, as advocated in the Feature Article and buttressed by Glynn’s commentary citing positive results in other areas. Furthermore, the goal advocated by Kelly is also the focus of the recent screening and selection approach of Nelson et al. (2001), the optimal computing budget allocation approach of Chen et al. (2000), and the closely related Bayesian approach of Chick and Inoue (2001); the first sketch below illustrates the budget-allocation idea.

Having both the perspective of a current practitioner developing optimization algorithms for commercial simulation software and that of a former academic who is familiar with the research literature, Kelly offers some unique and provocative views in his comments. However, I want to clarify one sentence in the first paragraph of his commentary, which states that “the problems academics solve are usually modeled with a small number of continuous variables without any constraints.” Lest the reader be misled into thinking that many of these algorithms are of academic use only, I point out that there are many real-world problems in “traditional” engineering system design, identification, and control to which continuous-variable algorithms such as stochastic approximation have been applied successfully for many decades (and for large numbers of variables); the second sketch below illustrates one such algorithm. These are often classified under the broader term “stochastic optimization,” rather than “simulation optimization,” on which the Feature Article focuses. Furthermore, even in discrete-event simulation, there are numerous examples in practice of large-scale practical problems solved using these techniques, though most of them are customized (i.e., problem-specific) rather than general purpose.
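To make the budget-allocation goal concrete, the following is a minimal Python sketch of an asymptotic OCBA-style allocation rule in the spirit of Chen et al. (2000); the function name, example numbers, and simplifications (distinct sample means, positive variances) are illustrative assumptions, not code from the cited work.

import numpy as np

def ocba_ratios(means, stds):
    """OCBA-style asymptotic allocation ratios for selecting the design
    with the smallest mean. Sketch only: assumes distinct sample means
    and positive standard deviations (ties are not handled)."""
    means = np.asarray(means, dtype=float)
    stds = np.asarray(stds, dtype=float)
    k = len(means)
    b = int(np.argmin(means))            # current sample-best design
    delta = means - means[b]             # optimality gaps (zero for b)
    r = np.zeros(k)
    others = [i for i in range(k) if i != b]
    for i in others:                     # N_i proportional to (sigma_i / delta_i)^2
        r[i] = (stds[i] / delta[i]) ** 2
    r[b] = stds[b] * np.sqrt(sum((r[i] / stds[i]) ** 2 for i in others))
    return r / r.sum()                   # fraction of the budget per design

# Example: three designs after an initial stage of replications; most of
# the remaining budget goes to the high-variance close competitor.
print(ocba_ratios(means=[4.1, 4.5, 5.2], stds=[1.0, 1.2, 0.8]))

In a sequential implementation, one would re-estimate the means and variances after each increment of replications and reapply the rule until the simulation budget is exhausted, which is precisely the fewest-evaluations economy that Kelly emphasizes.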
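Similarly, as an illustration of the continuous-variable methods just mentioned, here is a minimal sketch of one well-known stochastic approximation variant, Spall’s simultaneous perturbation (SPSA), which forms a gradient estimate from only two noisy evaluations per iteration regardless of the number of variables; the gain-sequence exponents follow commonly cited guideline values, but the remaining constants are illustrative assumptions.

import numpy as np

def spsa_minimize(loss, theta0, iters=1000, a=0.1, c=0.1,
                  alpha=0.602, gamma=0.101, seed=0):
    """SPSA-style stochastic approximation sketch: each iteration uses two
    noisy loss evaluations to estimate a gradient in any dimension."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    for n in range(1, iters + 1):
        a_n = a / n ** alpha                     # decaying step size
        c_n = c / n ** gamma                     # decaying perturbation size
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # random directions
        g_hat = (loss(theta + c_n * delta)
                 - loss(theta - c_n * delta)) / (2.0 * c_n * delta)
        theta -= a_n * g_hat                     # Robbins-Monro-type update
    return theta

# Example: a noisy quadratic in 10 variables; iterates drift toward the origin.
def noisy(x):                                    # noisy simulation-like objective
    return float(np.sum(x ** 2)) + float(np.random.normal(scale=0.1))

print(spsa_minimize(noisy, np.ones(10)))

In a simulation-optimization setting, loss would be a single replication (or a batch average) of the simulation model evaluated at the given parameter vector.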
