Abstract

Developers of computer systems are frequently faced with a choice among several possible implementations of a given function. Similarly, the operator of a computer system faces a choice in setting adjustable parameters built into the system. In each case, the alternative that optimizes system performance is sought. While analytic or simulation models may sometimes assist in making the choice, in most cases only actual running tests of the various alternatives can provide definitive answers. But even when all the variations can be tested on a real system, it is not easy to obtain meaningful results. One has the choice between performing the tests on a carefully controlled benchmark job stream, or on the system in an actual uncontrolled working environment. In the first case, the results may be inapplicable except to the specific benchmarks chosen, and a realistic benchmark stream is often hard to come by (this is particularly true for time-sharing or real-time computer systems). In the second case, random fluctuations in the magnitude and nature of the load placed on the system make it difficult to compare the results obtained under different system versions. Consequently, although performance evaluation is still possible, data must be gathered over a fairly long period of time. For instance, different versions of the CP-67 time-sharing system [1, 2] were compared by Bard [3] under actual operating conditions. However, several months' worth of data were required. To meet the objectives outlined above, results must be forthcoming at a faster pace.
