Abstract

The importance of performance measurement and evaluation has long been recognized in the field of computer science. It is used to analyze existing systems, to project the performance of new or modified systems, and to guide the design and selection of new hardware and software. The tools and methods are varied, including timings, benchmarks, simulations, analytic modeling, and both hardware and software monitors; which are used in a given instance depends on the investigator's goals and the system under study.

In “Proposed Automated Information Management at NASA: Its Performance Measurement and Evaluation,” Rebecca R. Bogart presents performance measurements under consideration for evaluating an information system that is still in the planning stages. This system will be an automated information system for a network that includes NASA headquarters and ten NASA centers. The paper describes the system's goals and objectives and an evaluation technique that can be used to determine how well they are met. The primary tools proposed are software monitors embedded in the system to gather performance statistics, with user questionnaires planned to augment the evaluation effort.

John E. Tolle used transaction log analysis and stochastic processes in the study described in his paper, “Performance Measurement and Evaluation of Online Information Systems.” Transaction logs, records of user commands and system responses, were gathered from several different online information systems and analyzed. The paper describes how the desired data were extracted from these logs and how they were used in the stochastic processes. The primary objective of the study was to discover the extent to which online systems were used and to identify the patterns of user commands during information searches. This information could then be used to determine how well the systems' designs support the demands users place on them.

In “Workload Models for DBMS Performance Evaluation,” Evans J. Adams defines a hierarchy of workload models to support performance measurement and evaluation of database management systems. Each successive layer in the hierarchy is characterized in progressively greater detail, from the user's view of the conceptual data model down to the underlying machine at the lowest level. He describes techniques for deriving the workload model at each level and identifies, for each level, measurement parameters and performance metrics. The hierarchy is proposed as a framework for constructing a DBMS performance analyst's workbench, and the incorporation of such a workbench into future DBMSs is suggested.
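None of the three papers includes code, but the kind of embedded software monitor Bogart proposes can be sketched. In the Python sketch below, all names and the decorator structure are assumptions for illustration, not taken from the paper; it simply tallies call counts and cumulative response time for each monitored operation so the system can report usage statistics.

    import time
    from collections import defaultdict

    # Accumulated statistics, keyed by operation name.
    _stats = defaultdict(lambda: {"calls": 0, "total_seconds": 0.0})

    def monitored(name):
        """Wrap an operation so each call is timed and counted."""
        def wrap(fn):
            def inner(*args, **kwargs):
                start = time.perf_counter()
                try:
                    return fn(*args, **kwargs)
                finally:
                    rec = _stats[name]
                    rec["calls"] += 1
                    rec["total_seconds"] += time.perf_counter() - start
            return inner
        return wrap

    @monitored("document_retrieval")
    def retrieve(doc_id):
        ...  # the operation being measured (hypothetical)

    def report():
        for name, rec in _stats.items():
            mean = rec["total_seconds"] / rec["calls"] if rec["calls"] else 0.0
            print(f"{name}: {rec['calls']} calls, mean {mean:.4f}s")

    retrieve("DOC-042")
    report()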
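Tolle's stochastic analysis of command patterns rests on transition probabilities estimated from logged command sequences. A minimal sketch of that estimation step, assuming a first-order Markov model and hypothetical command names and log layout (the paper's actual data and models may differ):

    from collections import defaultdict

    def transition_matrix(sessions):
        """Estimate first-order Markov transition probabilities between
        user commands observed in transaction-log sessions.

        sessions: one command sequence per user session, e.g.
        [["SEARCH", "DISPLAY", "LOGOFF"], ...] (names are hypothetical).
        """
        counts = defaultdict(lambda: defaultdict(int))
        for session in sessions:
            for cur, nxt in zip(session, session[1:]):
                counts[cur][nxt] += 1
        # Normalize each row so outgoing probabilities sum to 1.
        return {
            cur: {nxt: n / sum(nexts.values()) for nxt, n in nexts.items()}
            for cur, nexts in counts.items()
        }

    # Two logged sessions from a hypothetical online catalog.
    logs = [
        ["SEARCH", "DISPLAY", "DISPLAY", "LOGOFF"],
        ["SEARCH", "SEARCH", "DISPLAY", "LOGOFF"],
    ]
    print(transition_matrix(logs)["SEARCH"])
    # {'DISPLAY': 0.667, 'SEARCH': 0.333} (approximately)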
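Adams's layered workload models could be rendered as a simple data structure in a performance analyst's workbench. The level names, parameters, and metrics below are illustrative assumptions consistent with a conceptual-to-machine hierarchy, not the paper's actual taxonomy:

    from dataclasses import dataclass

    @dataclass
    class WorkloadLevel:
        """One layer of the workload-model hierarchy, with the
        measurement parameters and performance metrics attached to it."""
        name: str
        parameters: list[str]
        metrics: list[str]

    # Hypothetical hierarchy, most abstract level first.
    hierarchy = [
        WorkloadLevel("conceptual",
                      ["query mix", "transaction arrival rate"],
                      ["throughput", "response time"]),
        WorkloadLevel("internal",
                      ["access-path usage", "buffer pool size"],
                      ["page I/O rate", "buffer hit ratio"]),
        WorkloadLevel("machine",
                      ["instruction mix", "device service times"],
                      ["CPU utilization", "channel utilization"]),
    ]

    for level in hierarchy:
        print(level.name, "->", ", ".join(level.metrics))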
