Abstract

Benchmarks are vital tools for measuring, evaluating, and comparing the performance of computer hardware and software systems. Standard benchmarks such as TREC, TPC, SPEC, SAP, Oracle, Microsoft, IBM, Wisconsin, AS3AP, OO1, OO7, and XOO7 have long been used to assess system performance. These benchmarks are domain-specific and domain-dependent: they model typical applications and are tied to a particular problem domain. Their results are therefore estimates of possible system performance for certain predetermined problem types. When the user's domain differs from the standard problem domain, or the application workload diverges from the standard workload, such benchmarks do not measure the performance of the user's problem domain accurately; actual performance, in terms of data and transactions, may vary significantly from what the standard benchmarks predict. In this research, we address the generalization and precision of the benchmark workload model for web search technology. Current performance measurement and evaluation methods yield rough estimates that vary widely as the problem domain changes, and vendor-supplied performance results can be neither reproduced nor reused in real users' environments. We therefore tackle domain boundness and workload boundness, the root causes of imprecise, unrepresentative, and irreproducible performance results. We address these issues with a domain-independent and workload-independent workload model benchmark method developed from the perspective of user requirements and generic constructs. Specifically, we present a user-driven workload model that develops a benchmark through a process of workload requirements representation, transformation, and generation via the common carrier of generic constructs, aiming at a more generalized and precise evaluation method that derives test suites from the actual user domain and application setting. The method comprises three main components, all based on the generic constructs: a high-level workload specification scheme, which formalizes the workload requirements; a translator, which transforms the specification; and a set of generators, which produce the test database and the test workload. We determine the generic constructs through an analysis of search methods; they form a page model, a query model, and a control model in the workload model development. The page model describes the web page structure, the query model defines the logic used to query the web, and the control model defines the control variables used to set up the experiments. In this study, we conducted ten baseline research experiments to validate the feasibility and validity of the benchmark method, and built an experimental prototype to execute them. Experimental results demonstrate that the method, based on generic constructs and driven by user requirements, is capable of modeling the standard benchmarks as well as more general benchmark requirements.
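
To make the three-model structure concrete, the following is a minimal sketch of how a workload specification built from a page model, a query model, and a control model might be represented and handed to a translator. All class names, fields, and parameters here are illustrative assumptions for exposition; they are not the paper's actual specification scheme or generic constructs.

```python
# Hypothetical sketch of the three-model workload specification described
# above. Every name and field below is an assumption made for illustration.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class PageModel:
    """Describes the structure of the web pages in the test database."""
    tags: List[str]          # structural elements a generated page may contain
    terms_per_page: int      # size parameter for generated pages
    link_fanout: int         # outgoing links per page


@dataclass
class QueryModel:
    """Defines the logic used to query the web (the test workload)."""
    operators: List[str]     # e.g. boolean AND/OR, phrase, proximity
    terms_per_query: int     # query length parameter
    result_cutoff: int       # number of results each query retrieves


@dataclass
class ControlModel:
    """Defines the control variables that set up an experiment."""
    database_size: int       # number of pages to generate
    workload_size: int       # number of queries to generate
    repetitions: int         # runs per measurement point


@dataclass
class WorkloadSpecification:
    """High-level specification formalizing the user's workload requirements."""
    page: PageModel
    query: QueryModel
    control: ControlModel


def translate(spec: WorkloadSpecification) -> Dict[str, dict]:
    """Translator step: turn the high-level specification into parameter
    sets that downstream generators could consume."""
    return {
        "database_params": {"size": spec.control.database_size, **vars(spec.page)},
        "workload_params": {"size": spec.control.workload_size, **vars(spec.query)},
    }


# Usage: formalize requirements, then translate them for the generators.
spec = WorkloadSpecification(
    page=PageModel(tags=["title", "body", "anchor"], terms_per_page=500, link_fanout=8),
    query=QueryModel(operators=["AND", "OR"], terms_per_query=3, result_cutoff=20),
    control=ControlModel(database_size=10_000, workload_size=200, repetitions=5),
)
print(translate(spec))
```

In the method itself, the generators would then consume such translated parameters to produce the test database and the test suite; the sketch only illustrates the representation-and-transformation pipeline the abstract describes.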
