Abstract

This article proposes a methodology for constructing and assessing the quality of benchmark suites dedicated to understanding server behavior. Applications are numerous: behavioral model creation, power capping, abnormal-behavior detection, and model validation. Reaching all the operating points of a server requires automatic detection of its operating range, which in turn requires an exhaustive search of the reachable states. While many works rely on simplistic benchmark suites for this purpose, we show in this article that relying on simple assumptions (leading to simple models, such as linear ones) introduces a large bias, and we identify the bias attributable to individual hardware components. The key to understanding and modeling system behavior (in terms of both performance and power consumption) is to stress the system and its subsystems across a large set of configurations and to collect values spanning a broad spectrum. We define a coverage metric for evaluating how well the measured performance indicators and power values cover this spectrum, evaluate different benchmarks with this metric, and thoroughly analyze their impact on the collected values. Finally, we propose a benchmark suite providing large coverage, suited to general cases.
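The abstract does not spell out how the coverage metric is computed. As a purely illustrative sketch (the function name `coverage`, the equal-width binning scheme, and the `n_bins` parameter are assumptions, not the paper's definition), one simple way to quantify how well a benchmark's measurements span an indicator's operating range is the fraction of bins of that range hit by at least one sample:

```python
import numpy as np

def coverage(samples, lower, upper, n_bins=20):
    """Fraction of equal-width bins of [lower, upper] containing at least one sample.

    `samples` are measured values of one indicator (e.g. power in watts);
    `lower` and `upper` bound the indicator's operating range.
    Hypothetical illustration; not the metric defined in the paper.
    """
    samples = np.asarray(samples, dtype=float)
    # Map each sample to a bin index, clipping samples at the range edges.
    idx = np.clip(((samples - lower) / (upper - lower) * n_bins).astype(int),
                  0, n_bins - 1)
    return len(np.unique(idx)) / n_bins

# Example: a suite whose power readings cluster near idle covers the range poorly.
idle_heavy = coverage([105, 110, 112, 115], lower=100, upper=300)   # small fraction
broad_mix  = coverage([110, 150, 190, 230, 270], lower=100, upper=300)  # larger fraction
```

Under this reading, a higher value indicates that the benchmark drives the server through a wider portion of its operating range, which is the property the article argues is needed for unbiased modeling.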
