Abstract

A simulation methodology is proposed to evaluate the performance of large-scale Web search engines hosted by datacenters. The salient features of the methodology are the use of models of parallel computing to overcome the complexities associated with simulating hardware and system-software details; a circulating-tokens approach to represent sequences of operations that compete for search engine resources; benchmark programs to measure the cost of the relevant operations; and simulations driven by real user traces to capture the dynamics of user behavior. An experimental evaluation of the methodology, ranging from clusters of processors down to single multithreaded processors, shows that it can generate simulation programs capable of predicting performance precisely and efficiently.
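The circulating-tokens idea can be illustrated with a minimal sketch: tokens stand in for queries from a user trace, each token competes for a pool of identical service units, and per-operation costs come from benchmark measurements. This is a simplified, hypothetical illustration of the general technique, not the paper's actual simulator; the function name, parameters, and single fixed service cost are assumptions for the example.

```python
import heapq

def simulate(arrivals, service_time, servers=1):
    """Token-based simulation of queries competing for a pool of servers.

    arrivals: sorted arrival times of tokens (e.g. from a real user trace)
    service_time: cost of one operation, as measured by a benchmark program
    servers: number of identical service units (e.g. cluster nodes)
    Returns the mean response time (queueing delay + service time).
    """
    # Min-heap holding the time at which each server next becomes free.
    free_at = [0.0] * servers
    heapq.heapify(free_at)
    total_response = 0.0
    for t in arrivals:
        # A token waits if every server is busy, then occupies the
        # earliest-available server for one service interval.
        start = max(t, heapq.heappop(free_at))
        finish = start + service_time
        heapq.heappush(free_at, finish)
        total_response += finish - t
    return total_response / len(arrivals)
```

For example, three tokens arriving simultaneously at a single server are serviced back to back, so queueing delay grows with contention, while spreading the same tokens over three servers removes the queueing entirely.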
