Abstract

This paper is concerned with the scalability of large-scale grid monitoring and information services, which are mainly used for the discovery of resources of interest. Large-scale grid monitoring systems must balance three competing performance metrics: query response time, imposed network overhead, and information freshness. Improving any one of these metrics degrades at least one of the others, so every solution embodies a trade-off. The paper is motivated by the observation that existing grid monitoring systems can only be manually configured for a single trade-off among the three metrics, applied uniformly to all monitored resources; this implicitly treats every resource in a grid as equally important. Since this assumption is unlikely to hold in a large-scale grid setting, the paper proposes an importance-based monitoring architecture for large-scale grid information services, based on an adaptation of the web crawling paradigm. The main idea is that, because not all resources are equally important, the trade-off can be varied per resource according to its relative importance. The proposed architecture is described and evaluated through large-scale deployments of a prototype implementation on PlanetLab.
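
To make the core idea concrete, the following is a minimal sketch of an importance-weighted polling scheduler, assuming a simple policy in which a resource's refresh interval shrinks as its importance grows, much as web crawlers revisit important pages more often. This is an illustrative assumption, not the paper's actual implementation; all class names, parameters, and the interval formula are hypothetical.

```python
import heapq
import time
from dataclasses import dataclass, field

# Hypothetical sketch of importance-based monitoring, in the spirit of
# web-crawler revisit policies. Names and parameters are illustrative;
# they do not reproduce the paper's architecture.

@dataclass(order=True)
class ScheduledPoll:
    next_poll: float                      # heap is ordered by due time
    resource_id: str = field(compare=False)

class ImportanceScheduler:
    """Polls important resources more often (fresher information, more
    network overhead) and unimportant ones less often (staler information,
    less overhead), varying the trade-off per resource."""

    def __init__(self, min_interval: float = 5.0, max_interval: float = 300.0):
        self.min_interval = min_interval  # seconds, for importance = 1.0
        self.max_interval = max_interval  # seconds, for importance near 0
        self.heap: list[ScheduledPoll] = []

    def interval_for(self, importance: float) -> float:
        # importance in (0, 1]; higher importance -> shorter refresh interval.
        span = self.max_interval - self.min_interval
        return self.min_interval + (1.0 - importance) * span

    def schedule(self, resource_id: str, importance: float) -> None:
        due = time.time() + self.interval_for(importance)
        heapq.heappush(self.heap, ScheduledPoll(due, resource_id))

    def next_due(self) -> str | None:
        # Return the resource whose poll is due, if any; the caller would
        # poll it and then call schedule() again with its current importance.
        if self.heap and self.heap[0].next_poll <= time.time():
            return heapq.heappop(self.heap).resource_id
        return None
```

Under such a policy, highly important resources yield fresh information at the cost of extra monitoring traffic, while unimportant ones impose little overhead; the per-resource intervals realize the varying trade-off described above.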
