Abstract

Prefetching is an important technique for a single Web server to reduce average Web access latency, and applying it to a cluster server yields even better performance. This paper proposes two models for parallel Web prefetching on a cluster server, described in the form of I/O automata, corresponding to the two service approaches of a Web cluster server: session persistence and session non-persistence. In addition, an improved scheduling algorithm based on Web prefetching (Prefetch_LARD) is put forward. By mining the transition probabilities between pages from Web access logs, the algorithm builds a prefetching model based on a Markov chain. Experiments show that, under the same test environment, the Prefetch_LARD algorithm increases the cache hit ratio by up to 26.9% and the throughput by up to 18.8% compared with the classical locality-aware request distribution (LARD) algorithm.
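
The following is a minimal illustrative sketch, not the paper's implementation, of the kind of Markov-chain model the abstract describes: first-order transition probabilities between pages are estimated from per-session page sequences mined out of Web access logs, and pages whose transition probability exceeds a threshold are selected as prefetch candidates. The function names, the session data, and the threshold value are assumptions for illustration only.

```python
from collections import defaultdict


def build_transition_probabilities(sessions):
    """Estimate first-order Markov transition probabilities P(next | current)
    from per-session page sequences reconstructed from Web access logs."""
    counts = defaultdict(lambda: defaultdict(int))
    for pages in sessions:
        for cur, nxt in zip(pages, pages[1:]):
            counts[cur][nxt] += 1
    probs = {}
    for cur, nexts in counts.items():
        total = sum(nexts.values())
        probs[cur] = {nxt: c / total for nxt, c in nexts.items()}
    return probs


def pages_to_prefetch(probs, current_page, threshold=0.3):
    """Return candidate pages, ordered by probability, whose transition
    probability from the current page exceeds a prefetch threshold
    (the threshold value here is purely illustrative)."""
    candidates = probs.get(current_page, {})
    return [page for page, p in sorted(candidates.items(),
                                       key=lambda kv: kv[1], reverse=True)
            if p >= threshold]


# Example: three hypothetical client sessions mined from an access log.
sessions = [
    ["/index.html", "/news.html", "/sports.html"],
    ["/index.html", "/news.html", "/weather.html"],
    ["/index.html", "/sports.html"],
]
probs = build_transition_probabilities(sessions)
print(pages_to_prefetch(probs, "/index.html"))  # e.g. ['/news.html']
```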
