Abstract

Web caching plays a key role in delivering web objects to end users on the World Wide Web (WWW). However, cache size is a fundamental limitation of web caching, and retrieving the same media object from the origin server many times wastes network bandwidth. Caching media objects in full is also impractical: because cache capacity is limited, it exhausts the cache storage while holding only a few objects. Moreover, traditional web caching policies such as Least Recently Used (LRU), Least Frequently Used (LFU), and Greedy Dual Size (GDS) suffer from cache pollution, i.e. media objects stored in the cache are rarely visited again, which degrades the performance of web proxy caching. In this work, intelligent cooperative web caching approaches based on the J48 decision tree and Naive Bayes (NB) supervised machine learning algorithms are presented. The proposed approaches take advantage of structured peer-to-peer systems, in which the contents of peers' caches are shared through a Distributed Hash Table (DHT), in order to enhance the performance of the web caching policy. The performance of the proposed approaches is evaluated through a trace-driven simulation on a dataset collected from the IRCache network. The results demonstrate that the proposed policies improve the performance of the traditional LRU, LFU, and GDS policies in terms of Hit Ratio (HR) and Byte Hit Ratio (BHR). The results are also compared to the most relevant state-of-the-art web proxy caching policies.
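
For reference, HR and BHR have standard definitions (not restated in this summary): over a trace of N requests, let h_i = 1 if request i is served from a cache (0 otherwise) and let s_i be the size in bytes of the requested object; then

    HR = \frac{\sum_{i=1}^{N} h_i}{N}, \qquad BHR = \frac{\sum_{i=1}^{N} h_i \, s_i}{\sum_{i=1}^{N} s_i}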

Highlights

  • Web caching is a technique in which local copies of web pages are stored at locations close to the end users

  • Traditional web caching algorithms such as Least Recently Used (LRU), Least Frequently Used (LFU), and Greedy Dual Size (GDS) suffer from cache pollution: the most popular media objects receive the most requests, while a large portion of the media objects stored in the cache are rarely visited (Koskela et al., 2003; Arlitt et al., 2000)

  • Traditional web caching policies such as LRU, LFU, and GDS suffer from cache pollution, where media objects stored in the cache are rarely visited again, which degrades the performance of web proxy caching (a minimal illustration follows this list)
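
To make the pollution effect concrete, the following minimal Python sketch (the object names and cache size are illustrative, not taken from the paper) implements plain LRU replacement and shows how a burst of objects requested only once evicts a frequently requested object:

    from collections import OrderedDict

    class LRUCache:
        """Minimal LRU replacement policy: evict the least recently used object."""

        def __init__(self, capacity):
            self.capacity = capacity      # maximum number of cached objects
            self.store = OrderedDict()    # keys ordered from least to most recently used

        def request(self, key):
            """Return True on a cache hit; on a miss, admit the object (evicting if full)."""
            if key in self.store:
                self.store.move_to_end(key)       # refresh recency on a hit
                return True
            if len(self.store) >= self.capacity:
                self.store.popitem(last=False)    # evict the least recently used object
            self.store[key] = True                # admit unconditionally (no admission filter)
            return False

    cache = LRUCache(capacity=3)
    cache.request("popular.html")                 # miss: admitted
    for i in range(3):
        cache.request("one-timer-%d.gif" % i)     # one-time requests push popular.html out
    print(cache.request("popular.html"))          # False: the popular object was evicted

Because plain LRU admits every fetched object, the three one-time objects displace the popular one, and the next request for it becomes a miss.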

Introduction

Web caching is a technique in which local copies of web pages are stored at locations close to the end users. Traditional web caching algorithms such as Least Recently Used (LRU), Least Frequently Used (LFU), and Greedy Dual Size (GDS) suffer from cache pollution: the most popular media objects receive the most requests, while a large portion of the media objects stored in the cache are rarely visited (Koskela et al., 2003; Arlitt et al., 2000). A web caching policy is commonly defined by the cache replacement algorithm it uses. This work presents new policies that combine machine learning algorithms with the sharing of information about peers' cache contents in order to enhance the traditional LRU, LFU, and GDS web caching policies.
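
This summary does not spell out how the pieces are wired together, but one common way to combine them is sketched below in Python; the function signature, the feature vector, the DHT interface (lookup/advertise), and the "revisited" label are hypothetical placeholders, not the authors' code. A request is resolved against the local cache, then against peers' caches via the DHT, and finally against the origin server; a trained classifier (e.g. J48 or Naive Bayes) then decides whether the fetched object is worth admitting to the cache:

    def cooperative_fetch(url, features, local_cache, dht, classifier, fetch_origin):
        """Resolve a request via the local cache, then peer caches (DHT), then the origin.

        local_cache, dht, classifier, and fetch_origin are placeholder interfaces."""
        obj = local_cache.get(url)
        if obj is not None:                        # local cache hit
            return obj
        peer = dht.lookup(url)                     # which peer, if any, caches this URL?
        obj = peer.get(url) if peer is not None else None
        if obj is None:                            # global miss: fetch from the origin server
            obj = fetch_origin(url)
        # ML-gated admission: store the object only if the trained classifier
        # (e.g. a J48 decision tree or Naive Bayes model) predicts it will be
        # requested again, which is meant to limit cache pollution.
        if classifier.predict(features) == "revisited":
            local_cache.put(url, obj)
            dht.advertise(url)                     # publish the newly cached URL to peers
        return obj

In this sketch the admission gate is what targets cache pollution: objects the classifier predicts will not be re-requested are served to the client but never stored, while the DHT lets a peer hit replace a costly trip to the origin server.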
