Abstract

Much recent research focuses on ICN (Information-Centric Networking), in which named content, rather than the end host, becomes the first-class citizen. In ICN, named content can be further divided into many small chunks, and chunk-based communication has merits over content-based communication. Universal in-network caching is one of the fundamental infrastructures of ICN. In this work, a chunk-level cache mechanism based on pre-fetch operation is proposed. The main idea is that routers equipped with a cache store pre-fetch and cache the next chunks that are likely to be accessed in the near future, according to the received requests and the cache policy, in order to reduce users' perceived latency. Two pre-fetch driven modes are presented to answer when and how to pre-fetch. LRU (Least Recently Used) is employed for cache replacement. Simulation results show that the average user-perceived latency and hop count can be decreased by employing this pre-fetch-based cache mechanism. Furthermore, we demonstrate that the results are influenced by many factors, such as the cache capacity, the Zipf parameter and the pre-fetch window size.
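For illustration, the sketch below outlines a chunk-level LRU cache with a pre-fetch window in the spirit of the mechanism described above. The class name, the (content, chunk_id) keys and the fetch_from_upstream callback are assumptions made for this example, not the authors' implementation.

```python
from collections import OrderedDict

class PrefetchLRUCache:
    """Minimal sketch: LRU chunk store that pre-fetches the next W chunks."""

    def __init__(self, capacity, window, fetch_from_upstream):
        self.capacity = capacity          # maximum number of chunks stored
        self.window = window              # pre-fetch window size W
        self.fetch = fetch_from_upstream  # callable: (content, chunk_id) -> chunk data
        self.store = OrderedDict()        # (content, chunk_id) -> data, kept in LRU order

    def _insert(self, key, data):
        # LRU replacement: evict the least recently used chunk when the store is full.
        if key in self.store:
            self.store.move_to_end(key)
            return
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)
        self.store[key] = data

    def on_request(self, content, chunk_id):
        key = (content, chunk_id)
        if key in self.store:             # cache hit: serve locally
            self.store.move_to_end(key)
            data = self.store[key]
        else:                             # cache miss: fetch from upstream and cache
            data = self.fetch(content, chunk_id)
            self._insert(key, data)
        # Pre-fetch the next W chunks, which may be requested in the near future.
        for i in range(1, self.window + 1):
            nxt = (content, chunk_id + i)
            if nxt not in self.store:
                self._insert(nxt, self.fetch(content, chunk_id + i))
        return data

# Usage example: chunks 0..3 are cached after the first request for chunk 0.
cache = PrefetchLRUCache(capacity=100, window=3,
                         fetch_from_upstream=lambda c, i: f"{c}#{i}")
cache.on_request("videoA", 0)
```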

Highlights

  • Most network applications and services care only about content distribution and retrieval, while the current Internet still relies on the host-to-host communication model

  • We demonstrate that the results are influenced by many factors, such as the cache capacity, Zipf parameters and pre-fetch window size

  • A cache scheme based on pre-fetch operation for Information-Centric Networking (ICN) is proposed in this paper


Summary

Introduction

Most network applications and services care only about content distribution and retrieval, while the current Internet still relies on the host-to-host communication model. Caching and pre-fetching are well-known strategies deployed at the application layer to improve the performance of the current network. Pre-fetching is essentially a strategy that hides network retrieval latency from the user rather than reducing it. Because of the characteristics of universal caching and chunk-based communication in ICN, routers can pre-fetch the following chunks of a given content, avoiding the complex prediction algorithms required by web pre-fetching; a minimal sketch of this idea follows below. Our goal is to reduce the latency experienced by clients through a cache policy based on pre-fetch operation. Our main contribution is to answer how to design an efficient pre-fetch-based cache scheme that reduces latency, and to measure the effect of the pre-fetch window on the performance improvement.
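As an illustration of why no web-style prediction model is needed, the following sketch derives pre-fetch targets directly from a received chunk request. The "/<content-prefix>/<chunk-number>" naming convention, the window size and the prefetch_targets function are illustrative assumptions, not the paper's exact naming format.

```python
def prefetch_targets(request_name: str, window: int) -> list[str]:
    """Return the names of the next `window` chunks implied by `request_name`."""
    prefix, _, chunk = request_name.rpartition("/")
    chunk_id = int(chunk)
    # Chunks of one content are named sequentially, so the "prediction" reduces
    # to incrementing the chunk number carried in the received request.
    return [f"{prefix}/{chunk_id + i}" for i in range(1, window + 1)]

# Example: a request for chunk 7 triggers pre-fetching of chunks 8..10.
print(prefetch_targets("/video/a.mp4/7", window=3))
# ['/video/a.mp4/8', '/video/a.mp4/9', '/video/a.mp4/10']
```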

Preliminary Knowledge
Cache Scheme Based on Prefetching Operation
Problem illustration
Pre-fetch operation
Qualitative analysis
Performance Evaluations
Simulation scenarios
Performance results
Conclusion