Abstract

Cache-aided content delivery is studied in a multi-server system with $P$ servers and $K$ users, each equipped with a local cache memory. In the delivery phase, each user connects randomly to any $\rho$ out of the $P$ servers. Thanks to the availability of multiple servers, which model small base stations (SBSs) with limited storage capacity, user demands can be satisfied with reduced storage capacity at each server and reduced delivery rate per server; however, this also leads to reduced multicasting opportunities compared to a single server serving all the users simultaneously. A joint storage and proactive caching scheme is proposed, which exploits coded storage across the servers, uncoded cache placement at the users, and coded delivery. The delivery \textit{latency} is studied for both \textit{successive} and \textit{simultaneous} transmission from the servers. It is shown that, with successive transmission, the achievable average delivery latency is comparable to that achieved by a single server, while the gap between the two depends on $\rho$, which determines the available redundancy across servers, and can be reduced by increasing the storage capacity at the SBSs.
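To make the setting concrete, the following is a minimal numeric sketch, assuming the classic Maddah-Ali-Niesen single-server delivery rate and $(\rho, P)$ MDS-coded storage across the servers, with each connected server sending an equal share of the coded payload under successive transmission. The exact scheme and latency expressions of the paper may differ, and all parameter values below are illustrative.

```python
from math import comb

def man_rate(K: int, t: int) -> float:
    """Maddah-Ali--Niesen delivery rate (in file units) for K users
    with cache parameter t = K*M/N (assumed integer)."""
    return comb(K, t + 1) / comb(K, t)  # equals (K - t) / (t + 1)

# Hypothetical multi-server setting: each file is (rho, P) MDS-coded
# across P servers, so any rho servers jointly hold a full file.
# Under successive transmission, the rho connected servers take turns,
# each sending a 1/rho share of the coded-delivery payload, so the
# total latency still scales with the full single-server rate.
K, t = 10, 2          # 10 users, each caching a t/K = 1/5 fraction
P, rho = 6, 3         # 6 servers, each user reaches 3 of them

single_server_latency = man_rate(K, t)   # one server sends everything
per_server_rate = man_rate(K, t) / rho   # each server's share
print(f"single-server latency : {single_server_latency:.3f} file units")
print(f"per-server rate       : {per_server_rate:.3f} file units")
```

Under these assumptions, the total latency stays close to the single-server benchmark while each server only transmits a $1/\rho$ share, matching the abstract's claim of reduced per-server rate.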

Highlights

  • The unprecedented growth in transmitted data volumes across networks necessitates the design of more efficient delivery methods that can exploit the available memory space and processing power of individual network nodes to increase the throughput and efficiency of data availability.

  • We study the impact of the topology on the sum and maximum delivery rates, and the trade-off between the server storage space and the average of these rates.

  • We study the delivery delay considering simultaneous transmission from the servers. Our model considers both limited-storage servers and a random topology over the delivery network, which is unknown at the placement phase (see the sketch after this list).
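As a toy illustration of the random-topology assumption in the last highlight, the sketch below samples, for each user, a uniformly random $\rho$-subset of the $P$ servers at delivery time. The contrast with an uncoded single-server placement is added here purely for illustration and is not taken from the paper.

```python
import random

# Illustrative sketch of the random-topology assumption: each user
# connects, only in the delivery phase, to a uniformly random subset
# of rho out of P servers. All values here are made up.
P, rho, K = 6, 3, 10
random.seed(0)
topology = {u: set(random.sample(range(P), rho)) for u in range(K)}

# Every user reaches rho distinct servers, which is exactly what
# (rho, P) MDS-coded storage needs to reconstruct any file.
assert all(len(s) == rho for s in topology.values())

# Contrast: if a file sat uncoded on one fixed server, a user would
# reach it only with probability rho / P (here 1/2).
home = 0
lucky = sum(home in s for s in topology.values())
print(f"{lucky}/{K} users reach the uncoded file's home server")
print(f"{K}/{K} users decode under (rho, P) MDS-coded storage")
```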



Introduction

The unprecedented growth in transmitted data volumes across networks necessitates the design of more efficient delivery methods that can exploit the available memory space and processing power of individual network nodes to increase the throughput and efficiency of data availability. Part of the data can be pushed into nodes’ local cache memories during off-peak hours, called the placement phase, to reduce the burden on the network, in particular the wireless downlink, during peak-traffic periods when all the users place their requests, called the delivery phase. Intelligent design of the cache contents creates multicasting opportunities across users, so that multiple demands can be satisfied simultaneously through coded delivery. Coded caching is able to utilize the cumulative cache capacity in the network to satisfy all the users at much lower rates, or equivalently with lower delivery latency [1]-[10]. Files are replicated or coded at multiple cache-equipped small base stations (SBSs) so that a user may reconstruct its request from only a subset of the available SBSs. SBSs can act as edge caches and provide contents to nearby users.
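The multicasting gain described above can be seen in the canonical two-user, two-file example from the coded caching literature: each user caches half of every file, and a single XOR transmission then serves both demands at once. The sketch below illustrates this; the byte strings are placeholders, not the paper's scheme.

```python
# Canonical two-user coded-delivery example (Maddah-Ali & Niesen),
# shown as a toy sketch; the file contents are made up.
A = b"AAAAAAAA"; B = b"BBBBBBBB"     # library of N = 2 files
A1, A2 = A[:4], A[4:]                # each file split into two halves
B1, B2 = B[:4], B[4:]

cache1 = {"A1": A1, "B1": B1}        # user 1 caches the first halves
cache2 = {"A2": A2, "B2": B2}        # user 2 caches the second halves

# User 1 requests A (missing A2); user 2 requests B (missing B1).
# One XOR multicast serves both, instead of two separate unicasts.
xor = bytes(x ^ y for x, y in zip(A2, B1))

rec_A2 = bytes(x ^ y for x, y in zip(xor, cache1["B1"]))  # user 1 decodes
rec_B1 = bytes(x ^ y for x, y in zip(xor, cache2["A2"]))  # user 2 decodes
assert rec_A2 == A2 and rec_B1 == B1
print("both demands served with a single coded transmission")
```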
