Abstract

Cache-aided content delivery is studied in a multi-server system with $P$ servers and $K$ users, each equipped with a local cache memory. In the delivery phase, each user connects randomly to any $\rho$ out of the $P$ servers. Thanks to the availability of multiple servers, which model small-cell base stations (SBSs), demands can be satisfied with reduced storage capacity at each server and a reduced delivery rate per server; however, this also leads to reduced multicasting opportunities compared to the single-server scenario. A joint storage and proactive caching scheme is proposed, which exploits coded storage across the servers, uncoded cache placement at the users, and coded delivery. The delivery \textit{latency} is studied for both \textit{successive} and \textit{parallel} transmissions from the servers. It is shown that, with successive transmissions, the achievable average delivery latency is comparable to that achieved in the single-server scenario, and the gap between the two, which depends on $\rho$ and the available redundancy across the servers, can be reduced by increasing the storage capacity at the SBSs. The optimality of the proposed scheme with uncoded cache placement and MDS-coded server storage is also proved for successive transmissions.
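
To make the "uncoded cache placement plus coded delivery" ingredient concrete, the sketch below implements the single-server coded delivery that serves as the baseline for this scheme: each file is split into subfiles indexed by $t$-subsets of users, and a single XOR multicast serves every $(t+1)$-subset of users at once. This is a generic toy instance under assumed parameters; the file contents and function names are illustrative, not the paper's notation.

```python
# Toy sketch of coded delivery with uncoded cache placement (the
# single-server baseline). Each file is split into one subfile per
# t-subset of users; user k caches every subfile whose index contains k.
from itertools import combinations

K, t = 4, 2                                   # K users, t = K*M/N cache parameter
users = tuple(range(K))
subsets = list(combinations(users, t))        # subfile indices

def split(file_bytes):
    """Uncoded placement: one equal-size subfile per t-subset of users."""
    chunk = len(file_bytes) // len(subsets)
    return {S: file_bytes[i * chunk:(i + 1) * chunk] for i, S in enumerate(subsets)}

def xor(parts):
    """Bitwise XOR of equal-length byte strings."""
    out = bytearray(parts[0])
    for p in parts[1:]:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

files = {n: bytes([n]) * 12 for n in range(3)}     # 3 toy files of 12 bytes each
subfiles = {n: split(w) for n, w in files.items()}
demand = [0, 1, 2, 0]                              # file requested by each user

# Coded delivery: one multicast per (t+1)-subset S of users, XOR-ing the
# subfile each user in S wants and all other users in S already cache.
for S in combinations(users, t + 1):
    msg = xor([subfiles[demand[k]][tuple(u for u in S if u != k)] for k in S])
    # Any user k in S cancels the terms it caches and recovers its subfile:
    k = S[0]
    cached = [subfiles[demand[j]][tuple(u for u in S if u != j)] for j in S[1:]]
    assert xor([msg] + cached) == subfiles[demand[k]][tuple(u for u in S if u != k)]
```

Each multicast simultaneously serves $t+1$ users; this is the multicasting gain that random connectivity to only $\rho$ of the $P$ servers partially erodes.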

Highlights

  • Coded caching and distributed storage have received significant attention in recent years as means to exploit the available memory space and processing power of individual network nodes to increase the throughput and efficiency of content delivery.

  • Coding for distributed storage systems has been studied extensively in the literature; in the femtocaching scenario, ideal maximum distance separable (MDS) codes allow users to recover contents by collecting parity bits from only a subset of the small-cell base stations (SBSs) they connect to [4] (a toy illustration follows this list).

  • In the successive transmission scenario, we show that the cost of the flexibility of distributed storage is a scaling of the delivery latency by a constant factor.
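
The following toy sketch illustrates the MDS property referenced in the second highlight: a file encoded across $P$ servers so that the shares held by \textit{any} $\rho$ of them suffice for reconstruction. It uses a generic Reed-Solomon-style code over a small prime field; the field size, parameters, and function names are assumptions for illustration, not the paper's exact construction.

```python
# Toy (P, rho) MDS-coded server storage: the file is the coefficient
# vector of a degree-(rho-1) polynomial, server s stores the evaluation
# f(x_s), and any rho evaluations recover the file by interpolation.
PRIME = 257                         # illustrative field size; symbols are 0..256

def encode(file_symbols, P):
    """One share (x_s, f(x_s)) per server; len(file_symbols) = rho."""
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(file_symbols)) % PRIME)
            for x in range(1, P + 1)]

def poly_mul_linear(poly, xm):
    """Multiply poly(X) by (X - xm) over GF(PRIME); coefficients low to high."""
    out = [0] * (len(poly) + 1)
    for i, c in enumerate(poly):
        out[i] = (out[i] - xm * c) % PRIME
        out[i + 1] = (out[i + 1] + c) % PRIME
    return out

def decode(shares):
    """Lagrange interpolation: any rho distinct shares give back the file."""
    coeffs = [0] * len(shares)
    for j, (xj, yj) in enumerate(shares):
        basis, denom = [1], 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                basis = poly_mul_linear(basis, xm)
                denom = denom * (xj - xm) % PRIME
        scale = yj * pow(denom, PRIME - 2, PRIME) % PRIME   # Fermat inverse
        for i, c in enumerate(basis):
            coeffs[i] = (coeffs[i] + scale * c) % PRIME
    return coeffs

P, rho = 5, 3
file_symbols = [42, 7, 19]                     # the file, as rho field symbols
shares = encode(file_symbols, P)               # one share per SBS
assert decode(shares[1:4]) == file_symbols     # any rho SBSs suffice
assert decode([shares[0], shares[2], shares[4]]) == file_symbols
```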

Summary

INTRODUCTION

Coded caching and distributed storage have received significant attention in recent years as means to exploit the available memory space and processing power of individual network nodes to increase the throughput and efficiency of content delivery. Storing the files across multiple SBSs and allowing users to connect randomly to a subset of them results in a loss of multicasting opportunities for the servers, indicating a trade-off between the coded caching gain and the flexibility provided by distributed storage across the servers, which, to the best of our knowledge, has not been studied before.

The authors of [11] study the delivery latency considering parallel transmissions from the servers, and show that there is a gain from using multiple servers when the relay nodes employ simple random linear network coding. Compared to their linear network model, our model corresponds to an identity network transfer matrix, in which the scheme of [11] does not provide any gains, since it is not optimized for the realization of the topology. Another line of related works studies caching in combination networks [12], [14], which consider a single server serving cache-equipped users through multiple relay nodes.

Notation: For a set $A = \{a_1, \ldots, a_p\}$, $X_A$ denotes the tuple $(X_{a_1}, \ldots, X_{a_p})$. $\mathbb{1}_E$ denotes the indicator function of the event $E$, i.e., its value is $1$ when the event $E$ happens. $\lfloor x \rfloor$ denotes the largest integer less than or equal to $x$, and $\lceil x \rceil$ denotes the smallest integer greater than or equal to $x$.
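
As a concrete (and deliberately simplified) picture of the random linear network coding mentioned above, the sketch below has a relay forward random XOR combinations of its received packets over GF(2); a receiver can decode once the accompanying coefficient vectors have full rank. The packets and parameters are made up for illustration, and this is not the scheme of [11].

```python
# Minimal sketch of a relay doing random linear network coding over
# GF(2): it forwards random XOR combinations of received packets, and a
# receiver can decode once the coefficient vectors have full rank.
import random

def rlnc_relay(packets, n_out):
    """Emit n_out random nonzero GF(2) combinations of the input packets."""
    coded = []
    for _ in range(n_out):
        coeffs = [random.randint(0, 1) for _ in packets]
        if not any(coeffs):                      # avoid the useless all-zero combo
            coeffs[random.randrange(len(packets))] = 1
        payload = 0
        for c, p in zip(coeffs, packets):
            if c:
                payload ^= p                     # XOR = addition over GF(2)
        coded.append((coeffs, payload))
    return coded

def gf2_rank(vectors):
    """Rank of 0/1 coefficient vectors via Gaussian elimination over GF(2)."""
    rows = [int("".join(map(str, v)), 2) for v in vectors]
    rank = 0
    for bit in reversed(range(len(vectors[0]))):
        idx = next((i for i, r in enumerate(rows) if (r >> bit) & 1), None)
        if idx is None:
            continue
        pivot = rows.pop(idx)
        rows = [r ^ pivot if (r >> bit) & 1 else r for r in rows]
        rank += 1
    return rank

packets = [0b1010, 0b0110, 0b1101]               # three toy source packets
coded = rlnc_relay(packets, 5)                   # relay sends 5 combinations
# With 5 random combinations of 3 packets, decodability holds w.h.p.:
print(gf2_rank([c for c, _ in coded]) == len(packets))
```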

PROBLEM SETTING
Formal Problem Statement
Server Storage Placement
User Cache Placement
Delivery Phase
Redundancy in Server Storage Capacity
Performance Analysis
LOWER BOUND
Redundancy in Server Storage
PARALLEL SBS TRANSMISSIONS
Redundant Server Storage Capacity
RESULTS AND DISCUSSIONS
CONCLUSIONS AND FUTURE WORK