Abstract

Recently, a new class of decentralized random coded caching schemes has received increasing interest, as these schemes can achieve an order-optimal memory-load tradeoff through decentralized content placement when the file size goes to infinity. However, most existing decentralized schemes may not provide enough coded-multicasting opportunities in the practical operating regime where the file size is limited. In this paper, we focus on the finite file size regime and propose a decentralized random coded caching scheme and a partially decentralized sequential coded caching scheme. These two schemes have different coordination requirements in the content placement phase and can be applied to different scenarios. The content placement of the proposed schemes aims at ensuring abundant coded-multicasting opportunities in the content delivery phase when the file size is finite. We analyze the worst-case (over all possible requests) loads of our schemes and show that the sequential coded caching scheme outperforms the random coded caching scheme in the finite file size regime. Analytical results indicate that, as the file size grows to infinity, the proposed schemes achieve the same memory-load tradeoff as Maddah-Ali-Niesen's decentralized scheme, and hence are also order-optimal. Numerical results show that the two proposed schemes outperform Maddah-Ali-Niesen's decentralized scheme when the file size is not very large.
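To make the setting concrete, the following is a minimal sketch (not the paper's proposed schemes) of the baseline decentralized random placement in the style of Maddah-Ali-Niesen, where each user independently caches a random subset of each file's bits, together with a simple count of one kind of coded-multicasting opportunity between two users. All function and variable names here are illustrative assumptions, and the pairing rule is deliberately simplified.

```python
import random

def decentralized_placement(num_files, file_bits, cache_bits, num_users, seed=0):
    """Baseline decentralized random placement (sketch): each user independently
    caches cache_bits // num_files randomly chosen bits of every file, with no
    coordination between users."""
    rng = random.Random(seed)
    bits_per_file = cache_bits // num_files  # cache split evenly across files
    caches = []
    for _ in range(num_users):
        cache = {f: set(rng.sample(range(file_bits), bits_per_file))
                 for f in range(num_files)}
        caches.append(cache)
    return caches

def pairwise_xor_opportunities(caches, demands):
    """Simplified two-user coded-multicasting count: bits of user 0's requested
    file cached only at user 1 can be XOR-paired with bits of user 1's requested
    file cached only at user 0, so one transmission serves both users."""
    useful_to_0 = caches[1][demands[0]] - caches[0][demands[0]]
    useful_to_1 = caches[0][demands[1]] - caches[1][demands[1]]
    return min(len(useful_to_0), len(useful_to_1))

# Small finite-file-size example: 4 files of 1000 bits, caches of 2000 bits
# (a 0.5 fraction of each file), 2 users requesting different files.
caches = decentralized_placement(num_files=4, file_bits=1000,
                                 cache_bits=2000, num_users=2)
xor_bits = pairwise_xor_opportunities(caches, demands=[0, 1])
```

With an independent caching probability of 0.5, roughly a quarter of each requested file's bits are exclusively cached at the other user, but at finite file sizes the realized count fluctuates around that mean; this variability is what coordinated placement aims to control.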
