Abstract

This paper considers the multiaccess coded caching system formulated by Hachem et al., consisting of a central server containing $N$ files connected to $K$ cache-less users through an error-free shared link, and $K$ cache-nodes, each equipped with a cache memory of $M$ files. Each user has access to $L$ neighbouring cache-nodes in a cyclic wrap-around topology. The coded caching scheme proposed by Hachem et al. performs poorly when $L$ does not divide $K$: in that case the required number of transmissions (a.k.a. the load) can be up to four times the load expression for the case where $L$ divides $K$. Our main contribution is a novel transformation approach that extends shared-link caching schemes satisfying certain conditions to multiaccess caching systems. This yields many coded caching schemes with different subpacketizations for the multiaccess coded caching system. The resulting schemes have the maximum local caching gain (i.e., the contents cached at any $L$ neighbouring cache-nodes are distinct, so the number of packets each user can retrieve from its connected cache-nodes is maximal) and the same coded caching gain as the original schemes. Applying the transformation approach to the well-known shared-link coded caching scheme proposed by Maddah-Ali and Niesen, we obtain a new multiaccess coded caching scheme that achieves the same load as the scheme of Hachem et al., but for any system parameters.
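For concreteness, the load expression referred to above can be recalled as follows (a recalled form from this line of work, stated here under the assumptions that $\frac{KM}{N}$ is an integer and $LM \le N$, rather than quoted verbatim from the abstract):
$$ R \;=\; \frac{K\left(1-\frac{LM}{N}\right)}{1+\frac{KM}{N}}, $$
where the factor $1-\frac{LM}{N}$ reflects the local caching gain obtained from the $L$ accessed cache-nodes and $1+\frac{KM}{N}$ is the coded caching (multicast) gain.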
Under the constraint of the cache placement used in this new multiaccess coded caching scheme, our delivery strategy is approximately optimal when $K$ is sufficiently large. Finally, we also show that the transmission load of the proposed scheme can be further reduced by compressing the multicast messages.
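As a minimal illustration of the assumed access topology (a sketch for intuition, not code from the paper), the following Python snippet lists the $L$ consecutive cache-nodes, with cyclic wrap-around, that each user connects to:

# Sketch of the cyclic wrap-around access pattern assumed in the model:
# user k retrieves cached content from cache-nodes k, k+1, ..., k+L-1 (mod K).
def accessed_cache_nodes(k: int, K: int, L: int) -> list[int]:
    """Return the indices of the L cache-nodes accessible to user k (0-indexed)."""
    return [(k + i) % K for i in range(L)]

# Example: K = 5 users/cache-nodes, each user accesses L = 2 neighbouring nodes.
for k in range(5):
    print(k, accessed_cache_nodes(k, K=5, L=2))
# User 4 wraps around and accesses cache-nodes [4, 0].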
