Abstract

We consider a server storing a finite number of files, each of which can be partitioned, connected through a shared link to multiple users equipped with cache memory. We study the performance of the system using the average queuing delay at the server as the performance metric under different system models. In the uncoded system, each user caches an equal fraction of every file rather than whole files. Requests for a particular file are merged until service of that request begins. For the uncoded caching scheme, we compare system models having a single request queue and multiple request queues, for different percentages of the files cached. We also propose a novel system model with a coded caching scheme that segregates file requests by grouping them according to the popularity profile of the files. When the server is busy, pending file requests are queued in the respective group queues instead of traditional user request queues. In this scheme, each user caches a different fraction of each file based on the file's popularity, so that the most popular files receive the largest share of the cache. We compare the proposed system model with the multiple-user-request-queue model under coded multicasting, and draw conclusions about the average queuing delay at the server by comparing the uncoded and coded caching schemes.
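The popularity-based cache allocation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a Zipf popularity profile (a common assumption for file popularity, though the abstract does not specify one), and the function names are hypothetical.

```python
def zipf_popularity(n_files, s=1.0):
    """Illustrative Zipf popularity profile: p_i proportional to 1 / i^s,
    with file 1 the most popular."""
    weights = [1.0 / (i ** s) for i in range(1, n_files + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def allocate_cache(cache_size, n_files, s=1.0):
    """Split a user's cache budget across files in proportion to popularity,
    so the most popular files get the largest cache share."""
    pop = zipf_popularity(n_files, s)
    return [cache_size * p for p in pop]

# Example: a cache of size 10 units split across 4 files.
alloc = allocate_cache(cache_size=10.0, n_files=4)
```

Here the allocation is monotonically decreasing in file rank, matching the scheme's goal of dedicating a higher cache percentage to the most popular files; the uncoded scheme in the abstract would instead assign an equal fraction to every file.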

