Abstract

This paper considers optimal proactive caching when future demand predictions improve over time, as is expected in most prediction systems. In particular, our model captures the correlated demand pattern exhibited by end users as their current activity reveals progressively more information about their future demand. Previous work has observed that, in a network where service costs grow superlinearly with the traffic load and predictions are static, proactive caching can be harnessed to flatten the load over time and minimize the cost. With time-varying prediction quality, however, a tradeoff emerges between load flattening and accurate proactive service. In this work, we formulate and investigate the optimal proactive caching design under time-varying predictions. Our objective is to minimize the time-average expected service cost given a finite proactive service window. We establish a lower bound on the minimal cost achievable by any proactive caching policy, then develop a low-complexity caching policy that strikes a balance between load flattening and accurate caching. We prove that our proposed policy is asymptotically optimal as the proactive service window grows. In addition, we characterize other non-asymptotic cases where the proposed policy remains optimal. We validate our analytical results with numerical simulations and highlight relevant insights.
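The intuition behind load flattening can be illustrated with a minimal sketch (not the paper's algorithm): when the per-slot service cost is superlinear in the load, e.g. a hypothetical quadratic cost, spreading the same total demand evenly across a service window yields a strictly lower total cost than serving it in a burst. The cost function and the demand numbers below are illustrative assumptions, not taken from the paper.

```python
def total_cost(loads):
    """Total service cost over the window, with a superlinear
    (here quadratic) per-slot cost c(x) = x**2."""
    return sum(x ** 2 for x in loads)

# Reactive service: all 8 units of demand hit a single slot.
reactive = [8, 0, 0, 0]

# Proactive caching: the same 8 units are pre-served evenly
# across a 4-slot proactive service window.
flattened = [2, 2, 2, 2]

print(total_cost(reactive))   # 64
print(total_cost(flattened))  # 16
```

By Jensen's inequality, the flat profile minimizes any convex cost for a fixed total load; the tradeoff studied in the paper arises because serving demand early, before predictions sharpen, risks caching the wrong content.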
