In this paper, we investigate ‘optimistic’ online caching policies, distinguished by their use of predictions of future requests, obtained for example from machine learning models. Existing optimistic policies, grounded in the Follow-The-Regularized-Leader (FTRL) algorithm, incur a higher computational cost than classic policies such as Least Frequently Used (LFU) and Least Recently Used (LRU), because every cache state update requires solving a constrained optimization problem. To address this issue, we introduce and analyze ‘batched’ versions of two distinct FTRL-based optimistic policies, in which cache updates occur less frequently, amortizing the update cost over multiple requests: rather than updating the cache at each new request, the system accumulates a batch of requests before modifying the cache content. First, we present a batched version of the Optimistic Bipartite Caching (OBC) algorithm, which was originally designed to operate on single requests; then we introduce a new optimistic batched caching policy, the Per-Coordinate Optimistic Caching (PCOC) algorithm, derived from per-coordinate FTRL. We demonstrate that both online algorithms retain ‘vanishing regret’ in the batched setting, meaning that their time-averaged performance approaches that of the best static file allocation, regardless of the sequence of file requests. We then compare the two strategies with each other and against optimistic versions of LFU and LRU. Our experimental results indicate that the batched optimistic approach outperforms traditional caching policies on both stationary and real-world file request traces.
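To make the ‘vanishing regret’ claim concrete, one standard formulation (the notation below is our own, not necessarily the paper’s) measures, after $T$ batches, the gap between the hit utility of the best static allocation in hindsight and that of the online policy:

\[
R_T \;=\; \max_{x \in \mathcal{X}} \sum_{t=1}^{T} f_t(x) \;-\; \sum_{t=1}^{T} f_t(x_t),
\qquad
\text{vanishing regret:}\quad \lim_{T \to \infty} \frac{R_T}{T} = 0,
\]

where $\mathcal{X}$ is the set of feasible (possibly fractional) cache allocations, $x_t$ is the cache state in place when batch $t$ arrives, and $f_t$ is the utility (e.g., cache hits) accrued over batch $t$.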
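As a rough illustration of how batching amortizes the per-update cost, here is a minimal Python sketch of a generic batched, optimistic FTRL caching loop with a Euclidean regularizer. It is not the paper’s exact OBC or PCOC algorithm, and the `predictor` argument is a hypothetical stand-in for the machine-learning model that estimates the next batch’s request counts.

```python
import numpy as np

def project_capped_simplex(y, capacity, iters=60):
    """Euclidean projection onto {x in [0,1]^N : sum(x) = capacity}.

    Bisection on the shift tau such that sum(clip(y - tau, 0, 1)) == capacity
    (assumes 0 < capacity < len(y)).
    """
    lo, hi = y.min() - 1.0, y.max()
    for _ in range(iters):
        tau = 0.5 * (lo + hi)
        if np.clip(y - tau, 0.0, 1.0).sum() > capacity:
            lo = tau  # shift too small: cache over-full
        else:
            hi = tau
    return np.clip(y - 0.5 * (lo + hi), 0.0, 1.0)

def batched_optimistic_ftrl(requests, n_files, capacity, batch_size, eta, predictor):
    """Generic batched optimistic FTRL caching loop (illustrative sketch only)."""
    cum_grad = np.zeros(n_files)                  # cumulative observed request counts
    x = np.full(n_files, capacity / n_files)      # fractional cache state
    hits = 0.0
    for start in range(0, len(requests), batch_size):
        batch = np.asarray(requests[start:start + batch_size])
        g = np.bincount(batch, minlength=n_files).astype(float)
        hits += g @ x                             # (fractional) hits under current state
        cum_grad += g
        g_hat = predictor(g)                      # optimistic estimate of the NEXT batch
        # FTRL with an L2 regularizer: one projection per batch, not per request
        x = project_capped_simplex(eta * (cum_grad + g_hat), capacity)
    return x, hits

# Example: 10k requests over 100 files, cache of size 10, Zipf-like popularity,
# with a naive 'persistence' predictor (next batch looks like the last one).
rng = np.random.default_rng(0)
reqs = rng.zipf(1.3, size=10_000) % 100
x, hits = batched_optimistic_ftrl(reqs, 100, 10, batch_size=50,
                                  eta=0.05, predictor=lambda g: g.copy())
print(f"fractional hit ratio: {hits / len(reqs):.3f}")
```

The point of the sketch is the placement of the expensive step: the constrained projection runs once per batch of requests rather than once per request, which is the amortization the abstract describes.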