We introduce a framework and provably efficient schemes for ‘fresh’ caching at the (front-end) local cache of content that is subject to ‘dynamic’ updates at the (back-end) database. We start by formulating the hard-cache-constrained problem for this setting, which quickly becomes intractable due to the limited cache capacity. To bypass this challenge, we first propose a flexible time-based-eviction model and derive the average system cost function, which captures the cost of serving aging content in addition to the regular cache-miss cost. Next, we solve the cache-unconstrained case, which reveals how the refresh dynamics and popularity of content affect optimal caching. Then, we extend our approach to a soft-cache-constrained version, where we can guarantee that cache usage stays within the limit with arbitrarily high probability. The corresponding solution reveals the interesting insight that whether an item should be cached locally depends primarily on its popularity level and channel reliability, whereas how long a cached item should be held before eviction depends primarily on its refresh rate. Moreover, we investigate the cost versus cache-saving trade-off and prove that substantial cache savings can be obtained while also asymptotically achieving the minimum cost as the database size grows.
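To make the time-based-eviction idea concrete, the following is a minimal illustrative sketch, not the paper's algorithm: an item is cached only if its popularity exceeds a threshold, and, once cached, it is held for a time-to-live tied to its back-end refresh rate. The class name, the threshold, and the TTL rule (inverse of the refresh rate) are all assumptions made for illustration.

```python
import time


class FreshCache:
    """Illustrative TTL-style local cache: each cached item is held for a
    per-item time-to-live, after which it is treated as evicted (stale)."""

    def __init__(self, popularity_threshold=0.1):
        # Assumed rule: only sufficiently popular items are cached at all.
        self.popularity_threshold = popularity_threshold
        self.store = {}  # item_id -> (value, expiry_time)

    def should_cache(self, popularity):
        # Caching decision driven mainly by popularity (channel reliability,
        # also mentioned in the abstract, is omitted here for brevity).
        return popularity >= self.popularity_threshold

    def ttl_for(self, refresh_rate):
        # Holding time driven mainly by the refresh rate; here we assume a
        # TTL inversely proportional to the back-end refresh rate.
        return 1.0 / refresh_rate if refresh_rate > 0 else float("inf")

    def put(self, item_id, value, popularity, refresh_rate):
        if self.should_cache(popularity):
            expiry = time.time() + self.ttl_for(refresh_rate)
            self.store[item_id] = (value, expiry)

    def get(self, item_id):
        entry = self.store.get(item_id)
        if entry is None:
            return None                 # cache miss
        value, expiry = entry
        if time.time() >= expiry:
            del self.store[item_id]     # time-based eviction
            return None                 # stale content treated as a miss
        return value                    # fresh hit


# Usage example: a popular, slowly refreshed item is cached with a long TTL.
cache = FreshCache(popularity_threshold=0.1)
cache.put("item_A", value="payload", popularity=0.4, refresh_rate=0.5)
print(cache.get("item_A"))  # -> "payload" while the copy is still fresh
```

In this sketch, longer TTLs reduce misses for slowly refreshed items at the cost of serving older copies, which is the trade-off the average system cost function in the abstract is meant to balance.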