Abstract

In this paper we consider a new caching model that enables data sharing for network services in a cost-effective way. The proposed caching algorithms use monetary cost and access information to drive cache replacement decisions, rather than the capacity-oriented strategies of traditional approaches. In particular, given a stream of requests to a shared data item under a homogeneous cost model, we first propose a fast off-line algorithm based on dynamic programming, which generates an optimal schedule in $O(mn)$ time and space by using caching, migration, and replication to serve an $n$-length request sequence in an $m$-node network, substantially improving on previous results. Furthermore, we also study the online form of this problem and present a 3-competitive online algorithm that leverages an idea of anticipatory caching. The algorithm serves each online request in constant time and requires only $O(m)$ space, making it practical for real deployments. We evaluate our algorithms, together with several variants, through extensive simulation studies. Our results show that the optimal cost of the off-line algorithm varies parabolically as the ratio of caching cost to transfer cost increases, and that in most cases the online algorithm incurs less than twice the cost of its optimal off-line counterpart.
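
To make the cost model concrete, the sketch below shows a simplified $O(mn)$ dynamic program for the migration-only special case: a single copy of the shared item is kept somewhere in the $m$-node network, each request either hits the holding node or pays a transfer cost, and every time step incurs a caching cost. All function names, parameters, and the restriction to a single copy are assumptions for illustration; the paper's off-line algorithm additionally handles replication, which this sketch does not capture.

```python
def min_service_cost(requests, m, cache_cost, transfer_cost):
    """Simplified illustration (single copy, homogeneous costs).

    requests      : list of node ids (0..m-1) issuing each request, in order
    m             : number of nodes in the network
    cache_cost    : cost of holding the copy at a node for one time step
    transfer_cost : cost of migrating the copy or serving a request remotely
    """
    # dp[j] = minimum total cost so far with the copy currently held at node j.
    # We assume the copy may start at any node at no cost.
    dp = [0.0] * m
    for r in requests:
        # Cheapest previous state to migrate from; because the homogeneous
        # model charges the same transfer cost between any pair of nodes,
        # one global minimum suffices, keeping each step O(m).
        best_prev = min(dp)
        new_dp = [0.0] * m
        for j in range(m):
            # Either keep the copy where it was, or migrate it from the
            # cheapest previous node for one transfer cost.
            stay_or_move = min(dp[j], best_prev + transfer_cost)
            # Serving the request is free locally, otherwise a remote access.
            serve = 0.0 if j == r else transfer_cost
            new_dp[j] = stay_or_move + cache_cost + serve
        dp = new_dp
    return min(dp)


# Example: a 4-node network with requests alternating between nodes 0 and 3.
print(min_service_cost([0, 3, 0, 3, 0], m=4, cache_cost=1.0, transfer_cost=5.0))
```

Because each request is processed with a single pass over the $m$ nodes, the table update runs in $O(mn)$ total time, mirroring the complexity claimed for the full algorithm; the interesting trade-off arises when the per-step caching cost is large relative to the transfer cost, in which case migrating (or, in the full model, replicating) the item becomes worthwhile.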
