Abstract

Performance plays a vital role in distributed systems. To improve the performance of an I/O system, multilevel caches are in great demand over a single cache because of their efficiency. Many multilevel cache policies exist, yet performance issues remain: 1) these policies fail to optimally select a cache block for replacement, i.e. they fail to select a victim, and 2) they increase redundancy, which causes cache pollution at the lower level. These policies include LFU [1], LRU-K [2], PROMOTE [3], DEMOTE [4], and Multi Queue (MQ) [5]. In this paper we introduce a compressed cache management policy that addresses the drawbacks of the above policies by considering three factors together when replacing or updating a cache block in a multilevel cache hierarchy, and that additionally prevents cache pollution at the lower-level cache. The first factor is recency of an object in the cache, i.e. how recently the object was used; the second is frequency, i.e. how often promotion and demotion of the cache block take place in the cache; the third is object size: the object with the largest block size and the least recency and frequency is evicted first. This policy selects a cache block efficiently and thus gives a higher hit ratio than other existing multilevel cache policies.
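To illustrate the victim-selection idea described above, the following is a minimal sketch in Python. It assumes a simple combined score (recency times frequency divided by size) so that a block that is large, rarely used, and not recently used scores lowest and is evicted first; the abstract does not specify the actual weighting or score function used by the proposed policy, so the formula and all names here are illustrative only.

```python
import time


class CacheBlock:
    """Per-object metadata assumed to be tracked by the policy."""

    def __init__(self, key, size):
        self.key = key
        self.size = size                      # object size in bytes
        self.frequency = 0                    # promotions/demotions observed so far
        self.last_access = time.monotonic()   # used to derive recency

    def touch(self):
        """Record an access (promotion or demotion) of this block."""
        self.frequency += 1
        self.last_access = time.monotonic()


def select_victim(blocks, now=None):
    """Return the block to evict: largest size, lowest recency and frequency.

    The score below is a placeholder combination of the three factors;
    the paper's actual formula is not given in the abstract.
    """
    now = now if now is not None else time.monotonic()

    def score(block):
        recency = 1.0 / (1.0 + (now - block.last_access))  # more recent -> closer to 1
        # Low recency, low frequency, and large size all push the score down.
        return (recency * (1 + block.frequency)) / block.size

    return min(blocks, key=score)
```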
