Abstract

In this paper, we propose a new virtual cache architecture that reduces memory latency by combining the merits of direct-mapped and set-associative caches. The cache memory is divided into n banks, and the operating system assigns one bank to each process when it is created. Each process then runs on its assigned bank, which behaves like a direct-mapped cache. If a cache miss occurs in the active home bank, the data is fetched either from the other banks or from main memory, as in a set-associative cache. A victim for replacement is selected from the lines belonging to the process that is most remote from being scheduled. Trace-driven simulations confirm that the new scheme removes almost as many conflict misses as a set-associative cache, while its access time remains close to that of a direct-mapped cache.
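The following is a minimal C sketch of the lookup and victim-selection policy described above. The bank and line counts, the 64-byte line size, and the scheduler hook slots_until_scheduled() are all illustrative assumptions; the paper does not provide an implementation.

```c
#include <limits.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define NUM_BANKS      4    /* n banks; value chosen for illustration */
#define LINES_PER_BANK 256  /* lines per bank; illustrative only      */
#define LINE_BYTES     64   /* assumed cache-line size                */

/* One cache line: valid bit, tag, and the owning process's ID so the
 * replacement policy can rank victims by scheduling distance.        */
typedef struct {
    bool     valid;
    uint64_t tag;
    int      owner_pid;
} cache_line_t;

static cache_line_t cache[NUM_BANKS][LINES_PER_BANK];

/* Hypothetical scheduler hook: distance (in scheduling slots) until
 * `pid` runs again. Stubbed here; a real system would query the OS. */
static int slots_until_scheduled(int pid)
{
    return pid % NUM_BANKS;
}

/* Look up an address for the process whose home bank is `home`.
 * A hit in the home bank costs one direct-mapped access; on a
 * home-bank miss, the remaining banks are probed like the ways of a
 * set-associative cache.                                             */
bool lookup(uint64_t addr, int home, int *hit_bank)
{
    size_t   index = (addr / LINE_BYTES) % LINES_PER_BANK;
    uint64_t tag   = addr / LINE_BYTES / LINES_PER_BANK;

    if (cache[home][index].valid && cache[home][index].tag == tag) {
        *hit_bank = home;          /* fast path: direct-mapped hit */
        return true;
    }
    for (int b = 0; b < NUM_BANKS; b++) {
        if (b == home) continue;
        if (cache[b][index].valid && cache[b][index].tag == tag) {
            *hit_bank = b;         /* slower, associative-style hit */
            return true;
        }
    }
    return false;                  /* miss: fetch from main memory */
}

/* Pick a victim line at `index`: prefer an invalid line, otherwise
 * the line owned by the process farthest from being scheduled.       */
int choose_victim_bank(size_t index)
{
    int victim = 0, max_dist = -1;
    for (int b = 0; b < NUM_BANKS; b++) {
        int d = cache[b][index].valid
                  ? slots_until_scheduled(cache[b][index].owner_pid)
                  : INT_MAX;       /* invalid line: take it first */
        if (d > max_dist) { max_dist = d; victim = b; }
    }
    return victim;
}
```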
