Abstract

Cache memory reduces the overall time needed to fetch data from memory. Because cache behavior directly affects system performance, the caching process itself should be fast. Many cache optimization techniques exist to speed it up, such as reducing the miss rate, reducing the miss penalty, and reducing the time to hit in the cache. Recent advances have also enabled compressing data in the cache and exploiting recent data-use patterns. Most of these techniques focus on increasing effective cache capacity or improving replacement policies to raise the hit ratio; existing cache compression and optimization schemes address only capacity- and replacement-related issues. This paper instead deals with scheduling cache memory requests according to the compressed cache organization, so that cache search and indexing times are reduced considerably and requests are serviced faster. For capacity and replacement improvements, dictionary-sharing based caching is used. In this scheme, multiple requests are anticipated by a prefetcher and searched according to the cache organization, simplifying the indexing process. The benefit comes from both compressed storage and easier storage access.
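As a rough illustration of the dictionary-sharing idea (a minimal sketch, not the paper's implementation; the names `compress_line` and `decompress_line`, the 8-slot dictionary, and the 16-word line size are all illustrative assumptions), the Python fragment below encodes several cache lines as indices into one shared dictionary of distinct words, so that lines with overlapping content occupy less space:

```python
# Sketch of dictionary-sharing cache compression: several cache lines
# in a set share one small dictionary of distinct 4-byte words, and
# each compressed line is stored as a list of indices into it.
# All sizes and names here are assumptions for illustration only.

DICT_SLOTS = 8          # assumed: up to 8 distinct words per shared dictionary
WORDS_PER_LINE = 16     # assumed: a 64-byte line holds sixteen 4-byte words

def compress_line(line_words, dictionary):
    """Try to encode a line as indices into a shared dictionary.

    Returns a list of indices on success, or None if the line's words
    would overflow the dictionary (the line then stays uncompressed).
    """
    indices = []
    for w in line_words:
        if w in dictionary:
            indices.append(dictionary.index(w))
        elif len(dictionary) < DICT_SLOTS:
            dictionary.append(w)          # grow the shared dictionary
            indices.append(len(dictionary) - 1)
        else:
            return None                   # not compressible with this dictionary
    return indices

def decompress_line(indices, dictionary):
    """Recover the original words from their dictionary indices."""
    return [dictionary[i] for i in indices]

# Two lines with overlapping word values share one dictionary, so both
# fit in roughly the space of a single uncompressed line.
shared_dict = []
line_a = [0x0] * 12 + [0xDEADBEEF] * 4
line_b = [0x0] * 8 + [0xCAFEBABE] * 8
enc_a = compress_line(line_a, shared_dict)
enc_b = compress_line(line_b, shared_dict)
assert decompress_line(enc_a, shared_dict) == line_a
assert decompress_line(enc_b, shared_dict) == line_b
print(f"dictionary entries used: {len(shared_dict)} of {DICT_SLOTS}")
```

Under this organization, a request scheduler of the kind the paper describes could group prefetched requests by the set whose shared dictionary they map to, so that one dictionary lookup serves several pending requests and indexing work is not repeated per request.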
