Abstract

Caching strategies can improve the overall performance of a system by allowing the fast processor and the slow memory to operate at the same pace. One important factor in caching is the replacement policy. Advances in technology have produced a large number of techniques and algorithms for improving cache performance. In this paper, we analyse different cache optimization techniques as well as replacement algorithms. Furthermore, the paper presents a comprehensive statistical comparison of cache optimization techniques. To the best of our knowledge, there is no numerical measure that rates a specific cache optimization technique; we attempt to provide such a figure. Through statistical comparison we determine which technique is the most consistent among all. For this purpose we calculate the mean and the coefficient of variation (CV); the CV indicates which technique is more consistent. Comparative analysis of the different techniques shows that the victim cache is the most consistent technique of all.
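The consistency measure used in the abstract can be illustrated with a short sketch: the coefficient of variation is the standard deviation divided by the mean, so a lower CV means the technique's benefit varies less across workloads. The measurement values below are hypothetical, purely for illustration; they are not taken from the paper.

```python
import statistics

def coefficient_of_variation(samples):
    """CV = standard deviation / mean; a lower CV means more consistent results."""
    return statistics.stdev(samples) / statistics.mean(samples)

# Hypothetical per-benchmark speedups for two techniques (illustrative only)
victim_cache = [1.20, 1.22, 1.19, 1.21]       # similar benefit everywhere
way_prediction = [1.10, 1.45, 0.95, 1.30]     # benefit varies by workload

print(coefficient_of_variation(victim_cache))    # small CV: consistent
print(coefficient_of_variation(way_prediction))  # larger CV: less consistent
```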

Highlights

  • Cache is a high-speed memory that is not as costly as registers but is faster than main memory

  • The main purpose of cache memory is to reduce the speed gap between slow memory and a fast processor at a reduced cost [1]. It mostly holds the most recently accessed pieces of main memory. All information is stored in a storage medium such as main memory. Whenever the CPU/processor uses some data or piece of information, it is copied into faster storage such as the cache. When the processor tries to access a particular piece of information again, the system checks the cache first; if it is in the cache, the processor uses it from there, and if it is not, it must be brought from main memory and copied into the cache on the assumption that it will be needed again

  • Cache optimization techniques are classified on the basis of their application, and comparisons are made within their own domain to ensure homogeneity. Cache optimization is achieved by reducing hit time, reducing miss penalty, increasing cache bandwidth, and reducing miss rate
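Since the paper's comparison singles out the victim cache, a minimal sketch of that idea may help: a small, fully associative buffer holds blocks recently evicted from the main cache, so a conflict miss can be serviced from the buffer instead of main memory. The class below is an illustrative toy (direct-mapped main cache, tiny victim buffer), not the paper's evaluated implementation; all names and sizes are assumptions.

```python
from collections import OrderedDict

class VictimCache:
    """Toy direct-mapped cache with a small fully associative victim buffer."""

    def __init__(self, num_sets=4, victim_slots=2):
        self.num_sets = num_sets
        self.main = {}                 # set index -> cached block address
        self.victim = OrderedDict()    # evicted blocks, oldest first
        self.victim_slots = victim_slots

    def access(self, block):
        idx = block % self.num_sets
        if self.main.get(idx) == block:
            return "hit"
        if block in self.victim:
            # The victim buffer turns this conflict miss into a near-hit:
            # swap the block back into the main cache.
            self.victim.pop(block)
            if idx in self.main:
                self._insert_victim(self.main[idx])
            self.main[idx] = block
            return "victim-hit"
        # True miss: fetch from memory, move any evicted block to the buffer.
        if idx in self.main:
            self._insert_victim(self.main[idx])
        self.main[idx] = block
        return "miss"

    def _insert_victim(self, block):
        if len(self.victim) >= self.victim_slots:
            self.victim.popitem(last=False)  # drop the oldest victim
        self.victim[block] = True
```

With 4 sets, blocks 0 and 4 map to the same set; alternating between them would thrash a plain direct-mapped cache, but here the second round is served by the victim buffer.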


Summary

INTRODUCTION

Cache is a high-speed memory that is not as costly as registers but is faster than main memory. The main purpose of cache memory is to reduce the speed gap between slow memory and a fast processor at a reduced cost [1]. It mostly holds the most recently accessed pieces of main memory. All information is stored in a storage medium such as main memory. Whenever the CPU/processor uses some data or piece of information, it is copied into faster storage such as the cache. Whenever the desired chunk of information, whether data or instruction, is present in the cache, the situation is called a cache hit, and the time taken to determine whether it is present in the cache is called the hit latency [4]. If the required data is not found in the cache, it is brought into the cache from main memory; this situation is called a cache miss [5].
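The hit/miss flow described above can be sketched in a few lines. This is an illustrative model only, using LRU as one possible replacement policy (the paper surveys several); `main_memory` here is just a dictionary standing in for backing storage.

```python
from collections import OrderedDict

class LRUCache:
    """Sketch of the lookup flow: check the cache first (hit), otherwise
    fetch from main memory and cache the result (miss), evicting the
    least recently used entry when the cache is full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # address -> value, LRU order

    def read(self, address, main_memory):
        if address in self.store:                # cache hit
            self.store.move_to_end(address)      # mark most recently used
            return self.store[address], "hit"
        value = main_memory[address]             # cache miss: go to memory
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)       # evict least recently used
        self.store[address] = value
        return value, "miss"
```

A second read of the same address is a hit until enough other addresses push it out of the LRU order.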

REPLACEMENT ALGORITHMS
OPTIMIZATION TECHNIQUES
PERFORMANCE EVALUATION
DISCUSSIONS
Findings
CONCLUSION
