Abstract
This paper addresses the problem of optimal memory allocation for applications that cannot tolerate high access latency yet cannot afford to dedicate a large share of process memory to a cache. It shows that, by using estimation and filtering algorithms, memory usage can be controlled while guaranteeing adequate performance. This adaptive management approach dynamically and continuously balances the memory and performance components of a cache in the face of evolving access patterns. The policy is to monitor the hit ratio and the memory in use, and to apply this knowledge to decide among (a) granting more memory, (b) replacing objects in the cache, and (c) relinquishing memory. The approach has constant overhead independent of cache size and is less vulnerable to changing access patterns. The paper compares the traditional cache against the adaptive cache using pseudo-Monte Carlo simulation and concludes that the adaptive cache outperforms the traditional one and eliminates the need for access-pattern-based tuning.
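The three-way decision policy described in the abstract can be illustrated with a minimal sketch. This is not the paper's algorithm: the paper uses estimation and filtering to track the hit ratio, whereas this sketch substitutes a raw running hit ratio, an LRU eviction order, and illustrative thresholds (`target_hit_ratio`, the grow/shrink conditions) that are assumptions, not values from the paper.

```python
from collections import OrderedDict

class AdaptiveCache:
    """Sketch of an adaptive cache: monitor hit ratio and memory in use,
    then (a) grow, (b) evict, or (c) shrink. Thresholds are illustrative."""

    def __init__(self, capacity=2, max_capacity=64, min_capacity=1,
                 target_hit_ratio=0.8):
        self.capacity = capacity            # current memory grant (entries)
        self.max_capacity = max_capacity    # hard memory budget
        self.min_capacity = min_capacity    # never shrink below this
        self.target = target_hit_ratio      # desired hit ratio
        self.hits = 0
        self.accesses = 0
        self.store = OrderedDict()          # LRU order: oldest entry first

    def hit_ratio(self):
        # The paper estimates/filters this signal; a raw ratio stands in here.
        return self.hits / self.accesses if self.accesses else 1.0

    def get(self, key, load):
        self.accesses += 1
        if key in self.store:
            self.hits += 1
            self.store.move_to_end(key)     # mark as most recently used
            return self.store[key]
        value = load(key)                   # miss: fetch from backing store
        self.store[key] = value
        self._adapt()
        return value

    def _adapt(self):
        """Decide among (a) granting more memory, (b) replacing objects,
        and (c) relinquishing memory, based on the observed hit ratio."""
        hr = self.hit_ratio()
        if len(self.store) > self.capacity:
            if hr < self.target and self.capacity < self.max_capacity:
                self.capacity += 1          # (a) grant more memory
            else:
                self.store.popitem(last=False)  # (b) evict LRU object
        elif hr > self.target and self.capacity > self.min_capacity:
            self.capacity -= 1              # (c) relinquish surplus memory
            while len(self.store) > self.capacity:
                self.store.popitem(last=False)
```

Under this sketch, a cold cache with a low hit ratio grows toward `max_capacity`, while a cache whose hit ratio stays above target gives memory back, mirroring the constant-overhead balancing the abstract describes.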