Abstract

Considerable experimental evidence has accumulated showing that the performance of programs in virtual memory environments can be significantly improved by restructuring the programs, i.e., by modifying their block-to-page or block-to-segment mapping. This evidence also shows that the so-called strategy-oriented algorithms, which base their decisions on knowledge of the memory management strategy under which the program will run, are more efficient than algorithms that do not take this strategy into account. We present here some theoretical arguments to explain why strategy-oriented algorithms perform better than other program restructuring algorithms, and we determine the conditions under which these algorithms are optimum. In particular, we prove that the algorithms oriented towards the working set or sampled working set policy are optimum when applied to programs having no more than two blocks per page, and that, when this restriction is removed, they minimize both upper and lower bounds of the performance index they consider as the figure of merit to be reduced. We also prove that the restructuring algorithms aimed at reducing the page fault rate of programs to be run under such policies as LRU, Global LRU and PFF (the Page Fault Frequency policy) minimize an upper bound of the page fault rate, and we extend some of our results to some non-strategy-oriented algorithms. Throughout the paper, the only assumption about program behavior is that it can be accurately modeled as a stationary discrete-state stochastic process.
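To make the working-set-oriented restructuring idea concrete, the following is a minimal illustrative sketch (not the paper's algorithm; the trace, window size, and function names are hypothetical). It builds a block affinity matrix by counting how often each pair of blocks co-occurs in the same working-set window of a block reference trace, then greedily packs the highest-affinity pairs into pages, two blocks per page, matching the two-blocks-per-page case for which the abstract states optimality.

```python
# Illustrative sketch of a strategy-oriented (working-set) restructuring step.
# The trace, window size, and helper names are invented for this example.

from collections import defaultdict

def affinity_matrix(trace, window):
    """Count, for each pair of blocks, how often both appear in the same
    working-set window (the last `window` references) of the trace."""
    aff = defaultdict(int)
    for t in range(len(trace)):
        ws = set(trace[max(0, t - window + 1): t + 1])  # working set at time t
        for a in ws:
            for b in ws:
                if a < b:
                    aff[(a, b)] += 1
    return aff

def pack_pairs(blocks, aff):
    """Greedily map blocks to pages, two blocks per page, pairing the
    highest-affinity blocks first; leftover blocks get pages of their own."""
    free = set(blocks)
    pages = []
    for (a, b), _count in sorted(aff.items(), key=lambda kv: -kv[1]):
        if a in free and b in free:
            pages.append((a, b))
            free -= {a, b}
    pages.extend((x,) for x in sorted(free))
    return pages

trace = ["A", "B", "A", "C", "D", "C", "A", "B", "D", "C"]
pages = pack_pairs(sorted(set(trace)), affinity_matrix(trace, window=3))
# Blocks that tend to be resident together end up on the same page,
# which reduces the working-set size of the restructured program.
```

A real strategy-oriented restructurer would derive the affinity counts from the same policy parameters (e.g., the working-set window size) that the target memory management policy will use at run time; that coupling is exactly what distinguishes these algorithms from non-strategy-oriented ones.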
