Abstract

The memory gap has become an essential factor limiting the peak performance of high-speed CPU-based systems. The traditional method of bridging this gap is to enlarge cache capacity, an approach grounded in the principle of program locality. However, the order of the instructions stored in the I-Cache before they are sent to the Data Processing Unit (DPU) is a kind of useful information that has never been exploited. We therefore propose an architecture containing an Instruction Processing Unit (IPU) in parallel with the ordinary DPU. The IPU can prefetch, analyze, and preprocess a large number of instructions that would otherwise lie untouched in the I-Cache, making it more efficient than a conventional prefetch buffer, which can only hold a few instructions for previewing. With the IPU, load instructions can be preprocessed while the DPU is simultaneously executing; we term this mechanism the Lookahead Cache. The paper describes the principle of the Lookahead Cache, presents the idea of dynamic program locality, and defines quantitative parameters for its evaluation. Tools for simulating the Lookahead Cache were developed. Simulation results show that it improves program locality, and hence the cache hit ratio during program execution, without further enlarging the on-chip cache, which already occupies a large portion of the chip area.
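Since the full text is not available here, the mechanism the abstract describes can only be illustrated schematically. The following C sketch is our own toy model, not the paper's actual design or simulator: an IPU scans a fixed lookahead window ahead of the DPU, finds load instructions, and prefetches their targets into a small data cache before the DPU executes them. The instruction format, cache geometry, and lookahead depth are all illustrative assumptions.

/* Toy model of the Lookahead Cache idea: an IPU runs ahead of the DPU
 * in the instruction stream and prefetches the targets of upcoming load
 * instructions. All parameters below are assumptions for illustration. */
#include <stdio.h>
#include <stdbool.h>

#define NUM_LINES   16          /* direct-mapped D-cache lines (assumed) */
#define LINE_SHIFT  4           /* 16-byte cache lines (assumed)         */
#define LOOKAHEAD   8           /* how far the IPU scans ahead (assumed) */

typedef struct { bool is_load; unsigned addr; } Insn;

typedef struct { bool valid[NUM_LINES]; unsigned tag[NUM_LINES]; } DCache;

/* Access the D-cache; returns true on a hit, fills the line on a miss. */
static bool dcache_access(DCache *c, unsigned addr) {
    unsigned line = addr >> LINE_SHIFT;
    unsigned idx  = line % NUM_LINES;
    if (c->valid[idx] && c->tag[idx] == line) return true;
    c->valid[idx] = true;
    c->tag[idx]   = line;
    return false;
}

/* Run the DPU over the stream; if ipu_on, the IPU first preprocesses the
 * next LOOKAHEAD instructions and prefetches any load targets it finds. */
static void run(const Insn *prog, int n, bool ipu_on) {
    DCache dc = {0};
    int hits = 0, loads = 0;
    for (int pc = 0; pc < n; pc++) {
        if (ipu_on)
            for (int la = pc + 1; la <= pc + LOOKAHEAD && la < n; la++)
                if (prog[la].is_load)
                    (void)dcache_access(&dc, prog[la].addr); /* prefetch */
        if (prog[pc].is_load) {
            loads++;
            if (dcache_access(&dc, prog[pc].addr)) hits++;
        }
    }
    printf("IPU %s: %d/%d load hits\n", ipu_on ? "on " : "off", hits, loads);
}

int main(void) {
    /* Toy instruction stream: strided loads with no spatial reuse, so
     * every load is a cold miss unless the IPU has prefetched its line. */
    Insn prog[64];
    for (int i = 0; i < 64; i++) {
        prog[i].is_load = (i % 2 == 0);
        prog[i].addr    = (unsigned)(i * 32); /* one new line per load */
    }
    run(prog, 64, false);
    run(prog, 64, true);
    return 0;
}

In this stream the DPU alone misses on every load, while with the IPU enabled only the very first load misses, which conveys the abstract's claim that hit ratio can improve without enlarging the cache itself. A real implementation would of course contend with branches, prefetch timing, and cache pollution, which this sketch ignores.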
