Abstract

We aim to reduce the contention that multiple aggressive prefetchers cause on shared resources (e.g., the LLC and memory bandwidth) with a multi-agent reinforcement learning scheme. The agent decides which prefetchers to use and how aggressive each should be at any point during execution. To do so, we employ a highly scalable action-branching agent that consists of a shared network module followed by several network branches. The shared module tracks the overall state of the processor, while each branch focuses on one specific prefetcher. We train the network on 20 randomly mixed benchmarks and evaluate it on 100 unseen mixes. Our experimental results show that the proposed method reduces memory bandwidth consumption by 19% while delivering performance comparable to a very competitive baseline.
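The abstract does not include code; as a rough illustration of the action-branching idea it describes (a shared trunk plus one head per prefetcher), the PyTorch sketch below shows one plausible shape for such a network. All names, layer sizes, and the state/action dimensions are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class BranchingPrefetcherAgent(nn.Module):
    """Sketch of an action-branching Q-network (assumed architecture):
    a shared module encodes the overall processor state, and one branch
    per prefetcher scores its aggressiveness levels (level 0 = off)."""

    def __init__(self, state_dim: int, num_prefetchers: int,
                 num_levels: int, hidden_dim: int = 128):
        super().__init__()
        # Shared module: tracks the overall state of the processor.
        self.shared = nn.Sequential(
            nn.Linear(state_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # One branch per prefetcher: Q-values over aggressiveness levels.
        self.branches = nn.ModuleList(
            [nn.Linear(hidden_dim, num_levels) for _ in range(num_prefetchers)]
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        z = self.shared(state)
        # Per-branch Q-values, shape (batch, num_prefetchers, num_levels).
        return torch.stack([branch(z) for branch in self.branches], dim=1)


# Greedy action selection: one aggressiveness level per prefetcher.
# state_dim=32, 4 prefetchers, 5 levels are hypothetical choices.
agent = BranchingPrefetcherAgent(state_dim=32, num_prefetchers=4, num_levels=5)
state = torch.randn(1, 32)
levels = agent(state).argmax(dim=-1)  # e.g. tensor([[2, 0, 4, 1]])
```

Because every branch shares the same trunk, the parameter count grows linearly in the number of prefetchers rather than exponentially in the joint action space, which is what makes this kind of design scalable.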
