The DRAM memory controller plays a critical role in maximizing the performance of High Bandwidth Memory (HBM) by efficiently managing data transfers between the CPU and the memory modules, making such memories well suited to low-power, data-intensive applications. However, the complexity of DRAM command scheduling, combined with the overhead of tag management in the cache, makes the design of in-package cache controllers significantly challenging. Traditional memory controllers rely on static, inflexible access-scheduling policies designed for general workloads, which leads to suboptimal performance in dynamic environments. Conversely, advanced controllers that employ reinforcement learning adapt to workload fluctuations but introduce hardware complexity and incur long training latencies. We therefore propose a machine learning approach to designing low-power, efficient DRAM cache controllers: a model that dynamically adjusts to workload changes with optimized hardware efficiency and reduced training time, and that is leveraged at runtime to produce optimal cache command schedules. The model evaluates several conditions for each request queue and chooses the best response among the candidate commands. Simulation results on a set of twelve data-intensive applications show that our design outperforms previous scheduling algorithms, improving performance by up to 40% in some cases, with average gains of 15% in performance and 10% in power consumption.
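The abstract does not specify the model, its features, or the command vocabulary; the following is a minimal Python sketch, under assumed per-queue conditions (queue occupancy, row-buffer hit, request age, read/write) and assumed candidate commands, of how a trained model might score options and pick a cache command at runtime.

```python
# Illustrative sketch only: feature set, commands, and weights are hypothetical,
# not the paper's actual model or training procedure.
from dataclasses import dataclass
from typing import List

COMMANDS = ["ACT", "PRE", "RD", "WR"]  # assumed DRAM cache command options

@dataclass
class QueueState:
    occupancy: float   # fraction of request-queue entries in use
    row_hit: bool      # oldest request targets the currently open row
    oldest_age: int    # cycles the oldest request has waited
    is_write: bool     # oldest request is a write

def features(q: QueueState) -> List[float]:
    """Encode per-queue conditions as a numeric feature vector."""
    return [q.occupancy, float(q.row_hit), q.oldest_age / 100.0, float(q.is_write)]

class TrainedScheduler:
    """Stand-in for a model trained offline; here, fixed linear weights
    per command score the encoded queue state."""
    def __init__(self, weights):
        self.weights = weights  # {command: [w0, w1, w2, w3]}

    def schedule(self, q: QueueState) -> str:
        x = features(q)
        scores = {cmd: sum(w * f for w, f in zip(ws, x))
                  for cmd, ws in self.weights.items()}
        return max(scores, key=scores.get)  # best response among the options

# Example weights: row-buffer hits favour issuing the column command directly.
weights = {
    "ACT": [0.2, -1.0, 0.5, 0.0],
    "PRE": [0.1, -0.5, 0.3, 0.0],
    "RD":  [0.3,  1.0, 0.4, -0.5],
    "WR":  [0.3,  1.0, 0.4,  0.5],
}
sched = TrainedScheduler(weights)
print(sched.schedule(QueueState(occupancy=0.6, row_hit=True, oldest_age=40, is_write=False)))
```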