Abstract

As processor performance approaches the level of executing one instruction per nanosecond, long memory latencies have become a critical problem, because a single memory access can stall the execution of hundreds of instructions. A recently announced channeled DRAM addresses this problem by integrating a set of small low-latency buffers, called channels, in front of the DRAM core in order to reduce the effective memory latency. Because an access that hits in a channel is intrinsically faster than an access to the bare DRAM core, the key considerations in this architecture are (1) how to achieve high channel hit rates and (2) how to reduce the channel-miss latencies. Since channeled DRAMs rely on an external memory controller for all channel management, the first issue is determined largely by the design of the memory controller. In this paper, we propose two novel techniques for reducing the channel-miss latencies: direct background operation and delayed channel replacement. We examined these techniques in a future 256-Mb DRAM with a 200-MHz double-data-rate (DDR) synchronous interface. We present both SPICE simulation results, which show the reduction in channel-miss latency, and system-level simulation results, which reveal the system-level performance improvement.
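As a first-order illustration (not part of the paper), the interplay between the two considerations can be viewed as a simple weighted-average latency model: raising the channel hit rate shifts weight away from the slow path, while the proposed techniques attack the channel-miss term itself. The sketch below is a minimal C example of that model; the hit rate and latency figures are hypothetical placeholders, not values from the paper.

    #include <stdio.h>

    /* Illustrative hit/miss latency model (assumed, not from the paper):
     * the effective latency is a weighted average of the channel-hit and
     * channel-miss latencies.  All figures below are hypothetical. */
    int main(void)
    {
        double hit_rate  = 0.80;  /* fraction of accesses that hit a channel (assumed) */
        double t_hit_ns  = 20.0;  /* channel-hit latency in ns (assumed)               */
        double t_miss_ns = 60.0;  /* channel-miss latency in ns (assumed)              */

        double t_eff_ns = hit_rate * t_hit_ns + (1.0 - hit_rate) * t_miss_ns;
        printf("effective memory latency = %.1f ns\n", t_eff_ns);
        return 0;
    }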
