Abstract

AI and big data applications with large memory footprints are proliferating rapidly. An application's memory footprint may exceed main memory capacity, and accordingly the need for high-speed processing of large data sets is growing. This paper proposes a method to utilize a DRAM-based Network of DRAM (NoD) as a swap space. Swapping conventionally transfers data in 4 KB page units, and disks use 512 or 4096 bytes as their minimum access unit. However, by exploiting DRAM's byte-addressability and a latency-hiding technique, only the required data can be read from the swap space in 64-byte units. To verify the NoD architecture, a full-system experimental environment is built using the gem5 and DRAMsim3 simulators. For applications with frequent swap operations (XSBench, HPCG, and lbm), the proposed technique achieves an average performance improvement of 12.3%.
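As a rough illustration of the access-granularity difference described above, the sketch below (not taken from the paper; names such as nod_read and REMOTE_POOL are hypothetical stand-ins) contrasts a disk-style 4 KB page swap-in with a 64-byte, cache-line-granularity read from a remote DRAM pool.

```c
/*
 * Conceptual sketch only: contrasts page-granularity swap-in with
 * 64-byte reads enabled by byte-addressable remote DRAM.
 * nod_read() and REMOTE_POOL are hypothetical, not the paper's API.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096   /* conventional swap/disk transfer unit */
#define LINE_SIZE 64     /* byte-addressable DRAM access unit    */

/* Stand-in buffer representing the remote DRAM pool. */
static uint8_t REMOTE_POOL[1 << 20];

/* Hypothetical NoD read: copy `len` bytes starting at `remote_off`. */
static void nod_read(void *dst, size_t remote_off, size_t len)
{
    memcpy(dst, REMOTE_POOL + remote_off, len);
}

int main(void)
{
    uint8_t page_buf[PAGE_SIZE];
    uint8_t line_buf[LINE_SIZE];
    size_t fault_off = 3 * PAGE_SIZE + 128;  /* faulting offset */

    /* Disk-style swap-in: fetch the entire 4 KB page containing the fault. */
    nod_read(page_buf, fault_off & ~(size_t)(PAGE_SIZE - 1), PAGE_SIZE);

    /* NoD-style access: fetch only the 64-byte line actually needed. */
    nod_read(line_buf, fault_off & ~(size_t)(LINE_SIZE - 1), LINE_SIZE);

    printf("page transfer: %d bytes, line transfer: %d bytes\n",
           PAGE_SIZE, LINE_SIZE);
    return 0;
}
```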
