Abstract

As performance and energy-efficiency gains from technology scaling slow down, new technologies are being researched in the hope of disruptive improvements. Domain wall memory (DWM) is an emerging non-volatile memory technology that promises extreme data density, fast access times and low power consumption. However, DWM access time depends on the distance of the accessed location from an access port, requiring expensive shift operations that incur performance and energy overheads. In this article, we implement our previously proposed shift-reducing instruction memory placement (SHRIMP) on a RISC-V core in RTL, provide the first thorough evaluation of the control logic required for DWM and SHRIMP, and evaluate the effects on system energy and energy efficiency. On the RISC-V processor system, SHRIMP reduces the number of shifts by 36% on average compared to a linear placement across the CHStone and CoreMark benchmark suites. The reduced shifting leads to an average 14% reduction in cycle counts compared to the linear placement. Compared to an SRAM-based system, DWM with SHRIMP increases memory usage by 26% but reduces memory energy by 73% and relative energy-delay product by 42%. We estimate overall energy reductions of 14%, 15% and 19% in three example embedded systems.
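To make the shift overhead concrete, the following is a minimal illustrative model of DWM access cost, not the paper's actual SHRIMP algorithm or hardware. It assumes (hypothetically) a single access port per nanowire and that each access shifts the wire until the requested word aligns with the port, counting one shift per domain moved:

```python
# Illustrative DWM shift-cost model (assumption: one access port per
# nanowire, port initially aligned with word position 0, one shift per
# domain moved). This is a sketch, not the paper's implementation.

def shift_count(accesses, port_start=0):
    """Total shifts needed to serve `accesses` (word positions) in order."""
    pos = port_start                      # word currently under the port
    shifts = 0
    for target in accesses:
        shifts += abs(target - pos)       # shift wire until target aligns
        pos = target
    return shifts

# Sequential fetch of 8 linearly placed words: one shift per fetch.
print(shift_count(range(8)))              # → 7

# A backward branch forces a long return shift, inflating the cost:
print(shift_count([0, 1, 2, 3, 0, 1, 2, 3]))  # → 9
```

A placement strategy such as SHRIMP aims to lay out instructions so that the access sequence seen by the port stays close to sequential, keeping the per-access shift distance small.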

Highlights

  • While application complexity and requirements for processing performance keep increasing, it is becoming increasingly difficult to respond to those requirements [1]

  • To evaluate the effect of Domain wall memory (DWM) without shift-reducing instruction memory placement (SHRIMP), the baseline SRAM is replaced with a 64 KiB DWM in Fig. 5b, and linear placement is used in this setup

  • In this paper, we extend the evaluation of our previously proposed SHRIMP method, the first instruction placement strategy designed for DWM technology


Introduction

While application complexity and requirements for processing performance keep increasing, it is becoming increasingly difficult to meet those requirements [1]. Technology advances have long relied on scaling down technology nodes to improve performance, silicon area utilization and energy consumption, but this is becoming more difficult due to phenomena such as electron tunneling. To enable improvements in future processor systems, researchers are investigating emerging technologies, which use fundamentally different materials or device physics compared to the “traditionally” used ones. Meanwhile, although processing performance has increased, memory systems have advanced at a slower rate, leaving systems limited by memory latency and bandwidth. As the amounts of data required to be processed grow,
