Abstract

Applications running on Network-on-Chip (NoC) based multicore systems demand increased on-chip network bandwidth to support intensive communication among the cores and caches. Due to strict area and power budgets, the bandwidth offered by the NoC is very limited. When data-intensive and communication-centric applications encounter cache misses, the underlying network bears a considerable burden in transferring blocks from multiple levels of the cache hierarchy to the requesting core as packets. This increases packet transmission latency, thereby degrading system performance. Moreover, since the NoC is the largest consumer of power after the cores, an increase in packet traffic raises the dynamic power consumption of the NoC. This article proposes FlitZip, which addresses the problem by compressing network packets to reduce on-chip traffic. A compressed packet therefore requires less bandwidth during transfer, reducing the network's power consumption. Experimental analysis shows that FlitZip achieves a better compression ratio of 52 percent and reduces packet latency and bandwidth utilization by 19.28 and 27 percent, respectively. It also reduces the area and power consumption of the de/compression units by 53.33 and 62.3 percent, respectively, compared to the state-of-the-art packet compression technique, NoΔ.
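For intuition, the sketch below (in C++) illustrates the general idea the abstract relies on: compressing a 64-byte cache block before it is split into flits reduces the number of flits, and hence the bandwidth, a packet consumes. It uses a generic base-plus-delta encoding, not FlitZip's actual scheme; all names (pack_block, flits_needed) and parameters (128-bit flit width, 8-bit deltas) are assumptions for illustration only.

```cpp
// Minimal sketch (NOT the actual FlitZip algorithm): generic base-plus-delta
// packing of a 64-byte cache block, to show how compression cuts flit count.
#include <cstdint>
#include <iostream>

constexpr int kFlitBits   = 128;   // assumed flit width
constexpr int kBlockWords = 16;    // 64-byte block as 16 x 32-bit words

// Try to encode each word as a small signed delta from the first word (the
// base). Returns true and the compressed payload size in bits on success;
// returns false if any delta does not fit in 8 bits (block sent uncompressed).
bool pack_block(const uint32_t block[kBlockWords], int& compressed_bits) {
    uint32_t base = block[0];
    for (int i = 1; i < kBlockWords; ++i) {
        int64_t delta = static_cast<int64_t>(block[i]) - static_cast<int64_t>(base);
        if (delta < -128 || delta > 127) return false;   // delta too large
    }
    compressed_bits = 32 + (kBlockWords - 1) * 8;         // 32-bit base + 15 x 8-bit deltas
    return true;
}

int flits_needed(int payload_bits) {
    return (payload_bits + kFlitBits - 1) / kFlitBits;     // ceiling division
}

int main() {
    uint32_t block[kBlockWords];
    for (int i = 0; i < kBlockWords; ++i)
        block[i] = 0x1000 + i * 4;                         // pointer-like values with small deltas

    int uncompressed_bits = kBlockWords * 32;
    int compressed_bits   = uncompressed_bits;
    pack_block(block, compressed_bits);

    std::cout << "uncompressed: " << flits_needed(uncompressed_bits) << " flits\n";
    std::cout << "compressed:   " << flits_needed(compressed_bits)   << " flits\n";
    return 0;
}
```

With the pointer-like values in this example, the block fits in 2 flits instead of 4, which is the effect the abstract attributes to packet compression: fewer flits per packet means lower bandwidth utilization, latency, and dynamic power.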
