Abstract

Efficient utilization of constrained memory resources is of paramount importance in CPU-field-programmable gate array (FPGA) heterogeneous multiprocessor system-on-chip (HMPSoC) based system design for memory-intensive applications. State-of-the-art high-level synthesis (HLS) tools rely on system programmers to manually determine data placement within the complex memory hierarchy. Different data placement policies can yield very different system performance, and finding an optimal placement policy is a nontrivial problem. For instance, we show the counterintuitive result that the traditional frequency- and locality-based data placement strategy designed for CPU architectures leads to suboptimal system performance on CPU-FPGA HMPSoCs. In this work, we first propose an automatic data placement framework for FPGA kernels that determines whether each array object should be accessed via on-chip BRAM, the shared CPU L2 cache, or DDR memory to achieve optimal performance. Moreover, we find that when a CPU kernel and an FPGA kernel execute in parallel, memory contention may degrade performance, so the optimal data placement policy designed for the FPGA kernel alone does not achieve the best overall system performance. We therefore propose cache partitioning to alleviate the impact of memory contention, and we extend the FPGA-only framework with a cross-layer memory contention analysis that automatically generates an optimal data placement policy and cache partitioning configuration for the concurrently executing kernels. The proposed data placement framework can be seamlessly integrated with the commercial Vivado HLS tool. Experimental results on the Zedboard platform show an average $1.5\times$ speedup for FPGA kernels compared with a greedy allocation strategy. When FPGA kernels and CPU kernels execute in parallel, the FPGA kernel and the CPU kernel achieve average speedups of $1.62\times$ and $1.10\times$, respectively.
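To make the three placement options concrete, the following is a minimal sketch of how a per-array decision (on-chip BRAM, shared L2 cache via the Zynq ACP port, or DDR via an HP port) might surface as Vivado HLS directives on a Zynq-7000 device such as the Zedboard. The kernel, bundle names, and array assignments are illustrative assumptions, not the framework's actual generated output.

```c
/*
 * Hedged sketch (not the paper's generated code): per-array placement
 * expressed as Vivado HLS interface directives. Whether an AXI master
 * reaches the shared L2 cache (ACP port) or DDR (HP port) is fixed by
 * how the bundles are wired in the Vivado block design; the bundle
 * names here (acp_bus, hp_bus) are illustrative.
 */
#include <string.h>

#define N 1024

void vadd(const int *a, const int *b, int *out) {
    /* a: cache-coherent access through the CPU's shared L2 (ACP). */
    #pragma HLS INTERFACE m_axi port=a offset=slave bundle=acp_bus
    /* b, out: direct DDR access through a high-performance (HP) port. */
    #pragma HLS INTERFACE m_axi port=b offset=slave bundle=hp_bus
    #pragma HLS INTERFACE m_axi port=out offset=slave bundle=hp_bus
    #pragma HLS INTERFACE s_axilite port=return

    /* Small, heavily reused array: burst it into on-chip BRAM once. */
    int buf[N];
    #pragma HLS RESOURCE variable=buf core=RAM_2P_BRAM
    memcpy(buf, b, N * sizeof(int));

    for (int i = 0; i < N; i++) {
        #pragma HLS PIPELINE II=1
        out[i] = a[i] + buf[i];
    }
}
```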
