Deep Learning is ubiquitous today and is increasingly moving from the cloud to the edge of networked infrastructures, where it enables embedded applications to perform complex inference tasks close to the data sources, reducing long-distance data movement and the reliance on powerful cloud infrastructure. Edge-class multi-processor system-on-chip (MPSoC) devices featuring an on-chip FPGA fabric offer key advantages for Deep Learning inference, especially in complex applications where multiple models may run concurrently on the same platform. In this work, we propose an approach and a practical framework for the systematic characterization of multithreaded Deep Learning inference on edge FPGA MPSoCs. We instantiate the framework on a real-world MPSoC platform, targeting Xilinx Vitis-AI as a representative commercial Deep Learning acceleration toolkit for edge environments. We design a comprehensive experimental campaign and apply it to the platform for several convolutional neural networks, each trained on three different datasets. We show that our approach supports both hardware- and software-level analysis of a target system. Among other findings, the analysis revealed suboptimal behavior in the underlying toolkit runtime, concerning the utilization of the accelerator cores, and uneven software latency in the support library that depends on the shapes of the input tensors.
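To make the measurement setting concrete, the sketch below shows what multithreaded inference through the Vitis-AI runtime (VART) typically looks like in Python. It is illustrative only, not the paper's framework: the model path, thread count, iteration count, and int8 buffer types are hypothetical placeholders for a generic quantized model.

```python
# Minimal sketch of multithreaded Vitis-AI (VART) inference.
# Illustrative only: "model.xmodel", the thread count, and the
# iteration count are hypothetical, not taken from the paper.
import threading
import numpy as np
import vart
import xir

def get_dpu_subgraph(graph):
    # The compiled xmodel contains subgraphs; pick the first one
    # mapped to the DPU accelerator.
    root = graph.get_root_subgraph()
    return [s for s in root.toposort_child_subgraph()
            if s.has_attr("device")
            and s.get_attr("device").upper() == "DPU"][0]

def worker(subgraph, n_runs):
    # Each thread owns its own runner, so inference requests from
    # different threads can overlap on the accelerator cores.
    runner = vart.Runner.create_runner(subgraph, "run")
    in_t = runner.get_input_tensors()[0]
    out_t = runner.get_output_tensors()[0]
    # Dummy int8 buffers, assuming a quantized model.
    in_buf = np.zeros(tuple(in_t.dims), dtype=np.int8)
    out_buf = np.zeros(tuple(out_t.dims), dtype=np.int8)
    for _ in range(n_runs):
        job_id = runner.execute_async([in_buf], [out_buf])
        runner.wait(job_id)

graph = xir.Graph.deserialize("model.xmodel")  # hypothetical path
sg = get_dpu_subgraph(graph)
threads = [threading.Thread(target=worker, args=(sg, 100))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Timing the `execute_async`/`wait` pairs per thread under a setup like this is one way to expose the runtime behaviors the abstract refers to, such as accelerator-core utilization and tensor-shape-dependent software latency.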