Abstract

Many real-world applications are adopting the edge computing paradigm for its low latency and stronger privacy protection. With the notable success of AI and deep learning (DL), edge devices and AI accelerators play a crucial role in deploying DL inference services at the edge of the Internet. While prior works have quantified the efficiency of various edge devices, most studies focused on edge devices running a single DL task. There is therefore a pressing need to investigate AI multi-tenancy on edge devices, which many advanced DL applications in edge computing require. This work investigates two techniques – concurrent model executions and dynamic model placements – for AI multi-tenancy on edge devices. Using image classification as an example scenario, we empirically evaluate AI multi-tenancy across various edge devices, AI accelerators, and DL frameworks to identify its benefits and limitations. Our results show that multi-tenancy significantly improves DL inference throughput, by up to 3.3×–3.8× on the Jetson TX2. These AI multi-tenancy techniques also open up new opportunities for flexibly deploying multiple DL services on edge devices and AI accelerators.
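
To make the "concurrent model executions" technique concrete, the sketch below runs several independent image-classification inferences in parallel threads, one model instance per tenant. This is a minimal illustration only, assuming TensorFlow Lite (one of several DL frameworks such an evaluation could cover) and a hypothetical model file `mobilenet_v2.tflite`; it is not the paper's actual benchmarking harness.

```python
# Minimal sketch of concurrent model executions for multi-tenant
# DL inference. Assumptions: TensorFlow Lite runtime is installed
# and "mobilenet_v2.tflite" (hypothetical) is a float image model.
from concurrent.futures import ThreadPoolExecutor
import numpy as np
import tflite_runtime.interpreter as tflite  # or: from tensorflow import lite as tflite

MODEL_PATH = "mobilenet_v2.tflite"  # hypothetical model file
NUM_TENANTS = 2                     # number of concurrent model instances

def run_inference(interpreter, image):
    """Run one image-classification inference on one interpreter."""
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], image)
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])

# Each tenant gets its own interpreter instance: TFLite interpreters
# are not thread-safe, so they must not be shared across threads.
interpreters = []
for _ in range(NUM_TENANTS):
    it = tflite.Interpreter(model_path=MODEL_PATH)
    it.allocate_tensors()
    interpreters.append(it)

# Dummy inputs shaped to match the model's first input tensor.
shape = interpreters[0].get_input_details()[0]["shape"]
images = [np.random.rand(*shape).astype(np.float32)
          for _ in range(NUM_TENANTS)]

# Dispatch the inferences concurrently; the achievable throughput
# gain depends on contention for the device's CPU/GPU/accelerator.
with ThreadPoolExecutor(max_workers=NUM_TENANTS) as pool:
    results = list(pool.map(run_inference, interpreters, images))

for i, r in enumerate(results):
    print(f"tenant {i}: top class = {int(np.argmax(r))}")
```

A dynamic model placement would extend this idea by assigning each interpreter to a different compute unit (e.g., CPU threads versus a GPU or accelerator delegate) at load time, rather than letting all tenants contend for the same one.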
