Abstract

With the advent of the Internet of Everything, the combination of AI (Artificial Intelligence) and edge computing has become a new research hotspot, giving rise to edge intelligence, which enables network edge devices to analyze data with AI algorithms. Because the edge computing environment is more complex and variable than cloud computing, building edge intelligence raises many issues, such as the lack of quantitative evaluation criteria, heterogeneous computing platforms, complex network topologies, and changing user requirements. To analyze the performance of edge intelligence workloads running on heterogeneous hardware platforms, we target machine learning workloads in edge intelligence and analyze, in terms of relative performance, the impact of algorithm model complexity, edge data characteristics, and differences among heterogeneous platforms. By analyzing machine learning workloads in edge intelligence, we find that a model's inference time and memory usage can be predicted from its amount of computation and number of parameters. Moreover, image complexity, edge data network features, and batch size all affect the performance of edge intelligence workloads. Furthermore, the upper limit of model performance on a given computing platform is bounded by its hardware resources. Finally, a platform's model performance depends on its own computing power and bandwidth.