Abstract

AI running locally on IoT Edge devices is called Edge AI. Federated Learning (FL) is a Machine Learning (ML) technique that builds upon the concept of distributed computing and preserves data privacy while still supporting trainable AI models. This paper evaluates FL in terms of practical CPU usage and training time. Additionally, the paper shows how biased IoT Edge clients affect the performance of an AI model. Existing literature on the performance of FL indicates that it is sensitive to imbalanced data distributions and does not easily converge in the presence of heterogeneous data. Furthermore, model training uses significant on-device resources, and low-power IoT devices cannot train complex ML models. This paper investigates optimal training parameters to make FL more performant and explores model compression to make FL more accessible to IoT Edge devices. First, a flexible test environment is created that can emulate clients with biased data samples. Compressed versions of the ML model are then used for FL, and each is evaluated in terms of resource usage and overall ML model performance. Our study shows an accuracy improvement of 1.16% from modifying training parameters, though a balance is needed to prevent overfitting. Model compression can reduce resource usage by 5.42% but tends to accelerate overfitting and increase model loss by 9.35%.
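
The sketch below is a minimal NumPy illustration, not the paper's actual test environment, of how biased (label-skewed) client data and a federated-averaging round could be emulated; the toy dataset, the Dirichlet partitioning parameter `alpha`, the linear softmax client model, and the float16 cast standing in for model compression are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dataset: 10 classes, 1000 samples of 20 features each.
NUM_CLASSES, NUM_SAMPLES, NUM_FEATURES = 10, 1000, 20
features = rng.normal(size=(NUM_SAMPLES, NUM_FEATURES))
labels = rng.integers(0, NUM_CLASSES, size=NUM_SAMPLES)

def biased_partition(labels, num_clients, alpha=0.3):
    """Split sample indices across clients with a Dirichlet prior.

    Smaller alpha -> more label-skewed (biased) clients, emulating the
    heterogeneous data distributions discussed in the abstract.
    """
    client_indices = [[] for _ in range(num_clients)]
    for cls in range(NUM_CLASSES):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        # Proportion of this class assigned to each client.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        splits = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, splits)):
            client_indices[client].extend(part.tolist())
    return client_indices

def local_update(weights, x, y, lr=0.1, epochs=1):
    """One client's local training step: a linear softmax classifier via SGD."""
    w = weights.copy()
    for _ in range(epochs):
        logits = x @ w
        logits -= logits.max(axis=1, keepdims=True)
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        probs[np.arange(len(y)), y] -= 1.0  # softmax cross-entropy gradient
        w -= lr * (x.T @ probs) / len(y)
    return w

def fedavg_round(global_weights, client_indices):
    """One FedAvg round: clients train locally, server averages by sample count."""
    updates, sizes = [], []
    for idx in client_indices:
        if not idx:
            continue
        updates.append(local_update(global_weights, features[idx], labels[idx]))
        sizes.append(len(idx))
    sizes = np.asarray(sizes, dtype=float)
    return np.average(np.stack(updates), axis=0, weights=sizes / sizes.sum())

clients = biased_partition(labels, num_clients=5, alpha=0.3)
global_w = np.zeros((NUM_FEATURES, NUM_CLASSES))
for round_id in range(3):
    global_w = fedavg_round(global_w, clients)

# Naive stand-in for model compression: cast aggregated weights to float16
# and compare memory footprints.
compressed_w = global_w.astype(np.float16)
print("full precision bytes:", global_w.nbytes, "compressed bytes:", compressed_w.nbytes)
```

Lowering `alpha` makes each emulated client hold fewer classes, which is one simple way to reproduce the imbalanced, heterogeneous client data the abstract refers to.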
