Abstract

Edge-based technologies have emerged as a key enabler of low-latency services and of machine learning techniques for learning and inference. However, transferring user data to the edge server for learning can violate data privacy and overburden the network. In addition, the server may receive multiple redundant inference tasks, which leads to redundant computation. In this article, we study both the communication and computation issues in edge networks, with an emphasis on data privacy in a smart home scenario. We design an architecture that incorporates federated edge learning to promote data privacy, together with a node weighting and dropping scheme that selects appropriate participating devices with quality data, thereby improving training and reducing communication cost. We further apply Long Short-Term Memory (LSTM) to predict future tasks and proactively store them locally at the edge device. We adopt the computation reuse concept to satisfy incoming tasks with little to no computation, which eliminates redundant computation and further decreases the computation cost. Simulation results based on a real-world dataset show the effectiveness and efficiency of the proposed architecture: training converges within a few iterations, while computation and communication costs are reduced by up to 80% and 70%, respectively, compared with existing schemes, all while promoting data privacy.
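To illustrate the computation reuse concept mentioned above, the following is a minimal sketch (not the paper's implementation): an edge server keys each task by a hash of its input and serves repeated tasks from a result table, so redundant tasks incur no recomputation. The class and function names here are hypothetical.

```python
import hashlib

class ReuseTable:
    """Hypothetical sketch of computation reuse at an edge server:
    cache task results keyed by a hash of the task input, so a
    redundant task is satisfied with no computation."""

    def __init__(self):
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def execute(self, task_input, compute_fn):
        # Identify the task by a content hash of its input.
        key = hashlib.sha256(task_input.encode()).hexdigest()
        if key in self.cache:
            # Reuse: the result is already stored; no computation needed.
            self.hits += 1
            return self.cache[key]
        # Miss: perform the full computation once and store the result.
        self.misses += 1
        result = compute_fn(task_input)
        self.cache[key] = result
        return result

table = ReuseTable()
table.execute("frame-001", len)  # computed on first arrival
table.execute("frame-001", len)  # redundant task: served from the table
```

In the architecture described above, predicted future tasks could be inserted into such a table proactively, so they are already "hits" when they arrive.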
