Abstract

Privacy concerns arise when a central server holds copies of users' datasets. This has driven a paradigm shift in learning networks from centralized in-cloud learning to distributed on-device learning. Benefiting from parallel computing, on-device learning networks require less bandwidth than in-cloud learning networks, and they offer further desirable properties such as privacy preservation and flexibility. However, on-device learning networks are vulnerable to malfunctioning terminals across the network. The worst-case malfunctioning terminals are Byzantine adversaries, which can perform arbitrary harmful operations to compromise the learned model based on full knowledge of the network. Hence, the design of secure learning algorithms has become an emerging topic for on-device learning networks with Byzantine adversaries. In this article, we present a comprehensive overview of the prevalent secure learning algorithms for two promising classes of on-device learning networks: federated-learning networks and decentralized-learning networks. We also review several future research directions for federated-learning and decentralized-learning networks.
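As a concrete illustration of the kind of defense such secure learning algorithms employ (a generic sketch, not a method taken from this article), the snippet below contrasts plain mean aggregation with a coordinate-wise median, a common Byzantine-robust aggregation rule. The function names and the toy worker updates are hypothetical.

```python
import numpy as np

def mean_aggregate(updates):
    """Standard (non-robust) aggregation: a single Byzantine
    update with huge magnitude can move the result arbitrarily."""
    return np.mean(updates, axis=0)

def median_aggregate(updates):
    """Coordinate-wise median: a classic Byzantine-robust rule
    that tolerates a minority of arbitrarily corrupted updates."""
    return np.median(updates, axis=0)

# Toy example (hypothetical values): four honest workers send
# gradients near [1, 1]; one Byzantine worker sends a huge vector.
honest = [np.array([1.0, 1.0]) + 0.1 * i for i in range(4)]
byzantine = [np.array([1e6, -1e6])]
updates = np.stack(honest + byzantine)

print(mean_aggregate(updates))    # pulled far off by the attacker
print(median_aggregate(updates))  # stays near the honest gradients
```

Even this simple rule shows why robust aggregation matters: the mean is dragged toward the adversary's vector, while the median remains close to the honest gradients.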
