Abstract

Empowered by advances in artificial intelligence, the traditional Internet of Things is evolving into the Artificial Intelligence of Things (AIoT), an important enabling technology for Industry 4.0. Collaborative learning is a key technology for AIoT to build machine learning (ML) models on distributed datasets. However, there are two critical concerns in collaborative learning for AIoT: privacy leakage of sensitive data and dishonest computation. Specifically, data contains sensitive information about users and cannot be openly shared for model learning. Furthermore, to protect their data privacy or for other selfish purposes, participants in collaborative learning may behave dishonestly, submitting dummy data or incorrect model computations. It is therefore important to guarantee both privacy preservation of data and honest computation in collaborative learning. Our work tackles these two concerns, enabling a model demander to securely train ML models on sensitive data and to regulate the computation of participants. To this end, we propose a secure and trusted collaborative learning framework called TrusCL. The framework guarantees privacy preservation via a careful combination of homomorphic encryption (HE) and differential privacy (DP), achieving a trade-off between efficiency and accuracy. Furthermore, in our blockchain-based design, the key steps of secure collaborative learning are recorded on the blockchain so that malicious behaviors can be effectively tracked and curbed in a timely manner, facilitating trusted computation. Experimental results validate the trade-off that TrusCL achieves between model training efficiency and trained model accuracy.
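As a rough illustration of how HE and DP can be combined for privacy-preserving aggregation of model updates, the following minimal Python sketch uses a toy Paillier key, a fixed-point encoding, and Laplace noise; these parameters and the overall flow are illustrative assumptions only, not TrusCL's actual construction.

```python
# Minimal, self-contained sketch of the HE + DP idea behind secure aggregation.
# Not TrusCL's actual construction: the toy Paillier key (demo-sized primes),
# fixed-point scale, clipping bound, and privacy budget are illustrative only.
import math
import random

# --- toy Paillier key pair (real deployments need >= 2048-bit moduli) ---
P, Q = 104723, 104729            # small demo primes
N = P * Q
N_SQ = N * N
LAM = math.lcm(P - 1, Q - 1)
MU = pow(LAM, -1, N)             # valid because we use g = n + 1

def encrypt(m: int) -> int:
    """Paillier encryption: c = (1 + m*n) * r^n mod n^2 (additively homomorphic)."""
    r = random.randrange(2, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(2, N)
    return (pow(1 + N, m, N_SQ) * pow(r, N, N_SQ)) % N_SQ

def decrypt(c: int) -> int:
    """Paillier decryption: m = L(c^lam mod n^2) * mu mod n, with L(u) = (u-1)//n."""
    u = pow(c, LAM, N_SQ)
    return ((u - 1) // N) * MU % N

def encode(x: float, scale: int = 1000) -> int:
    """Fixed-point encode a real-valued update; negatives wrap around mod n."""
    return round(x * scale) % N

def decode(m: int, scale: int = 1000) -> float:
    """Map residues in the upper half of [0, n) back to negative values."""
    return (m - N if m > N // 2 else m) / scale

def dp_noisy_update(update: float, clip: float = 1.0, epsilon: float = 1.0) -> float:
    """Clip the local update, then add Laplace noise with scale clip/epsilon."""
    clipped = max(-clip, min(clip, update))
    noise = random.choice([-1, 1]) * random.expovariate(epsilon / clip)
    return clipped + noise

# Each participant perturbs (DP) and encrypts (HE) its local update.
local_updates = [0.31, -0.12, 0.05]   # e.g. one gradient coordinate per client
ciphertexts = [encrypt(encode(dp_noisy_update(u))) for u in local_updates]

# The aggregator multiplies ciphertexts, which adds the plaintexts -- no decryption needed.
aggregate = 1
for c in ciphertexts:
    aggregate = (aggregate * c) % N_SQ

# Only the key holder (the model demander) recovers the noisy sum.
print("noisy aggregate:", decode(decrypt(aggregate)))
```

Multiplying Paillier ciphertexts adds their underlying plaintexts, so the aggregator never observes an individual update, while the Laplace noise limits what even the decrypted sum reveals about any single participant; a design in the spirit of TrusCL could additionally record such exchanges (for example, ciphertext hashes) on the blockchain so that dishonest contributions can be traced.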
