Abstract

Over the past decade, deep learning techniques have been widely adopted to build artificial intelligence applications, leading to successes in many data analysis tasks such as risk assessment, medical prediction, and face recognition. Since the effectiveness of deep learning depends heavily on the amount of available data, collecting data at large scale is essential. Because privacy and security concerns often prevent data owners from contributing sensitive data for training, researchers have proposed several techniques that provide privacy guarantees for data in machine learning systems involving multiple parties. However, these works all require frequent interaction among data owners during training and therefore impose a high communication cost on them. To this end, in this article we propose a new server-aided framework called non-interactive privacy-preserving multi-party machine learning (NPMML), which supports secure machine learning tasks without the participation of data owners. NPMML significantly reduces data owners' communication overhead in multi-party machine learning. Moreover, we design a concrete construction for multi-layer neural networks based on NPMML. Finally, we evaluate the performance of NPMML with a prototype implementation. The experimental results demonstrate that NPMML is communication-efficient for data owners.
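
The abstract does not detail NPMML's construction, but the non-interactive, server-aided setting it describes is commonly realized by having each data owner make a single upload of protected data to untrusted servers and then go offline. The sketch below illustrates that general idea using additive secret sharing between two non-colluding servers; the field modulus, function names, and the use of two servers are illustrative assumptions, not the paper's actual protocol.

```python
# Minimal sketch (assumption, not the paper's construction): a data owner
# splits each integer-encoded feature into additive secret shares modulo a
# prime and sends one share to each of two non-colluding servers. After this
# single upload the owner can go offline, which captures the general idea of
# a non-interactive, server-aided framework.
import secrets

PRIME = 2**61 - 1  # illustrative field modulus for additive secret sharing


def share(value: int) -> tuple[int, int]:
    """Split an integer into two additive shares: value = s0 + s1 (mod PRIME)."""
    s0 = secrets.randbelow(PRIME)
    s1 = (value - s0) % PRIME
    return s0, s1


def reconstruct(s0: int, s1: int) -> int:
    """Recombine the two shares (shown only for testing; the servers never do this)."""
    return (s0 + s1) % PRIME


if __name__ == "__main__":
    feature = 42  # a single integer-encoded feature from the owner's data
    share_for_server0, share_for_server1 = share(feature)
    # Each server alone learns nothing about `feature`; the owner needs no
    # further interaction once the two shares have been uploaded.
    assert reconstruct(share_for_server0, share_for_server1) == feature
```

In such a design the communication cost for a data owner is a one-time upload proportional to the size of its data, independent of the number of training iterations, which is consistent with the communication efficiency claimed for NPMML.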
