Abstract

With the advent of artificial intelligence, machine learning has been well explored and extensively applied in numerous fields, such as pattern recognition, image processing and cloud computing. Very recently, machine learning hosted in a cloud service has gained increasing attention due to the benefits of the outsourcing paradigm. Based on cloud-aided computation techniques, the heavy computation tasks involved in the machine learning process can be offloaded to the cloud server in a pay-per-use manner, whereas outsourcing a large-scale collection of sensitive data risks privacy leakage since the cloud server is semi-honest. Therefore, privacy preservation for the client and verification of the returned results become two challenges to be dealt with. In this paper, we focus on designing a novel privacy-preserving single-layer perceptron (SLP) training scheme that supports training on batches of patterns and client-side verification of the training results. In addition, adopting a classical secure two-party computation method, we design a novel lightweight privacy-preserving predictive algorithm. Neither participant learns anything about the other's inputs, and the calculation result is known only to one party. Detailed security analysis shows that the proposed scheme achieves the desired security properties. We also demonstrate the efficiency of our scheme through an experimental evaluation on two different real datasets.
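For reference, the block below is a minimal plaintext sketch of the mini-batch single-layer perceptron training that such a scheme outsources and protects; the sign activation, learning rate, batch size, bias handling and function names are illustrative assumptions rather than the paper's exact parameterization.

    import numpy as np

    def train_slp_minibatch(X, y, epochs=10, batch_size=32, lr=0.1, seed=0):
        # X: (n_samples, n_features) feature matrix; y: labels in {-1, +1}.
        # Returns a weight vector with the bias folded in as the last component.
        rng = np.random.default_rng(seed)
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append a bias column
        w = np.zeros(Xb.shape[1])
        for _ in range(epochs):
            order = rng.permutation(len(y))
            for start in range(0, len(y), batch_size):
                batch = order[start:start + batch_size]
                preds = np.sign(Xb[batch] @ w)
                mis = preds != y[batch]  # misclassified samples in this mini-batch
                if mis.any():
                    # accumulate the perceptron update over the misclassified samples
                    w += lr * (y[batch][mis] @ Xb[batch][mis])
        return w

    def predict_slp(X, w):
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])
        return np.sign(Xb @ w)

    # Toy usage on synthetic, roughly linearly separable data.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 5))
    y = np.sign(X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1)
    w = train_slp_minibatch(X, y)
    print("training accuracy:", np.mean(predict_slp(X, w) == y))

In the paper's setting, the batches would be protected (for example, encrypted or blinded) before being uploaded, and the cloud would carry out the heavy matrix operations on the protected data rather than on the plaintext shown here.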

Highlights

  • According to the report, the quantity of available data generated will exceed 15 zettabytes by 2020, compared with 0.9 zettabytes in 2013 (Adshead 2014)

  • Machine learning has been extensively applied in a wide range of research fields (Chang et al. 2017a, b; Chang and Yang 2017; Chang et al. 2017), such as spam classification (Yu and Xu 2008), disease diagnosis (Fakoor et al. 2013), and credit-risk assessment (Yu et al. 2008)

  • The client should have the ability to check the validity of the returned result, which is a necessity in the cloud-based single-layer perceptron (SLP) training process


Summary

Introduction

According to a recent report, the quantity of available data generated will exceed 15 zettabytes by 2020, compared with 0.9 zettabytes in 2013 (Adshead 2014). Due to limited local storage and computing resources, the cloud-based machine learning paradigm is becoming a newly developing research area. Cloud computing makes it possible to view computing as a kind of resource (Chen and Zhong 2009; Chen et al. 2016, 2015a, b, 2014a, b). For reasons such as hardware failures, software bugs, or even malicious attacks, the cloud server may return an invalid result that is computationally indistinguishable from a correct one. In this case, the client should have the ability to check the validity of the returned result, which is a necessity in the cloud-based SLP training process. Each medical sample is encrypted before being uploaded to the cloud server, which costs O(n^3) on the hospital (client) side. It is therefore urgent and necessary to design an efficient and secure SLP training scheme that satisfies the aforementioned requirements.
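To make the verification requirement concrete, the sketch below shows one standard way a client can cheaply check an outsourced matrix product, namely Freivalds-style probabilistic verification; it is an illustrative example under assumed parameters, not necessarily the exact verification mechanism of the proposed scheme.

    import numpy as np

    def freivalds_verify(A, B, C, rounds=20, seed=None):
        # Probabilistically check whether C equals A @ B without recomputing the product.
        # Each round costs O(n^2) matrix-vector work instead of the O(n^3) multiplication;
        # a wrong C survives all rounds with probability at most 2**(-rounds).
        rng = np.random.default_rng(seed)
        for _ in range(rounds):
            r = rng.integers(0, 2, size=B.shape[1]).astype(float)
            if not np.allclose(A @ (B @ r), C @ r):
                return False  # inconsistency detected: reject the returned result
        return True

    # Example: the client checks a product claimed by an untrusted server.
    rng = np.random.default_rng(0)
    A = rng.random((300, 300))
    B = rng.random((300, 300))
    C_honest = A @ B
    C_forged = C_honest.copy()
    C_forged[0, 0] += 1.0  # a single tampered entry
    print(freivalds_verify(A, B, C_honest, seed=1))  # True
    print(freivalds_verify(A, B, C_forged, seed=1))  # False, except with probability 2**(-20)

The point of such a check is the cost asymmetry: the client spends only matrix-vector time to gain high confidence in a result whose honest recomputation would cost O(n^3).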

Contributions
Related work
Preliminaries
Mini-batch SLP training algorithm
Privacy-preserving method for outsourcing matrix multiplication
Secure dot-product protocol
System model
Security model
High-level description
Verifiable privacy-preserving SLP training scheme
Correctness
Security and efficiency analysis
Efficiency analysis
Performance evaluation
Compliance with ethical standards