Abstract

With the vigorous development and gradual maturity of machine learning (ML) technologies, AI-assisted disease diagnosis and prediction ( <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$\mathcal {AADP}$ </tex-math></inline-formula> ) systems have been extensively studied and can be expected to be widely deployed in the real world. However, as the scale of ML data grows exponentially, training and applying ML models imposes a heavy burden on resource-constrained terminals. Designing cloud/edge server-aided <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$\mathcal {AADP}$ </tex-math></inline-formula> protocols has therefore become a popular topic. Nevertheless, the sensitivity of ML data, the intellectual property embedded in ML models, and the uncontrollability of servers pose serious security challenges to this promising computing paradigm. In this article, we introduce a new four-party framework for the <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$\mathcal {AADP}$ </tex-math></inline-formula> system, consisting of users, a third-party test institution, an AI doctor, and a cloud/edge server. Within this framework, we design two efficient and secure outsourcing <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$\mathcal {AADP}$ </tex-math></inline-formula> protocols under two different security models. 
By jointly employing secure hash functions, Householder transformations, and random permutations, we achieve the following design objectives: 1) the user's actual identity is invisible to the other parties; 2) the user's feature vector is blinded to the AI doctor and the server; 3) the AI doctor's ML model is kept confidential from the server; 4) the AI doctor obtains substantial computational savings compared with performing the diagnosis task by itself; and 5) under the security model with a fully malicious server, the AI doctor can detect the server's misbehavior with a nonnegligible probability. We support these claims with rigorous theoretical proofs and corroborate them with extensive experimental analysis.
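The core idea behind objectives 2) and 3) can be illustrated with a minimal sketch. The following is not the paper's actual protocol, only an illustration of the general Householder-blinding principle it builds on: a Householder matrix <tex-math notation="LaTeX">$H = I - 2vv^{\mathsf T}/(v^{\mathsf T}v)$</tex-math> is orthogonal, so applying it to both the user's feature vector and the model's weight vector hides their entries from the server while leaving their inner product (and hence a linear diagnosis score) unchanged. All variable names below are illustrative.

```python
import numpy as np

def householder(v):
    """Build the orthogonal Householder matrix H = I - 2*v*v^T / (v^T v)."""
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v)

rng = np.random.default_rng(42)
d = 5
x = rng.standard_normal(d)   # user's feature vector (sensitive)
w = rng.standard_normal(d)   # AI doctor's linear model weights (proprietary)

# Secret blinding key, known only to the blinding party.
H = householder(rng.standard_normal(d))
x_blind = H @ x              # what the server receives instead of x
w_blind = H @ w              # what the server receives instead of w

# The server computes on blinded data; orthogonality of H guarantees
# <Hx, Hw> = <x, w>, so the result equals the true score.
server_score = x_blind @ w_blind
true_score = x @ w
print(np.isclose(server_score, true_score))
```

Because H is orthogonal (H @ H.T is the identity), the blinded computation is exact rather than approximate; the actual protocols additionally combine this with hashing and random permutations to meet the remaining objectives.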
