Abstract

Deep learning is an important branch of artificial intelligence, but its first step, collecting data, poses a serious threat to user privacy. Existing dual-server privacy-preserving deep learning schemes operate under the assumption that the two semi-honest servers never collude, a security assumption that may be too strong in practice. This paper proposes a privacy-preserving multiparty deep learning scheme based on homomorphic proxy re-encryption that resists collusion between the semi-honest servers. A fog node with fast response and low latency is introduced as the proxy, and a one-way homomorphic proxy re-encryption scheme converts user-side ciphertexts into server-side ciphertexts, reducing the risk of privacy leakage from collusion between the two servers or between participants and servers. To keep the number of interaction rounds from growing with the number of participants, a multi-party random-number aggregation method based on verifiable secret sharing is proposed, which keeps sensitive data undisclosed while improving the accuracy of the global model. Theoretical analysis and experimental evaluation both demonstrate that the scheme supports multiple keys, resists collusion attacks, and achieves higher accuracy.
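To illustrate the idea behind secret sharing-based aggregation of random numbers, the following is a minimal Python sketch, assuming plain Shamir secret sharing over a prime field. It shows only the aggregation property (the sum of masks is reconstructed in one round without revealing any individual mask); the commitments that make the sharing verifiable, and the paper's exact construction, are not reproduced here.

```python
import random

PRIME = 2**127 - 1  # illustrative field modulus (a Mersenne prime)

def share_secret(secret, n, t):
    """Split `secret` into n Shamir shares with threshold t."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, k, PRIME) for k, c in enumerate(coeffs)) % PRIME)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 to recover the shared value."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

# Each participant shares its random mask once; the share holders add shares
# pointwise, so the aggregated mask is recovered in a single reconstruction,
# independent of the number of participants, without exposing any single mask.
masks = [random.randrange(10**6) for _ in range(5)]          # participants' random numbers
all_shares = [share_secret(m, n=3, t=3) for m in masks]      # one sharing per participant
aggregated = [(x, sum(s[k][1] for s in all_shares) % PRIME)  # pointwise share addition
              for k, (x, _) in enumerate(all_shares[0])]
assert reconstruct(aggregated) == sum(masks) % PRIME
```

The key property used is the linearity of the sharing: adding shares locally yields a valid sharing of the sum, so only one reconstruction is needed regardless of how many participants contribute masks.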
