Abstract

Inference with deep learning models is usually performed by sending sensitive user data to models outside the user's control, which gives rise to acute privacy concerns. To address these concerns, Dong et al. recently proposed an approach, namely the dropping-activation-outputs (DAO) first layer. This approach was claimed to be a non-invertible transformation, so that the privacy of user data could not be compromised. However, in this paper, we prove that the DAO first layer can, in fact, generally be inverted and hence fails to preserve privacy. We also provide a countermeasure against the privacy vulnerabilities that we identify.
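
To illustrate the kind of inversion at issue, the sketch below shows how the input to a dense first layer with ReLU activations can often be recovered by least squares even after a random subset of the layer's outputs is dropped, provided enough positive outputs survive. This is a minimal illustration under stated assumptions, not the authors' DAO construction or their attack: the layer sizes, the drop rate, and all names (W, b, keep, x_hat) are chosen for the example only.

```python
# Minimal sketch (illustrative assumptions, not the DAO scheme itself):
# a first layer y = ReLU(W x + b) whose weights are known to the server.
# Even if a random subset of outputs is dropped, the retained positive
# outputs are linear in x, so least squares can recover the input when
# enough of them survive.
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out = 64, 512                      # wide first layer (assumed sizes)
W = rng.standard_normal((d_out, d_in))
b = rng.standard_normal(d_out)
x_true = rng.standard_normal(d_in)         # "sensitive" user input

y = np.maximum(W @ x_true + b, 0.0)        # first-layer activations

# Drop a random half of the activation outputs (sketch of the DAO idea).
keep = rng.random(d_out) > 0.5
kept_idx = np.flatnonzero(keep)

# Attack sketch: among the kept outputs, the strictly positive ones obey
# W[i] @ x + b[i] = y[i] exactly, so solve the resulting linear system.
pos = kept_idx[y[kept_idx] > 0.0]
x_hat, *_ = np.linalg.lstsq(W[pos], y[pos] - b[pos], rcond=None)

print("reconstruction error:", np.linalg.norm(x_hat - x_true))
```

With roughly 128 positive retained outputs against 64 unknowns in this toy setting, the system is overdetermined and the reconstruction error is near machine precision, which is the intuition behind the invertibility claim above.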
