Abstract

Transparency has become a critical need in machine learning (ML) applications. Designing transparent ML models helps increase trust, ensure accountability, and enable scrutiny of fairness. However, some organizations may opt out of transparency to protect individuals' privacy. There is therefore a strong demand for transparent ML models that account for both privacy and security risks. Such models can motivate organizations to improve their credibility by making the ML-based decision-making process comprehensible to end users. Differential privacy (DP) provides an important technique for disclosing information while protecting individual privacy. However, it has been shown that DP alone cannot prevent certain types of privacy attacks against disclosed ML models. DP with low ϵ values can provide strong privacy guarantees but may result in significantly less accurate ML models, whereas setting the ϵ value too high may leave the model open to successful privacy attacks. This raises the question of whether we can disclose accurate, transparent ML models while preserving privacy. In this paper, we introduce a novel technique that complements DP to ensure model transparency and accuracy while remaining robust against model inversion attacks. We show that combining the proposed technique with DP yields highly transparent and accurate ML models while preserving privacy against model inversion attacks.
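
For context, a standard textbook formulation of ϵ-differential privacy (not drawn from this paper's own notation) makes the role of ϵ explicit: a randomized mechanism M satisfies ϵ-DP if, for every pair of datasets D and D′ differing in a single record and every set of outputs S,

  Pr[M(D) ∈ S] ≤ e^ϵ · Pr[M(D′) ∈ S].

A smaller ϵ forces the output distributions on neighboring datasets to be nearly indistinguishable (stronger privacy, typically achieved by adding more noise and hence losing accuracy), while a larger ϵ relaxes the bound in favor of accuracy, which is the trade-off the abstract describes.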
