Abstract

In this paper, we introduce DP-EBM*, a version of the Differentially Private Explainable Boosting Machine (DP-EBM) with improved utility. DP-EBM* handles both classification and regression tasks, providing inherent explanations for its predictions while protecting sensitive individual information via Differential Privacy. DP-EBM* offers two major improvements over DP-EBM. First, we develop an error measure that assesses how efficiently the privacy budget is spent, a crucial factor for accuracy, and we optimize this measure. Second, we propose a feature pruning method that eliminates less important features during training. Our experimental results demonstrate that DP-EBM* outperforms state-of-the-art differentially private explainable models.
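To make the two ideas summarized above more concrete, the sketch below illustrates, under stated assumptions, how an EBM-style additive model can be trained with a per-update privacy budget and how low-contribution features might be pruned so the remaining budget is concentrated on informative ones. The function names, the Laplace noise mechanism, and the pruning criterion are illustrative assumptions for exposition only, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)


def noisy_histogram_update(residual_sums, epsilon, sensitivity=1.0):
    """Add Laplace noise to per-bin residual sums.

    This is the generic DP building block assumed here for histogram-based
    boosting updates; the paper's exact mechanism and budget split may differ.
    """
    noise = rng.laplace(scale=sensitivity / epsilon, size=residual_sums.shape)
    return residual_sums + noise


def prune_low_signal_features(shape_functions, threshold):
    """Drop features whose learned contribution range is negligible.

    Illustrative criterion (peak-to-peak range of the binned shape function);
    pruning frees the remaining privacy budget for the surviving features.
    """
    return {j: f for j, f in shape_functions.items() if np.ptp(f) > threshold}


def predict_additive(x_binned, shape_functions, intercept=0.0):
    """EBM-style prediction: intercept plus one contribution per feature.

    The additive structure is what makes the model inherently explainable:
    each feature's contribution can be read off directly.
    """
    return intercept + sum(
        shape_functions[j][b] for j, b in x_binned.items() if j in shape_functions
    )
```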
