Abstract

In the last decade, it has become increasingly apparent that technical metrics such as accuracy, sustainability, and non-regressiveness cannot fully characterize the behavior of intelligent systems. Indeed, such systems are now also expected to meet ethical requirements such as explainability, fairness, robustness, and privacy, which increase our trust in their use in the wild. Technical and ethical metrics are, of course, often in tension with one another, but the ultimate goal is to develop a new generation of more responsible and trustworthy machine learning. In this paper, we focus on machine learning algorithms and their associated predictive models, asking for the first time, from a theoretical perspective, whether it is possible to simultaneously guarantee their performance in terms of both technical and ethical metrics, moving toward machine learning algorithms that we can trust. In particular, we investigate, for the first time, both the theory and the practice of deterministic and randomized algorithms and their associated predictive models, showing the advantages and disadvantages of the different approaches. For this purpose, we leverage the most recent advances in statistical learning theory: Complexity-Based Methods, Distribution Stability, PAC-Bayes, and Differential Privacy. Our results show that it is possible to develop consistent algorithms that generate predictive models with guarantees on multiple trustworthiness metrics.
