Abstract
Given the complexity of the application domain, the mixed qualitative and quantitative nature of the concepts involved, the wide heterogeneity and granularity of trustworthiness attributes, and, in some cases, the non-comparability of those attributes, assessing the trustworthiness of AI-based systems is a challenging process. To overcome these challenges, the Confiance.ai program proposes an innovative solution based on a Multi-Criteria Decision Aiding (MCDA) methodology. This approach involves several stages: framing trustworthiness as a set of well-defined attributes, exploring those attributes to determine related Key Performance Indicators (KPIs) or metrics, selecting evaluation protocols, and defining a method to aggregate multiple criteria into an overall assessment of trust. The approach is illustrated by applying the RUM methodology (Robustness, Uncertainty, Monitoring) to a machine learning (ML) context, while the aggregation method is based on Tropical Algebra.
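To make the aggregation step concrete, the following is a minimal sketch of tropical-algebra aggregation of per-attribute trust scores. In the tropical (min-plus) semiring, "addition" is the minimum and "multiplication" is ordinary addition, so a weighted aggregation discounts each criterion by an additive weight and keeps the worst case; the max-plus dual keeps the best. The attribute scores, weights, and function names below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of tropical (min-plus / max-plus) aggregation
# for MCDA trust scores. Scores and weights are illustrative only.

def tropical_weighted_min(scores, weights):
    """Min-plus aggregation: each score is 'multiplied' by its weight
    (tropical multiplication = ordinary addition), then all criteria are
    combined with tropical addition (= min), so the worst weighted
    attribute determines the overall assessment."""
    return min(s + w for s, w in zip(scores, weights))

def tropical_weighted_max(scores, weights):
    """Max-plus dual: the most favourable weighted attribute dominates."""
    return max(s + w for s, w in zip(scores, weights))

# Illustrative KPI scores for Robustness, Uncertainty, Monitoring (RUM)
scores = [0.7, 0.5, 0.9]
weights = [0.0, 0.1, 0.2]  # additive tropical weights (assumed values)

pessimistic = tropical_weighted_min(scores, weights)  # worst-case trust
optimistic = tropical_weighted_max(scores, weights)   # best-case trust
```

The min-plus form matches the intuition that trust in a system is bounded by its weakest attribute, which is one reason tropical operators are attractive for aggregating non-compensatory criteria.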