Abstract

Although technologies based on Artificial Intelligence (AI) are fairly mature nowadays, their adoption, deployment and application are not as widespread as might be expected. This can be attributed to many barriers, among which users' lack of trust stands out. Accountability is a relevant factor for progress on this trustworthiness front, as it makes it possible to determine the causes behind a given decision or suggestion made by an AI system. This article focuses on the accountability of a specific branch of AI, statistical machine learning (ML), through a semantic approach. FIDES, an ontology-based approach towards achieving the accountability of ML systems, is presented, in which all the relevant information about an ML-based model is semantically annotated, from the dataset and model parametrisation to deployment aspects, so that it can later be exploited to answer questions related to reproducibility, replicability and, ultimately, accountability. The feasibility of the proposed approach has been demonstrated in two real-world scenarios, energy efficiency and manufacturing, and it is expected to pave the way towards raising awareness of the potential of Semantic Technologies for the different factors that may be key to the trustworthiness of AI-based systems.
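
To make the idea of semantically annotating an ML model more concrete, the following minimal sketch (in Python with rdflib) shows what such annotations and their later exploitation could look like. The fides: vocabulary used here (MLModel, trainedOn, hyperparameter, deployedOn), the namespace URI and the resource identifiers are purely hypothetical illustrations and are not taken from the published FIDES ontology.

    from rdflib import Graph, Namespace, Literal, URIRef
    from rdflib.namespace import RDF, XSD

    # Hypothetical namespace and terms, for illustration only.
    FIDES = Namespace("http://example.org/fides#")

    g = Graph()
    g.bind("fides", FIDES)

    model = URIRef("http://example.org/models/energy-forecast-v1")
    dataset = URIRef("http://example.org/datasets/building-consumption")

    # Annotate the model: its type, training dataset, parametrisation and deployment date.
    g.add((model, RDF.type, FIDES.MLModel))
    g.add((model, FIDES.trainedOn, dataset))
    g.add((model, FIDES.hyperparameter, Literal("n_estimators=200")))
    g.add((model, FIDES.deployedOn, Literal("2022-03-01", datatype=XSD.date)))

    # Later, a SPARQL query over the annotations can trace a deployed model
    # back to the dataset it was trained on, supporting accountability questions.
    for row in g.query("""
        PREFIX fides: <http://example.org/fides#>
        SELECT ?dataset WHERE { ?model a fides:MLModel ; fides:trainedOn ?dataset . }
    """):
        print(row.dataset)

In such an approach, the same graph can be queried for any recorded aspect (dataset provenance, hyperparameters, deployment details), which is what enables answering reproducibility and accountability questions after the fact.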
