Abstract

Artificially intelligent (AI) systems have become part of society. They support decision-making processes and make decisions autonomously. Because they can analyze very large data sets, they can outperform humans in certain respects, and they will have a major impact on people's lives. Tensions with societal norms, legal norms, or constitutional values have already arisen and can be observed in scenarios ranging from credit scoring and autonomous driving to social profiling and predictive policing. The lack of transparency of AI systems, often described as "black boxes," has been identified as a problem and remains a weak spot for those who develop or use them. This paper demystifies AI by discussing the role of transparency and explainability in machine learning. It describes different aspects of transparency and explains methods to increase our understanding of the behavior of AI systems.

