Abstract

Artificially intelligent (AI) systems have become part of society. They support decision-making processes and make decisions autonomously. Because they can analyze very large data sets, they can outperform humans in certain respects, and they will have a major impact on people's lives. Tensions with societal norms, legal norms, and constitutional values have already arisen. They can be observed in scenarios ranging from credit scoring and autonomous driving to social profiling and predictive policing. The lack of transparency of AI systems, often regarded as "black boxes," has already been identified as a problem and remains a weak spot for those who develop or use such systems. This paper demystifies AI by discussing the role of transparency and explainability in machine learning. It describes different aspects of transparency and explains methods for increasing our understanding of the behavior of AI systems.
